Influence Maximization with ε-Almost Submodular Threshold Functions
Qiang Li¹², Wei Chen³, Xiaoming Sun¹², Jialin Zhang¹²
¹ CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences
² University of Chinese Academy of Sciences
³ Microsoft Research
{liqiang01,sunxiaoming,zhangjialin}@ict.ac.cn
[email protected]
Abstract
Influence maximization is the problem of selecting k nodes in a social network to maximize their influence spread. The problem has been extensively studied, but most works focus on submodular influence diffusion models. In this paper, motivated by empirical evidence, we explore influence maximization in the non-submodular regime. In particular, we study the general threshold model in which a fraction of nodes have non-submodular threshold functions, but their threshold functions are closely upper- and lower-bounded by some submodular functions (we call them ε-almost submodular). We first show a strong hardness result: there is no 1/n^{γ/c} approximation for influence maximization (unless P = NP) for all networks with up to n^γ ε-almost submodular nodes, where γ is in (0, 1) and c is a parameter depending on ε. This indicates that influence maximization is still hard to approximate even though the threshold functions are close to submodular. We then provide (1 − ε)^ℓ (1 − 1/e) approximation algorithms when the number of ε-almost submodular nodes is ℓ. Finally, we conduct experiments on a number of real-world datasets, and the results demonstrate that our approximation algorithms outperform other baseline algorithms.
1 Introduction
Influence maximization, proposed by Kempe, Kleinberg, and Tardos [1], considers the problem of selecting k seed nodes in a social network so as to maximize the spread of influence under a predefined diffusion model. This problem has many applications, including viral marketing [2, 3], media advertising [4] and rumor spreading [5], and many aspects of it have been extensively studied.
Most existing algorithms for influence maximization, typically under the independent cascade (IC) model and the linear threshold (LT) model [1], utilize the submodularity of the influence spread as a set function of the seed set, because it permits a (1 − 1/e)-approximation by the greedy scheme [1, 6, 7], following the foundational work on submodular function maximization [8]. One important result concerning submodularity in the influence model is by Mossel and Roch [9], who prove that in the general threshold model, the global influence spread function is submodular when the local threshold functions at all nodes are submodular. This result implies that "local" submodularity ensures the submodularity of the "global" influence spread.
Although influence maximization under submodular diffusion models dominates the research literature, non-submodularity of the influence spread function has been observed in real networks. Backstrom et al. [10] study communities in two networks, LiveJournal and DBLP, and plot the propensity of a person to join a community against the number of his friends already in that community. The curve is concave overall, except for a drop observed at the first two nodes. Yang et al. [11] track emotion contagion on Flickr and find that the probability that an individual becomes happy is superlinear in the number of his happy friends with higher PageRank scores. These are all instances of non-submodular influence spread functions.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Influence maximization under many non-submodular diffusion models is provably hard to approximate. For example, in the diffusion of rumors, innovations, or riot behaviors, an individual in a social network is activated only when the number of her neighbors already adopting the behavior exceeds her threshold. It has been shown that the influence maximization problem based on this fixed threshold model cannot be approximated within a ratio of n^{1−ε} for any ε > 0 [1]. Meanwhile, Chen [12] proves that the seed minimization problem, which is to activate the whole network with a minimum-size seed set, is also inapproximable, in particular within a ratio of O(2^{log^{1−ε} n}).
In this paper we make the first attempt at influence maximization under non-submodular diffusion models. We study the general threshold model in which a fraction of nodes have non-submodular threshold functions, but their threshold functions are closely upper- and lower-bounded by some submodular functions (we call them ε-almost submodular). Such a model bears conceptual similarity to the empirical findings in [10, 11]: both studies show that the influence curve is only slightly non-concave, and Yang et al. [11] further show that different roles have different curves: some are submodular while others are not, and ordinary users usually behave close to submodularly while opinion leaders may not. We first show a strong hardness result: there is no 1/n^{γ/c} approximation for influence maximization (unless P = NP) for all networks with up to n^γ ε-almost submodular nodes, where γ ∈ (0, 1) and c is a parameter depending on ε. On the other hand, we propose constant-factor approximation algorithms for networks where the number of ε-almost submodular nodes is a small constant. The positive results imply that the non-submodular problem can be partly solved as long as there are only a few non-submodular nodes and their threshold functions are not far from submodular. Finally, we conduct experiments on real datasets to empirically verify our algorithms; the results show that our approximation algorithms outperform other baseline algorithms.
Related Work. Influence maximization has been well studied over the past years [13, 6, 7, 14, 15]. In particular, Leskovec et al. [6] propose a lazy-forward optimization that avoids unnecessary computation of expected spread sizes. Chen et al. [7, 14] propose scalable heuristic algorithms that handle networks with millions of edges. Based on the Reverse Reachable Set technique, Borgs et al. [16] reduce the running time of greedy algorithms to near-linear under the IC model [1]. Tang et al. [17] implement the near-linear algorithm and process a Twitter network with millions of edges. Subsequently, Tang et al. [18] and Nguyen et al. [19] further improve the efficiency of these algorithms. These works all utilize submodularity to accelerate approximation algorithms.
Seed minimization, the dual problem of influence maximization, is to find a small seed set such that the expected influence coverage exceeds a desired threshold. Chen [12] provides strong negative results for the seed minimization problem under the fixed threshold model, a special case of the general threshold model whose threshold functions have breaking points. Goyal et al. [20] propose a greedy algorithm with a bicriteria approximation. Recently, Zhang et al. [21] study a probabilistic variant of the seed minimization problem.
Due to the limitations of the independent cascade and linear threshold models, the general threshold model has been proposed [1, 9]. Not much is known about the general threshold model other than that it is NP-hard to approximate [1]. One special case that has received much attention is k-complex contagion, where a node becomes active if at least k of its neighbours have been activated [22, 23, 24]. Gao et al. [25] take the k-complex contagion model one step further by letting the threshold come from a probability distribution.
Optimization of non-submodular functions is another interesting direction. Du et al. [26] introduce two techniques, restricted submodularity and shifted submodularity, to analyze greedy approximations of non-submodular functions. Recently, Horel et al. [27] study the problem of maximizing a set function that is very close to submodular. They assume that function values can be obtained from an oracle and focus on its query complexity. In our study, the local threshold functions are close to submodular, and our goal is to study the effect on the global influence spread function, which is the result of the complex cascading behavior derived from the local threshold functions.
2 Preliminaries
For a set function f : 2^V → R, we say that it is monotone if f(S) ≤ f(T) for all S ⊆ T ⊆ V; we say that it is submodular if f(S ∪ {v}) − f(S) ≥ f(T ∪ {v}) − f(T) for all S ⊆ T ⊆ V and v ∈ V \ T. For a directed graph G = (V, E), we use N^in(v) to denote the in-neighbors of v in G.
We now formally define the general threshold model used in the paper.
Definition 1 (General Threshold Model [1]). In the general threshold model, for a social graph G = (V, E), every node v ∈ V has a threshold function f_v : 2^{N^in(v)} → [0, 1]. The function f_v(·) should be monotone and f_v(∅) = 0. Initially at time 0, each node v ∈ V is in the inactive state and chooses θ_v uniformly at random from the interval [0, 1]. A seed set S_0 is also selected, and their states are set to active. Afterwards, the influence propagates in discrete time steps. At time step t ≥ 1, node v becomes active if f_v(S_{t−1} ∩ N^in(v)) ≥ θ_v, where S_{t−1} is the set of active nodes by time step t − 1. The process ends when no new node becomes active in a step.
The general threshold model is one of the most important models in the influence maximization problem. We usually focus on two properties of the threshold function: submodularity and supermodularity. Submodularity can be understood as diminishing marginal returns when adding more nodes to the seed set; in contrast, supermodularity means increasing marginal returns. Given a seed set S, let σ(S) denote the expected number of activated nodes after the influence propagation process terminates. Submodularity is the key property that guarantees the performance of greedy algorithms [9]. In this paper, we study influence maximization with nearly submodular threshold functions, namely ε-almost submodular functions, or ε-AS for short.
Definition 2 (ε-Almost Submodular (ε-AS)). A set function f : 2^V → R is ε-almost submodular if there exists a submodular function f^sub defined on 2^V such that for any subset S ⊆ V, f^sub(S) ≥ f(S) ≥ (1 − ε)f^sub(S). Here ε is a small positive number.
The definition of ε-almost submodular here is equivalent to the "approximate submodularity" defined in [27]. For an ε-almost submodular threshold function f_v, define its upper and lower submodular bounds as f̄_v and f̲_v. Hence by definition, we have f̲_v = (1 − ε)f̄_v.
Given the definition of ε-almost submodular functions, we then model the almost submodular graph. In this paper, we consider the influence maximization problem on this kind of graph.
Definition 3 ((γ, ε)-Almost Submodular Graph). Given fixed parameters γ, ε ∈ [0, 1], we say that a graph with n (n = |V|) nodes is a (γ, ε)-almost submodular graph (under the general threshold model) if there are at most n^γ nodes in the graph with ε-almost submodular threshold functions, while all other nodes have submodular threshold functions.
Definition 4 (ε-ASIM). Given a graph containing ε-almost submodular nodes and an input k, the Influence Maximization problem on graphs with ε-Almost Submodular nodes (ε-ASIM) is the problem of finding k seed nodes such that the influence spread invoked by the k nodes is maximized.
3 Inapproximability of ε-ASIM
In this section we show that it is in general hard to approximate the influence maximization problem even if only a sublinear number of nodes have ε-almost submodular threshold functions. The main reason is that even a small number of nodes with ε-almost submodular threshold functions f_v(·) can push the global influence spread function far from submodularity, making the maximization problem very difficult. The theorem below states our hardness result.
Theorem 1. For any small ε > 0 and any γ ∈ (0, 1), there is no 1/n^{γ/c}-approximation influence maximization algorithm for all (γ, ε)-almost submodular graphs, where c = 3 + 3/log(2/(2 − ε)), unless P = NP.
We first construct a probabilistic-AND gate gadget by amplifying the non-submodularity through a binary tree. Then we prove the lower bound on the approximation ratio by a reduction from the set cover problem. Due to page limits, we only sketch the main technique; the full proof can be found in the supplementary material.
Here we construct a basic gadget with inputs s_1, s_2 and output t (see Figure 1a). We assume that node t has two in-neighbours s_1, s_2 and that the threshold function g(·) of t is ε-almost submodular: g(S) = |S|/2 when |S| = 0 or 2, and g(S) = (1 − ε)/2 when |S| = 1.
[Figure 1: Diagrams of gadgets. (a) The basic gadget with inputs s_1, s_2 and output t. (b) The tree gadget T_ε: a full binary tree of depth d rooted at t, with s_1, s_2 pointing to every leaf.]
Let P_a(v) denote the activation probability of node v in this case. This simple gadget is obviously far from an AND gate, so our next step is to construct a more complex gadget with input nodes s_1, s_2. We hope that the output node t is activated only when both s_1 and s_2 are active, and that if only one of s_1, s_2 is active, the probability that t becomes active is close to 0. We call this a probabilistic-AND gate. The main idea is to amplify the gap between submodularity and non-submodularity through a binary tree (Figure 1b). In this gadget T_ε, node t is the root of a full binary tree and each node holds a directed edge to its parent. Both s_1 and s_2 hold directed edges to every leaf node of the tree. The threshold function of each node in the tree is the g(·) defined above, where ε is the index of the gadget T_ε. The depth of the tree is a parameter d, to be determined later. We use v_i to denote a node at depth i (t is at depth 1). Obviously, P_a(t) = 1 if both s_1 and s_2 are activated, and P_a(t) = 0 if neither is. Thus we would like to prove that, when only one of s_1, s_2 is activated, the activation probability shrinks for nodes closer to the root.
Lemma 2. For the gadget T_ε with depth d, the probability of activating the output node t is less than ((2 − ε)/2)^d when only one of s_1, s_2 is activated.
Proof. In this case, for a leaf node v_d, we have P_a(v_d) = (1 − ε)/2. The activation events of the nodes at depth d are independent of each other. Given a basic gadget, if each of the two children is activated independently with probability p, then the parent node is activated with probability
p² · g(2) + 2p(1 − p) · g(1) + (1 − p)² · g(0) = p² + 2p(1 − p) · (1 − ε)/2 = p(1 − ε(1 − p)).
So we have P_a(v_i) ≤ P_a(v_{i+1})(1 − ε(1 − P_a(v_{i+1}))). Since P_a(v_d) = (1 − ε)/2 < 1/2, and P_a(v_i) ≤ P_a(v_{i+1}) from the above, we have P_a(v_i) < 1/2 for all i, and thus we can rewrite the recurrence as P_a(v_i) ≤ P_a(v_{i+1}) · (2 − ε)/2. Hence for the gadget with depth d, the probability that node t becomes activated is P_a(t) = P_a(v_1) ≤ ((1 − ε)/2) · ((2 − ε)/2)^{d−1} < ((2 − ε)/2)^d.
Lemma 2 shows that the gadget T_ε is indeed a probabilistic-AND gate with two input nodes: the probability that t is activated when only one of s_1 and s_2 is activated approaches 0 exponentially fast in the depth d. We say a gadget T_ε works well if the output node t stays inactive when only one of the input nodes is activated.
By a similar method we construct multi-input AND gates from 2-input AND gates. Finally, we show that if the influence maximization problem could be approximated beyond the ratio shown above, we could solve the set cover problem in polynomial time. The main idea is as follows. For any set cover instance, we take all elements as the inputs of our multi-input probabilistic-AND gate and connect the output to a large number of additional nodes. Thus, if k sets can cover all elements, all of the additional nodes will be activated; in contrast, if at least one element cannot be covered, almost all of the additional nodes will remain inactive.
4 Approximation Algorithms
In the previous section, we showed that influence maximization is hard to approximate when the number of ε-almost submodular nodes is sublinear but still non-constant. In this section, we discuss the situation where only a small number of nodes hold ε-almost submodular threshold functions. We first provide a greedy algorithm for a small number of non-submodular nodes that need not be ε-almost submodular; then we restrict to the case of ε-almost submodular nodes.
4.1 Approximation Algorithm with a Small Number of Non-submodular Nodes
In the case of ℓ (ℓ < k) non-submodular nodes, we provide an approximation algorithm as follows. We first add these non-submodular nodes into the seed set, and then generate the rest of the seed set by the classical greedy algorithm. The proof of Theorem 3 can be found in the supplementary material.
Theorem 3. Given a graph of n nodes where all nodes have submodular threshold functions except ℓ < k nodes, for influence maximization with k seeds the greedy scheme obtains a (1 − e^{−(k−ℓ)/k})-approximation ratio.
4.2 Approximation Algorithm for ε-ASIM
In this section, we consider the case when all non-submodular nodes have ε-almost submodular threshold functions, and provide an approximation algorithm that allows more than k ε-almost submodular nodes, with an approximation ratio close to 1 − 1/e when ε is small. The main idea is based on a mapping between probability spaces.
Given a graph containing nodes with ε-almost submodular threshold functions, we simply set each such node's threshold function to its submodular lower bound and then run a classical greedy algorithm A on this graph (Algorithm 1). Algorithm 1 takes the lower bounds of the ε-almost submodular threshold functions as input parameters. The following theorem analyzes the performance of Algorithm 1.
Algorithm 1 Galg-L algorithm for Influence Maximization
Input: G = (V, E), A, {f_v}, {f̲_v}, k
Output: Seed set S
1: set S = ∅
2: replace each node v's threshold function f_v with f̲_v
3: run algorithm A on G with {f̲_v} and obtain S
4: return S
Theorem 4. Given a graph G = (V, E) under the general threshold model, assume that ℓ nodes have ε-almost submodular threshold functions and the other nodes have submodular threshold functions. Then the greedy algorithm Galg-L has an approximation ratio of (1 − 1/e)(1 − ε)^ℓ.
Proof. Let V_ε be the set of nodes with ε-almost submodular threshold functions. Without loss of generality, we assume V_ε = {v_1, v_2, . . . , v_ℓ}. Now consider two general threshold models M̄ and M̲ with different threshold functions. Both models have the threshold functions {f_v} for v ∈ V − V_ε. For nodes v ∈ V_ε, M̄ and M̲ have {f̄_v} and {f̲_v}, respectively.
In any threshold model, after we sample each node's threshold θ_v, the diffusion process becomes deterministic. A graph with threshold functions {f_v} and sampled thresholds {θ_v} is called a possible world of the threshold model, similar to the live-edge graph in the independent cascade model. An instance of a threshold model's possible world can be written as {θ_{v_1}, θ_{v_2}, . . . , θ_{v_n}; f_{v_1}, f_{v_2}, . . . , f_{v_n}}. Here we build a one-to-one mapping from all of M̲'s possible worlds with θ_v ≤ 1 − ε (v ∈ V_ε) to all of M̄'s possible worlds:
{θ_{v_1}, . . . , θ_{v_n}; f_{v_1}, . . . , f_{v_n}} ↦ {θ_{v_1}/(1−ε), . . . , θ_{v_ℓ}/(1−ε), θ_{v_{ℓ+1}}, . . . , θ_{v_n}; f_{v_1}/(1−ε), . . . , f_{v_ℓ}/(1−ε), f_{v_{ℓ+1}}, . . . , f_{v_n}}.
The above correspondence gives a one-to-one mapping between M̲ and M̄. For any instance of M̲'s possible worlds with θ_v ≤ 1 − ε (v ∈ V_ε), we amplify the threshold of each node v ∈ V_ε to θ_v/(1−ε), and at the same time amplify the corresponding threshold function by a factor of 1/(1−ε). Obviously, this amplification does not affect the influence process in this possible world, because for each v ∈ V_ε both its threshold value and its threshold function are amplified by the same factor 1/(1−ε). Furthermore, the amplified possible world is an instance of M̄.
The expected influence can be computed as σ(S) = ∫_{θ∈[0,1]^n} D(θ; f, S) dθ, where D(θ; f, S) is the deterministic influence size of seed set S in the possible world {θ; f}. We refer to the expected influence size functions of M̄ and M̲ as σ̄ and σ̲. We write θ ∈ [0,1]^n for the vector of the n nodes' thresholds, and θ_e ∈ [0,1]^ℓ, θ′ ∈ [0,1]^{n−ℓ} for the threshold vectors of V_ε and V − V_ε; likewise, the threshold functions of V_ε and V − V_ε are written f_e and f′. A possible world is then symbolized as {θ_e, θ′; f_e, f′}. For any seed set S, we have

σ̲(S) = ∫_{θ∈[0,1]^n} D(θ; f̲, S) dθ
     ≥ ∫_{θ_e∈[0,1−ε]^ℓ} ∫_{θ′∈[0,1]^{n−ℓ}} D((θ_e, θ′); f̲, S) dθ′ dθ_e
     = (1 − ε)^ℓ ∫_{θ_e/(1−ε)∈[0,1]^ℓ} ∫_{θ′∈[0,1]^{n−ℓ}} D((θ_e/(1−ε), θ′); (f̲_e/(1−ε), f′), S) dθ′ d(θ_e/(1−ε))
     = (1 − ε)^ℓ ∫_{θ∈[0,1]^n} D(θ; (f̲_e/(1−ε), f′), S) dθ
     = (1 − ε)^ℓ σ̄(S).

The change-of-variables step uses our one-to-one mapping, in particular D((θ_e, θ′); f̲, S) = D((θ_e/(1−ε), θ′); (f̲_e/(1−ε), f′), S) for θ_e/(1−ε) ∈ [0,1]^ℓ, because both sides follow the same deterministic propagation process; the last equality uses f̲_e/(1−ε) = f̄_e. Hence, given a seed set S, the respective influence sizes in models M̲ and M̄ satisfy σ̲(S) ≥ (1 − ε)^ℓ σ̄(S).
Let σ be the expected influence size function of the original model, and let the optimal solutions for σ̄, σ, σ̲ be S̄*, S*, S̲*, respectively. Apparently, σ̄(S̄*) ≥ σ(S*), since f̄_v ≥ f_v for every node v. According to the previous analysis, we have σ̲(S̲*) ≥ σ̲(S̄*) ≥ (1 − ε)^ℓ σ̄(S̄*). Hence for the output S^A of the greedy algorithm optimizing σ̲, we have the approximation ratio
σ(S^A) ≥ σ̲(S^A) ≥ (1 − 1/e) σ̲(S̲*) ≥ (1 − 1/e)(1 − ε)^ℓ σ̄(S̄*) ≥ (1 − 1/e)(1 − ε)^ℓ σ(S*).
The theorem holds.
If we instead replace the threshold functions by their upper bounds and run the greedy algorithm, we obtain Galg-U. By a similar analysis, Galg-U also has an approximation ratio of (1 − 1/e)(1 − ε)^ℓ on graphs with ℓ ε-almost submodular nodes. The technique used to prove the approximation ratio is similar to the sandwich approximation in [28], but their approximation ratio relies on instance-dependent influence sizes, while we utilize a mapping between probability spaces to provide an instance-independent approximation ratio.
5 Experiments
In addition to the theoretical analysis, we are curious about the performance of the greedy algorithms Galg-U and Galg-L on real networks with non-submodular nodes. Our experiments run on a machine with two 2.4GHz Intel(R) Xeon(R) E5-2620 CPUs, 4 processors (24 cores), 128GB memory and 64-bit Ubuntu 14.04.1. All algorithms tested in this paper are written in C++ and compiled with g++ 4.8.4. Some algorithms are multi-threaded to reduce the running time.
5.1 Experiment setup
Datasets. We conduct experiments on three real networks. The first is NetHEPT, an academic collaboration network extracted from the "High Energy Physics - Theory" section of arXiv (http://www.arXiv.org) and used by many works [7, 14, 15, 19, 20]. NetHEPT is an undirected network with 15233 nodes and 31376 edges; each node represents an author, and each edge indicates that two authors collaborated on a paper. The second is Flixster, an American movie-rating social site. Each node represents a user, and a directed edge (u, v) means v rated the same movie shortly after u did. We select topic 3, with 29357 nodes and 174939 directed edges. The last is the DBLP dataset, a larger collaboration network mined from the computer science bibliography site DBLP, with 654628 nodes and 1990259 undirected edges [14]. We process its edges in the same way as in the NetHEPT dataset.
Propagation Models. We adopt the general threshold model in this paper. Our Galg-U and Galg-L are designed around submodular upper and lower bounds, respectively. Since directly applying the greedy scheme to graphs with submodular threshold functions is time-consuming, we take the submodular threshold functions, and the submodular upper bounds of the ε-AS functions, to be the linear function f_v(S) = |S|/d(v), where d(v) is the in-degree of v. This makes the corresponding model an instance of the linear threshold model, and thus the greedy algorithm can be accelerated with the Reverse Reachable Set (RRset) technique [17].
We construct two different ε-almost submodular threshold functions in this paper: (1) a power function (|S|/d(v))^α with α satisfying (1/d(v))^α = (1/d(v))(1 − ε); (2) f_v(S) = (|S|/d(v))(1 − ε) for |S| ≤ 2, and |S|/d(v) otherwise. The former ε-almost submodular function is supermodular; the supermodular phenomenon has been observed on Flickr [11]. The second ε-almost submodular function simply drops the original threshold function for the first few nodes, which is consistent with the phenomenon observed in LiveJournal [10]. We call them the ε-AS-1 and ε-AS-2 functions, respectively.
Algorithms. We test our approximation algorithm (Algorithm 1) and other baseline algorithms on the graphs with ε-almost submodular nodes.
• TIM-U, TIM-L: Tang et al. [17] proposed TIM+, a greedy algorithm accelerated with Reverse Reachable Sets (RRsets). The running time of TIM+ is O(k(m + n) log n) on graphs with n nodes and m edges. RRsets can be sampled from the live-edge graph of the IC model, and with some extension we can sample RRsets under the Triggering model [1]. The LT model also belongs to the Triggering model, but the general threshold model with non-submodular threshold functions does not fall into the Triggering-model category, so TIM+ cannot be applied directly to the original graphs with non-submodular nodes. In our experiments, we choose the ε-AS-1 and ε-AS-2 thresholds to ensure that TIM+ can run with their upper or lower bounds. We then run Algorithm 1 with TIM+ as the inner algorithm A. Algorithm Galg-L based on TIM+ is written TIM-L for short; using the upper bound instead yields TIM-U.
• Greedy: We can still apply the naive greedy scheme to graphs with ε-almost submodular nodes and obtain results without a theoretical guarantee. The naive greedy algorithm is time-consuming, with running time O(k(m + n)n).
• High-degree: High-degree outputs the seed set in decreasing order of out-degree.
• PageRank: PageRank is widely used to discover nodes with high influence. The insight behind PageRank is that important nodes point to important nodes. In this paper, the transition probability on edge e = (u, v) is 1/d(u). We set the restart probability to 0.15 and use the power method to compute the PageRank values. PageRank outputs the nodes with the top PageRank values.
• Random: Random simply selects seeds uniformly at random from the node set.
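The PageRank baseline above can be sketched as plain power iteration (`out_neighbors` maps each node to its successors; dangling-node handling is our own choice):

```python
def pagerank(out_neighbors, restart=0.15, iters=100):
    """Power-method PageRank with uniform 1/d(u) transition probabilities;
    dangling nodes spread their mass uniformly over all nodes."""
    nodes = list(out_neighbors)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: restart / n for v in nodes}
        for u in nodes:
            targets = out_neighbors[u] or nodes
            share = (1 - restart) * rank[u] / len(targets)
            for v in targets:
                nxt[v] += share
        rank = nxt
    return rank

# On a 3-cycle every node gets the same score, 1/3:
scores = pagerank({1: [2], 2: [3], 3: [1]})
```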
Experiment methods. The datasets provide the network structure, and we first assume each node holds the linear threshold function described above. In each experiment, we randomly sample some nodes with in-degree greater than 2 and assign those nodes our ε-almost submodular functions, ε-AS-1 or ε-AS-2. Since the naive greedy algorithm is quite time-consuming, we only run it on NetHEPT.
5.2 Experiment results
Results on NetHEPT. Our first set of experiments focuses on the NetHEPT dataset, with the aim of comparing TIM-U, TIM-L and Greedy. TIM-U and TIM-L have theoretical guarantees, but the approximation ratio is low when the graph contains a considerable number of ε-AS nodes. Figure 2 shows the influence size of each method, varying the seed count from 1 to 100. Figures 2a and 2b show results on the constructed graph with ε-AS-1 nodes. Observe that TIM-U and TIM-L slightly outperform Greedy in all cases. Compared with the results for 3000 ε-AS nodes, the influence of the output seeds drops noticeably in the graph with 10000 ε-AS nodes. But the margin by which TIM-U and TIM-L exceed PageRank
(a) 3000 ε-AS-1 nodes (b) 10000 ε-AS-1 nodes (c) 3000 ε-AS-2 nodes (d) 10000 ε-AS-2 nodes
Figure 2: Results of IM on NetHEPT with ε = 0.2
increases with a rising fraction of ε-AS nodes. In particular, although ε-AS-1 is supermodular, TIM-U and TIM-L beat Greedy even when many nodes have supermodular threshold functions.
We remark that TIM-U, TIM-L and Greedy outperform the other baseline algorithms significantly. When k = 100, TIM-U is 6.1% better than PageRank and 27.2% better than High-degree. With the ε-AS-2 function, Figures 2c and 2d report that TIM-U, TIM-L and Greedy still perform extremely well. The influence size obtained on graphs with the ε-AS-2 function is larger than on those with the ε-AS-1 function. This is what we expect: supermodular functions are the hardest to handle within the class of ε-almost submodular functions.
Another thing to notice is that TIM-U and TIM-L output seeds on NetHEPT within seconds, while the naive greedy algorithm takes weeks. With the RRset technique, TIM+ dramatically reduces the running time, and the ε-almost submodular functions selected here ensure that TIM+ can be invoked. Since TIM-U and TIM-L match the performance of Greedy while being scalable, we do not run Greedy on the following larger datasets.
Results on Flixster. Figure 3 shows the results of the experiments conducted on Flixster with
(a) 3000 ε-AS-1 nodes (b) 10000 ε-AS-1 nodes (c) 3000 ε-AS-2 nodes (d) 10000 ε-AS-2 nodes
Figure 3: Results of IM on Flixster with ε = 0.2
ε = 0.2. We further evaluate the algorithms on Flixster with ε = 0.4 (see Figure 4). Observe that TIM-U and TIM-L outperform the other heuristic algorithms in all cases. Compared with PageRank, improvements of 30%, 46.3%, 26%, and 29.7% are observed in the four experiments of Figure 3. TIM-U consistently performs close to TIM-L. The improvement is larger than that on NetHEPT. The extra improvement might be due to the more complex network structure: the average degree is 5.95 in Flixster, compared to 2.05 in NetHEPT. In a dense network, nodes may be activated through multiple influence chains, which makes influence propagate further from the seeds. The baseline algorithms only pay attention to the structure of the network, hence they are beaten by TIM-U and TIM-L, which focus on influence spread. The more ε-AS nodes in the network, the larger the improvement.
When we set ε to 0.4, Figure 4 shows that TIM-U is 37.6%, 74.2%, 28%, and 35.6% better than PageRank, respectively. Notice that the gap between the performance of TIM-U and PageRank increases as ε increases. On the Flixster dataset, we observe that TIM-U and TIM-L hold a greater advantage with larger numbers of ε-AS nodes and larger ε.
Results on DBLP. For the DBLP dataset, the results are shown in Figure 5. TIM-U and TIM-L are still the best-performing algorithms, but PageRank and High-degree also perform well, only about 2.6% behind TIM-U and TIM-L. The DBLP network has many nodes with large degree, corresponding to active scientists. Once such active authors are activated, the influence increases significantly. This may partly explain why TIM-U and TIM-L perform similarly to PageRank.
(a) 3000 ε-AS-1 nodes (b) 10000 ε-AS-1 nodes (c) 3000 ε-AS-2 nodes (d) 10000 ε-AS-2 nodes
Figure 4: Results of IM on Flixster with ε = 0.4
(a) 3000 ε-AS-1 nodes (b) 10000 ε-AS-1 nodes (c) 3000 ε-AS-2 nodes (d) 10000 ε-AS-2 nodes
Figure 5: Results of IM on DBLP with ε = 0.2
6 Conclusion and Future Work
In this paper, we study the influence maximization problem on propagation models with non-submodular threshold functions, in contrast to most existing studies, where the threshold functions and the influence spread function are both submodular. We investigate the problem by studying a special case, the ε-almost submodular threshold function. We first show that the influence maximization problem is still hard to approximate even when the number of ε-almost submodular nodes is sublinear. Next, we provide a greedy algorithm based on the submodular lower bounds of the threshold functions to handle graphs with a small number of ε-almost submodular nodes, and show its theoretical guarantee. We further conduct experiments on real networks and compare our algorithms with other baselines to evaluate them in practice. The experimental results show that our algorithms not only have good theoretical guarantees on graphs with a small number of ε-almost submodular nodes, but also perform well on graphs with a fairly large fraction of ε-almost submodular nodes.
Our study mainly focuses on handling ?-almost submodular threshold functions. One future direction
is to investigate models with arbitrary non-submodular threshold functions. Another issue is that the
greedy algorithms we propose are slow when the submodular upper bound or lower bound of threshold
function do not correspond to the Triggering model. It remains open whether we could utilize RRset
or other techniques to accelerate our algorithms under this circumstance. How to accelerate the naive
greedy process with arbitrary submodular threshold functions is another interesting direction.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China Grants 61433014, 61502449, and 61602440, and the 973 Program of China Grant No. 2016YFB1000201.
References
[1] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD, pages 137–146. ACM, 2003.
[2] Mani Subramani and Balaji Rajagopalan. Knowledge-sharing and influence in online social networks via viral marketing. Communications of the ACM, 46(12):300–307, 2003.
[3] Wei Chen, Fu Li, Tian Lin, and Aviad Rubinstein. Combining traditional marketing and viral marketing with amphibious influence maximization. In ACM Conference on Economics and Computation, 2015.
[4] Cigdem Aslay, Wei Lu, Francesco Bonchi, Amit Goyal, and Laks V. S. Lakshmanan. Viral marketing meets social advertising: ad allocation with minimum regret. In Proceedings of the VLDB Endowment, pages 814–825, 2015.
[5] Biao Wang, Ge Chen, Luoyi Fu, Li Song, Xinbing Wang, and Xue Liu. Drimux: Dynamic rumor influence minimization with user experience in social networks. In AAAI'16, pages 791–797, 2016.
[6] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne M. Vanbriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In ACM Knowledge Discovery and Data Mining, pages 420–429, 2007.
[7] Wei Chen, Yajun Wang, and Siyu Yang. Efficient influence maximization in social networks. In Proceedings of the 15th ACM SIGKDD. ACM, 2009.
[8] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of the approximations for maximizing submodular set functions. Mathematical Programming, 14:265–294, 1978.
[9] Elchanan Mossel and Sebastien Roch. On the submodularity of influence in social networks. In STOC'07, pages 128–134, 2007.
[10] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: membership, growth, and evolution. In KDD'06, pages 44–54. ACM, 2006.
[11] Yang Yang, Jia Jia, Boya Wu, and Jie Tang. Social role-aware emotion contagion in image social networks. In AAAI, pages 65–71, 2016.
[12] Ning Chen. On the approximability of influence in social networks. In SODA'08, 2008.
[13] Shishir Bharathi, David Kempe, and Mahyar Salek. Competitive influence maximization in social networks. In International Workshop on Web and Internet Economics, pages 306–311. Springer, 2007.
[14] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In KDD'10, 2010.
[15] Amit Goyal, Wei Lu, and Laks V. S. Lakshmanan. SIMPATH: An efficient algorithm for influence maximization under the linear threshold model. In ICDM'11, pages 211–220, 2011.
[16] Christian Borgs, Michael Brautbar, Jennifer Chayes, and Brendan Lucier. Maximizing social influence in nearly optimal time. In SODA'14, pages 946–957. ACM-SIAM, 2014.
[17] Youze Tang, Xiaokui Xiao, and Yanchen Shi. Influence maximization: near-optimal time complexity meets practical efficiency. In SIGMOD'14, 2014.
[18] Youze Tang, Yanchen Shi, and Xiaokui Xiao. Influence maximization in near-linear time: A martingale approach. In SIGMOD'15, pages 1539–1554. ACM, 2015.
[19] H. T. Nguyen, M. T. Thai, and T. N. Dinh. Stop-and-stare: Optimal sampling algorithms for viral marketing in billion-scale networks. In SIGMOD'16, pages 695–710. ACM, 2016.
[20] Amit Goyal, Francesco Bonchi, Laks V. S. Lakshmanan, and Suresh Venkatasubramanian. On minimizing budget and time in influence propagation over social networks. Social Network Analysis and Mining, pages 1–14, 2012.
[21] Peng Zhang, Wei Chen, Xiaoming Sun, Yajun Wang, and Jialin Zhang. Minimizing seed set selection with probabilistic coverage guarantee in a social network. In KDD'14, pages 1306–1315, 2014.
[22] Golnaz Ghasemiesfeh, Roozbeh Ebrahimi, and Jie Gao. Complex contagion and the weakness of long ties in social networks: revisited. In ACM Conference on Electronic Commerce, 2013.
[23] Roozbeh Ebrahimi, Jie Gao, Golnaz Ghasemiesfeh, and Grant Schoenebeck. Complex contagions in Kleinberg's small world model. In ITCS'15, 2015.
[24] Wei Chen, Qiang Li, Xiaoming Sun, and Jialin Zhang. The routing of complex contagion in Kleinberg's small-world networks. In International Computing and Combinatorics Conference, pages 307–318, 2016.
[25] Jie Gao, Golnaz Ghasemiesfeh, Grant Schoenebeck, and Fang-Yi Yu. General threshold model for social cascades: Analysis and simulations. In ACM Conference on Economics and Computation, 2016.
[26] Ding-Zhu Du, Ronald L. Graham, Panos M. Pardalos, Peng-Jun Wan, Weili Wu, and Wenbo Zhao. Analysis of greedy approximations with nonsubmodular potential functions. In SODA'08, pages 167–175, 2008.
[27] Thibaut Horel and Yaron Singer. Maximization of approximately submodular functions. In NIPS'16, pages 3045–3053, 2016.
[28] Wei Lu, Wei Chen, and Laks V. S. Lakshmanan. From competition to complementarity: comparative influence diffusion and maximization. Proceedings of the VLDB Endowment, 9(2):60–71, 2015.
InfoGAIL: Interpretable Imitation Learning from
Visual Demonstrations
Yunzhu Li
MIT
[email protected]
Jiaming Song
Stanford University
[email protected]
Stefano Ermon
Stanford University
[email protected]
Abstract
The goal of imitation learning is to mimic expert behavior without access to an
explicit reward signal. Expert demonstrations provided by humans, however, often
show significant variability due to latent factors that are typically not explicitly
modeled. In this paper, we propose a new algorithm that can infer the latent
structure of expert demonstrations in an unsupervised way. Our method, built on
top of Generative Adversarial Imitation Learning, can not only imitate complex
behaviors, but also learn interpretable and meaningful representations of complex
behavioral data, including visual demonstrations. In the driving domain, we
show that a model learned from human demonstrations is able to both accurately
reproduce a variety of behaviors and accurately anticipate human actions using raw
visual inputs. Compared with various baselines, our method can better capture the
latent structure underlying expert demonstrations, often recovering semantically
meaningful factors of variation in the data.
1 Introduction
A key limitation of reinforcement learning (RL) is that it involves the optimization of a predefined
reward function or reinforcement signal [1–6]. Explicitly defining a reward function is straightforward
in some cases, e.g., in games such as Go or chess. However, designing an appropriate reward function
can be difficult in more complex and less well-specified environments, e.g., for autonomous driving
where there is a need to balance safety, comfort, and efficiency.
Imitation learning methods have the potential to close this gap by learning how to perform tasks
directly from expert demonstrations, and has succeeded in a wide range of problems [7?11]. Among
them, Generative Adversarial Imitation Learning (GAIL, [12]) is a model-free imitation learning
method that is highly effective and scales to relatively high dimensional environments. The training
process of GAIL can be thought of as building a generative model, which is a stochastic policy
that when coupled with a fixed simulation environment, produces similar behaviors to the expert
demonstrations. Similarity is achieved by jointly training a discriminator to distinguish expert
trajectories from ones produced by the learned policy, as in GANs [13].
In imitation learning, example demonstrations are typically provided by human experts. These
demonstrations can show significant variability. For example, they might be collected from multiple
experts, each employing a different policy. External latent factors of variation that are not explicitly
captured by the simulation environment can also significantly affect the observed behavior. For
example, expert demonstrations might be collected from users with different skills and habits. The
goal of this paper is to develop an imitation learning framework that is able to automatically discover
and disentangle the latent factors of variation underlying expert demonstrations. Analogous to the goal
of uncovering style, shape, and color in generative modeling of images [14], we aim to automatically
learn similar interpretable concepts from human demonstrations through an unsupervised manner.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We propose a new method for learning a latent variable generative model that can produce trajectories
in a dynamic environment, i.e., sequences of state-actions pairs in a Markov Decision Process. Not
only can the model accurately reproduce expert behavior, but also empirically learns a latent space
of the observations that is semantically meaningful. Our approach is an extension of GAIL, where
the objective is augmented with a mutual information term between the latent variables and the
observed state-action pairs. We first illustrate the core concepts in a synthetic 2D example and
then demonstrate an application in autonomous driving, where we learn to imitate complex driving
behaviors while recovering semantically meaningful structure, without any supervision beyond the
expert trajectories.¹ Remarkably, our method performs directly on raw visual inputs, using raw
pixels as the only source of perceptual information. The code for reproducing the experiments are
available at https://github.com/ermongroup/InfoGAIL.
In particular, the contributions of this paper are threefold:
1. We extend GAIL with a component which approximately maximizes the mutual information
between latent space and trajectories, similar to InfoGAN [14], resulting in a policy where
low-level actions can be controlled through more abstract, high-level latent variables.
2. We extend GAIL to use raw pixels as input and produce human-like behaviors in complex
high-dimensional dynamic environments.
3. We demonstrate an application to autonomous highway driving using the TORCS driving
simulator [15]. We first demonstrate that the learned policy is able to correctly navigate the
track without collisions. Then, we show that our model learns to reproduce different kinds
of human-like driving behaviors by exploring the latent variable space.
2 Background
2.1 Preliminaries
We use the tuple $(S, A, P, r, \rho_0, \gamma)$ to define an infinite-horizon, discounted Markov decision process (MDP), where $S$ represents the state space, $A$ represents the action space, $P: S \times A \times S \to \mathbb{R}$ denotes the transition probability distribution, $r: S \to \mathbb{R}$ denotes the reward function, $\rho_0: S \to \mathbb{R}$ is the distribution of the initial state $s_0$, and $\gamma \in (0, 1)$ is the discount factor. Let $\pi$ denote a stochastic policy $\pi: S \times A \to [0, 1]$, and $\pi_E$ denote the expert policy to which we only have access to demonstrations. The expert demonstrations $\tau_E$ are a set of trajectories generated using policy $\pi_E$, each of which consists of a sequence of state-action pairs. We use an expectation with respect to a policy $\pi$ to denote an expectation with respect to the trajectories it generates: $\mathbb{E}_\pi[f(s,a)] \triangleq \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t f(s_t, a_t)]$, where $s_0 \sim \rho_0$, $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|a_t, s_t)$.
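As a concrete illustration of the discounted expectation above, the sketch below estimates $\mathbb{E}_\pi[f(s,a)]$ by Monte-Carlo rollouts truncated at a finite horizon (the helper names are hypothetical, not from the paper):

```python
def discounted_estimate(rollout_fn, f, gamma=0.99, n_rollouts=100, horizon=200):
    """Monte-Carlo estimate of E_pi[f(s,a)] = E[sum_t gamma^t f(s_t, a_t)].

    rollout_fn() returns a finite list of (state, action) pairs sampled by
    running the policy in the environment (a stand-in for an infinite rollout).
    """
    total = 0.0
    for _ in range(n_rollouts):
        g, discount = 0.0, 1.0
        for (s, a) in rollout_fn()[:horizon]:
            g += discount * f(s, a)
            discount *= gamma
        total += g
    return total / n_rollouts

# Toy check: a deterministic "rollout" where f(s,a) = 1 at every step recovers
# the truncated geometric series sum_{t<H} gamma^t.
est = discounted_estimate(lambda: [(0, 0)] * 10, lambda s, a: 1.0,
                          gamma=0.5, horizon=10)
```

With a deterministic rollout and constant $f$, the estimate equals the geometric series $\sum_{t=0}^{9} 0.5^t$, which makes the sketch easy to sanity-check.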
2.2 Imitation learning
The goal of imitation learning is to learn how to perform a task directly from expert demonstrations,
without any access to the reinforcement signal r. Typically, there are two approaches to imitation
learning: 1) behavior cloning (BC), which learns a policy through supervised learning over the state-action pairs from the expert trajectories [16]; and 2) apprenticeship learning (AL), which assumes the
expert policy is optimal under some unknown reward and learns a policy by recovering the reward
and solving the corresponding planning problem. BC tends to have poor generalization properties
due to compounding errors and covariate shift [17, 18]. AL, on the other hand, has the advantage of
learning a reward function that can be used to score trajectories [19–21], but is typically expensive to
run because it requires solving a reinforcement learning (RL) problem inside a learning loop.
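To make the behavior cloning idea concrete, the following toy sketch fits a one-dimensional linear policy to expert state-action pairs by least squares; it is purely illustrative (the paper's BC baseline uses neural networks, and all names here are hypothetical):

```python
def behavior_cloning_1d(pairs, lr=0.1, epochs=1000):
    """Minimal behavior cloning: fit a linear policy a = w*s + b by
    gradient descent on the mean squared error over expert (state, action)
    pairs. This is supervised learning on demonstrations, with no
    environment interaction at all."""
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(epochs):
        gw = sum(2 * (w * s + b - a) * s for s, a in pairs) / n
        gb = sum(2 * (w * s + b - a) for s, a in pairs) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# An "expert" acting as a = 2*s + 1; BC recovers roughly those coefficients
# on the training states, but nothing constrains it off-distribution, which
# is the compounding-error weakness noted above.
w, b = behavior_cloning_1d([(s / 10, 2 * (s / 10) + 1) for s in range(10)])
```

The recovered coefficients match the expert on the demonstrated states, which is exactly what BC optimizes; the covariate-shift problem arises only once the learned policy visits states the expert never did.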
2.3 Generative Adversarial Imitation Learning
Recent work on AL has adopted a different approach by learning a policy without directly estimating the corresponding reward function. In particular, Generative Adversarial Imitation Learning
(GAIL, [12]) is a recent AL method inspired by Generative Adversarial Networks (GAN, [13]). In the
GAIL framework, the agent imitates the behavior of an expert policy $\pi_E$ by matching the generated state-action distribution with the expert's distribution, where the optimum is achieved when the
¹ A video showing the experimental results is available at https://youtu.be/YtNPBAW6h5k.
distance between these two distributions is minimized as measured by Jensen-Shannon divergence.
The formal GAIL objective is denoted as
$$\min_\pi \max_{D \in (0,1)^{S \times A}} \mathbb{E}_\pi[\log D(s,a)] + \mathbb{E}_{\pi_E}[\log(1 - D(s,a))] - \lambda H(\pi) \quad (1)$$
where $\pi$ is the policy that we wish to imitate $\pi_E$ with, $D$ is a discriminative classifier which tries to distinguish state-action pairs from the trajectories generated by $\pi$ and $\pi_E$, and $H(\pi) \triangleq \mathbb{E}_\pi[-\log \pi(a|s)]$ is the $\gamma$-discounted causal entropy of the policy $\pi$ [22]. Instead of directly learning a reward function, GAIL relies on the discriminator to guide $\pi$ into imitating the expert policy.
GAIL is model-free: it requires interaction with the environment to generate rollouts, but it does
not need to construct a model for the environment. Unlike GANs, GAIL considers the environment/simulator as a black box, and thus the objective is not differentiable end-to-end. Hence,
optimization of GAIL objective requires RL techniques based on Monte-Carlo estimation of policy
gradients. Optimization over the GAIL objective is performed by alternating between a gradient step
to increase (1) with respect to the discriminator parameters, and a Trust Region Policy Optimization
(TRPO, [2]) step to decrease (1) with respect to $\pi$.
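The discriminator's side of the alternation can be sketched in a few lines: given discriminator outputs on policy and expert samples, the quantity being ascended is the sampled version of the two expectation terms in Eq. (1). The helper below is an illustration under that reading, not the paper's implementation:

```python
import math

def gail_discriminator_objective(d_policy, d_expert):
    """Sampled discriminator objective from Eq. (1):
    E_pi[log D(s,a)] + E_piE[log(1 - D(s,a))],
    where d_policy / d_expert are discriminator outputs D(s,a) in (0,1)
    on minibatches of policy-generated and expert state-action pairs."""
    t1 = sum(math.log(d) for d in d_policy) / len(d_policy)
    t2 = sum(math.log(1.0 - d) for d in d_expert) / len(d_expert)
    return t1 + t2

# A discriminator that confidently labels policy samples (outputs near 1)
# and expert samples (outputs near 0) drives this objective toward its
# maximum of 0.
obj = gail_discriminator_objective([0.99, 0.98], [0.01, 0.02])
```

In the full algorithm this objective is ascended in the discriminator parameters while the policy takes TRPO steps to descend it, which is the minimax structure of (1).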
3 Interpretable Imitation Learning through Visual Inputs
Demonstrations are typically collected from human experts. The resulting trajectories can show
significant variability among different individuals due to internal latent factors of variation, such as
levels of expertise and preferences for different strategies. Even the same individual might make
different decisions while encountering the same situation, potentially resulting in demonstrations
generated from multiple near-optimal but distinct policies. In this section, we propose an approach
that can 1) discover and disentangle salient latent factors of variation underlying expert demonstrations
without supervision, 2) learn policies that produce trajectories which correspond to these latent factors,
and 3) use visual inputs as the only external perceptual information.
Formally, we assume that the expert policy is a mixture of experts $\pi_E = \{\pi_E^0, \pi_E^1, \ldots\}$, and we define the generative process of the expert trajectory $\tau_E$ as: $s_0 \sim \rho_0$, $c \sim p(c)$, $\pi \sim p(\pi|c)$, $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|a_t, s_t)$, where $c$ is a discrete latent variable that selects a specific policy $\pi$ from the mixture of expert policies through $p(\pi|c)$ (which is unknown and needs to be learned), and $p(c)$ is the prior distribution of $c$ (which is assumed to be known before training). Similar to the GAIL setting, we consider the apprenticeship learning problem as the dual of an occupancy measure matching problem, and treat the trajectory $\tau_E$ as a set of state-action pairs. Instead of learning a policy solely based on the current state, we extend it to include an explicit dependence on the latent variable $c$. The objective is to recover a policy $\pi(a|s,c)$ as an approximation of $\pi_E$; when $c$ is sampled from the prior $p(c)$, the trajectories $\tau$ generated by the conditional policy $\pi(a|s,c)$ should be similar to the expert trajectories $\tau_E$, as measured by a discriminative classifier.
3.1 Interpretable Imitation Learning
Learning from demonstrations generated by a mixture of experts is challenging as we have no access
to the policies employed by the individual experts. We have to proceed in an unsupervised way,
similar to clustering. The original Generative Adversarial Imitation Learning method would fail as it
assumes all the demonstrations come from a single expert, and there is no incentive in separating
and disentangling variations observed in the data. A method that can automatically disentangle the
demonstrations in a meaningful way is thus needed.
The way we address this problem is to introduce a latent variable $c$ into our policy function, $\pi(a|s,c)$. Without further constraints over $c$, applying GAIL directly to this $\pi(a|s,c)$ could simply ignore $c$ and fail to separate different types of behaviors present in the expert trajectories². To incentivize the model to use $c$ as much as possible, we utilize an information-theoretic regularization enforcing that there should be high mutual information between $c$ and the state-action pairs in the generated trajectory. This concept was introduced by InfoGAN [14], where latent codes are utilized to discover the salient semantic features of the data distribution and guide the generating process. In particular, the regularization seeks to maximize the mutual information between latent codes and trajectories,
² For a fair comparison, we consider this form as our GAIL baseline in the experiments below.
denoted as $I(c; \tau)$, which is hard to maximize directly as it requires access to the posterior $P(c|\tau)$. Hence we introduce a variational lower bound, $L_I(\pi, Q)$, of the mutual information $I(c; \tau)$³:
$$L_I(\pi, Q) = \mathbb{E}_{c \sim p(c),\, a \sim \pi(\cdot|s,c)}[\log Q(c|\tau)] + H(c) \le I(c; \tau) \quad (2)$$
where $Q(c|\tau)$ is an approximation of the true posterior $P(c|\tau)$. The objective under this regularization, which we call Information Maximizing Generative Adversarial Imitation Learning (InfoGAIL), then becomes:
$$\min_{\pi, Q} \max_{D} \mathbb{E}_\pi[\log D(s,a)] + \mathbb{E}_{\pi_E}[\log(1 - D(s,a))] - \lambda_1 L_I(\pi, Q) - \lambda_2 H(\pi) \quad (3)$$
where $\lambda_1 > 0$ is the hyperparameter for the information maximization regularization term, and $\lambda_2 > 0$ is the hyperparameter for the causal entropy term. By introducing the latent code, InfoGAIL is able
to identify the salient factors in the expert trajectories through mutual information maximization,
and imitate the corresponding expert policy through generative adversarial training. This allows us
to disentangle trajectories that may arise from a mixture of experts, such as different individuals
performing the same task.
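The variational bound in Eq. (2) is cheap to estimate from samples once $Q$ is available: average $\log Q(c|\tau)$ over (code, trajectory) pairs and add the prior entropy $H(c)$. The sketch below is an illustrative Monte-Carlo estimator under that reading (helper names are not from the paper):

```python
import math

def mutual_info_lower_bound(q_probs, prior):
    """Monte-Carlo estimate of the variational bound L_I from Eq. (2):
    E[log Q(c|tau)] + H(c). q_probs[i] is the approximate posterior
    Q(c_i | tau_i), evaluated at the latent code c_i that actually
    generated trajectory tau_i; prior is the discrete distribution p(c)."""
    e_log_q = sum(math.log(q) for q in q_probs) / len(q_probs)
    h_c = -sum(p * math.log(p) for p in prior)
    return e_log_q + h_c

# If Q recovers the true code with probability near 1, the bound approaches
# its maximum, the prior entropy H(c) = log(3) for a uniform prior over
# three codes.
li = mutual_info_lower_bound([0.97, 0.99, 0.98], [1 / 3, 1 / 3, 1 / 3])
```

When $Q$ is uninformative ($Q(c|\tau) = p(c)$), the estimate collapses to zero, which is why maximizing this term pushes the policy to make the latent code recoverable from its trajectories.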
To optimize the objective, we use a simplified posterior approximation $Q(c|s,a)$, since directly working with entire trajectories $\tau$ would be too expensive, especially when the dimension of the observations is very high (such as images). We then parameterize policy $\pi$, discriminator $D$ and posterior approximation $Q$ with weights $\theta$, $\omega$ and $\psi$ respectively. We optimize $L_I(\pi_\theta, Q_\psi)$ with stochastic gradient methods, $\pi_\theta$ using TRPO [2], and $Q_\psi$ is updated using the Adam optimizer [23].
An outline for the optimization procedure is shown in Algorithm 1.
Algorithm 1 InfoGAIL
Input: Initial parameters of policy, discriminator and posterior approximation $\theta_0, \omega_0, \psi_0$; expert trajectories $\tau_E \sim \pi_E$ containing state-action pairs.
Output: Learned policy $\pi_\theta$
for i = 0, 1, 2, ... do
    Sample a batch of latent codes: $c_i \sim p(c)$
    Sample trajectories: $\tau_i \sim \pi_{\theta_i}(c_i)$, with the latent code fixed during each rollout.
    Sample state-action pairs $\chi_i \sim \tau_i$ and $\chi_E \sim \tau_E$ with same batch size.
    Update $\omega_i$ to $\omega_{i+1}$ by ascending with gradients
        $\Delta_{\omega_i} = \hat{\mathbb{E}}_{\chi_i}[\nabla_{\omega_i} \log D_{\omega_i}(s,a)] + \hat{\mathbb{E}}_{\chi_E}[\nabla_{\omega_i} \log(1 - D_{\omega_i}(s,a))]$
    Update $\psi_i$ to $\psi_{i+1}$ by descending with gradients
        $\Delta_{\psi_i} = -\lambda_1 \hat{\mathbb{E}}_{\chi_i}[\nabla_{\psi_i} \log Q_{\psi_i}(c|s,a)]$
    Take a policy step from $\theta_i$ to $\theta_{i+1}$, using the TRPO update rule with the following objective:
        $\hat{\mathbb{E}}_{\tau_i}[\log D_{\omega_{i+1}}(s,a)] - \lambda_1 L_I(\pi_{\theta_i}, Q_{\psi_{i+1}}) - \lambda_2 H(\pi_{\theta_i})$
end for
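The control flow of one Algorithm 1 iteration can be sketched with the three sub-updates abstracted as callbacks; everything here is hypothetical scaffolding (real updates use RMSprop/Adam/TRPO on neural networks), but it makes the ordering of the steps explicit:

```python
def infogail_step(sample_prior, rollout, update_discriminator,
                  update_posterior, trpo_step):
    """One iteration of Algorithm 1, with each sub-update passed in as a
    callback. Returns the data sampled in this iteration for inspection."""
    c = sample_prior()            # c_i ~ p(c)
    traj = rollout(c)             # tau_i ~ pi_theta(c_i), c fixed per rollout
    update_discriminator(traj)    # ascend the discriminator gradient
    update_posterior(traj, c)     # descend -lambda1 * grad log Q(c|s,a)
    trpo_step(traj, c)            # TRPO step on the InfoGAIL objective
    return c, traj

# Smoke test with stub callbacks that just record the call order.
calls = []
infogail_step(lambda: 0,
              lambda c: [("s", "a")],
              lambda t: calls.append("D"),
              lambda t, c: calls.append("Q"),
              lambda t, c: calls.append("pi"))
```

The important structural point the sketch preserves is that the discriminator and posterior are refreshed before the policy step, so the TRPO objective is evaluated with the newest $D$ and $Q$.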
3.2 Reward Augmentation
In complex and less well-specified environments, imitation learning methods have the potential to
perform better than reinforcement learning methods as they do not require manual specification of
an appropriate reward function. However, if the expert is performing sub-optimally, then any policy trained under the recovered rewards will also be suboptimal; in other words, the imitation learning agent's potential is bounded by the capabilities of the expert that produced the training data. In
many cases, while it is very difficult to fully specify a suitable reward function for a given task, it is
relatively straightforward to come up with constraints that we would like to enforce over the policy.
This motivates the introduction of reward augmentation [8], a general framework to incorporate prior
knowledge in imitation learning by providing additional incentives to the agent without interfering
³ [14] presents a proof for the lower bound.
with the imitation learning process. We achieve this by specifying a surrogate state-based reward $\eta(\pi_\theta) = \mathbb{E}_{s \sim \pi_\theta}[r(s)]$ that reflects our bias over the desired agent's behavior:
$$\min_{\theta, \psi} \max_{\omega} \mathbb{E}_{\pi_\theta}[\log D_\omega(s,a)] + \mathbb{E}_{\pi_E}[\log(1 - D_\omega(s,a))] - \lambda_0 \eta(\pi_\theta) - \lambda_1 L_I(\pi_\theta, Q_\psi) - \lambda_2 H(\pi_\theta) \quad (4)$$
where $\lambda_0 > 0$ is a hyperparameter. This approach can be seen as a hybrid between imitation and
reinforcement learning, where part of the reinforcement signal for the policy optimization is coming
from the surrogate reward and part from the discriminator, i.e., from mimicking the expert. For
example, in our autonomous driving experiment below we show that by providing the agent with a
penalty if it collides with other cars or drives off the road, we are able to significantly improve the
average rollout distance of the learned policy.
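A minimal sketch of the reward-augmentation idea: the policy's reinforcement signal combines a discriminator-derived imitation term with a hand-specified surrogate reward such as a collision penalty. The weights and combination below are illustrative, not the paper's settings:

```python
def augmented_signal(imitation_term, surrogate_rewards, lam0=0.1):
    """Hybrid signal from Section 3.2: part comes from the discriminator
    (imitation) and part from a surrogate state-based reward
    eta(pi) = E[r(s)], e.g. r(s) = -10 on collision and 0 otherwise.
    All numbers here are hypothetical."""
    eta = sum(surrogate_rewards) / len(surrogate_rewards)
    return imitation_term + lam0 * eta

# A rollout containing a collision receives a strictly lower signal than a
# clean rollout with the same imitation score.
clean = augmented_signal(1.0, [0.0, 0.0])
crash = augmented_signal(1.0, [0.0, -10.0])
```

Because the surrogate term is additive, it biases the policy toward the prior (e.g., staying on the road) without changing the imitation machinery at all.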
3.3 Improved Optimization
While GAIL is successful in tasks with low-dimensional inputs (in [12], the largest observation has
376 continuous variables), few have explored tasks where the input dimension is very high (such as
images, 110 × 200 × 3 pixels as in our driving experiments). In order to effectively learn a policy
that relies solely on high-dimensional input, we make the following improvements over the original
GAIL framework.
It is well known that the traditional GAN objective suffers from vanishing gradient and mode collapse
problems [24, 25]. We propose to use the Wasserstein GAN (WGAN [26]) technique to alleviate
these problems and augment our objective function as follows:
$$\min_{\theta, \psi} \max_{\omega} \mathbb{E}_{\pi_\theta}[D_\omega(s,a)] - \mathbb{E}_{\pi_E}[D_\omega(s,a)] - \lambda_0 \eta(\pi_\theta) - \lambda_1 L_I(\pi_\theta, Q_\psi) - \lambda_2 H(\pi_\theta) \quad (5)$$
We note that this modification is especially important in our setting, where we want to model complex
distributions over trajectories that can potentially have a large number of modes.
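Two pieces of the WGAN-style training above can be sketched directly: the critic term of Eq. (5) uses raw discriminator scores (no log), and the original WGAN recipe clamps critic weights to a small interval after each step, which is also the constraint that motivates keeping $D$ and $Q$ as separate networks. The helpers below are illustrative, not the paper's code:

```python
def wgan_critic_objective(d_policy, d_expert):
    """Critic term of Eq. (5): E_pi[D(s,a)] - E_piE[D(s,a)].
    Raw scores rather than log-probabilities, which avoids the vanishing
    gradients of the standard GAN loss."""
    return sum(d_policy) / len(d_policy) - sum(d_expert) / len(d_expert)

def clip_weights(weights, c=0.01):
    """WGAN weight clipping: after each critic update, clamp every
    parameter to [-c, c] (c = 0.01 is the value from the WGAN paper)."""
    return [max(-c, min(c, w)) for w in weights]

gap = wgan_critic_objective([1.0, 2.0], [0.5, 0.5])
clipped = clip_weights([0.5, -0.2, 0.005])
```

In a full implementation the critic ascends `wgan_critic_objective`, has its weights clipped, and only then is the policy updated against the refreshed critic.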
We also use several variance reduction techniques, including baselines [27] and replay buffers [28].
Besides the baseline, we have three models to update in the InfoGAIL framework, which are represented as neural networks: the discriminator network $D_\omega(s,a)$, the policy network $\pi_\theta(a|s,c)$, and the posterior estimation network $Q_\psi(c|s,a)$. We update $D_\omega$ using RMSprop (as suggested in the original WGAN paper), and update $Q_\psi$ and $\pi_\theta$ using Adam and TRPO respectively. We include
the detailed training procedure in Appendix C. To speed up training, we initialize our policy from
behavior cloning, as in [12].
Note that the discriminator network $D_\omega$ and the posterior approximation network $Q_\psi$ are treated as distinct networks, as opposed to the InfoGAN approach where they share the same network parameters until the final output layer. This is because the current WGAN training framework requires weight clipping and momentum-free optimization methods when training $D_\omega$. These changes would interfere with the training of an expressive $Q_\psi$ if $D_\omega$ and $Q_\psi$ shared the same network parameters.
4 Experiments
We demonstrate the performance of our method by applying it first to a synthetic 2D example and
then in a challenging driving domain where the agent is imitating driving behaviors from visual
inputs. By conducting experiments on these two environments, we show that our learned policy $\pi_\theta$
can 1) imitate expert behaviors using high-dimensional inputs with only a small number of expert
demonstrations, 2) cluster expert behaviors into different and semantically meaningful categories, and
3) reproduce different categories of behaviors by setting the high-level latent variables appropriately.
The driving experiments are conducted in the TORCS (The Open Racing Car Simulator [15])
environment. The demonstrations are collected by manually driving along the race track, and show
typical behaviors like staying within lanes, avoiding collisions and surpassing other cars. The policy
accepts raw visual inputs as the only external inputs for the state, and produces a three-dimensional
continuous action that consists of steering, acceleration, and braking. We assume that our policies
are Gaussian distributions with fixed standard deviations, thus H(π_θ) is constant.
(a) Expert    (b) Behavior cloning    (c) GAIL    (d) Ours
Figure 1: Learned trajectories in the synthetic 2D plane environment. Each color denotes one
specific latent code. Behavior cloning deviates from the expert demonstrations due to compounding
errors. GAIL does produce circular trajectories but fails to capture the latent structure for it assumes
that the demonstrations are generated from a single expert, and tries to learn an average policy. Our
method (InfoGAIL) successfully distinguishes expert behaviors and imitates each mode accordingly
(colors are ordered in accordance to the expert for visualization purposes, but are not identifiable).
4.1 Learning to Distinguish Trajectories
We demonstrate the effectiveness of InfoGAIL on a synthetic example. The environment is a 2D
plane where the agent can move around freely at a constant velocity by selecting its direction p_t at
(discrete) time t. For the agent, the observations at time t are positions from t−4 to t. The (unlabeled)
expert demonstrations contain three distinct modes, each generated with a stochastic expert policy
that produces a circle-like trajectory (see Figure 1, panel a). The objective is to distinguish these
three distinct modes and imitate the corresponding expert behavior. We consider three methods:
behavior cloning, GAIL and InfoGAIL (details included in Appendix A). In particular, for all the
experiments we assume the same architecture and that the latent code is a one-hot encoded vector
with 3 dimensions and a uniform prior; only InfoGAIL regularizes the latent code. Figure 1 shows
that the introduction of latent variables allows InfoGAIL to distinguish the three types of behavior and
imitate each behavior successfully; the other two methods, however, fail to distinguish distinct modes.
BC suffers from the compounding error problem and the learned policy tends to deviate from the
expert trajectories; GAIL does learn to generate circular trajectories but it fails to separate different
modes due to the lack of a mechanism that can explicitly account for the underlying structure.
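Sampling the one-hot latent code from its uniform prior, as used in these experiments, is straightforward; a small illustrative sketch:

```python
import random

def sample_one_hot_code(n_codes, rng):
    """Draw c ~ Uniform{0, ..., n_codes-1} and one-hot encode it."""
    idx = rng.randrange(n_codes)
    code = [0] * n_codes
    code[idx] = 1
    return code

rng = random.Random(0)
codes = [sample_one_hot_code(3, rng) for _ in range(5)]
# Every sampled code has exactly one active entry out of three.
```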
In the rest of Section 4, we show how InfoGAIL can infer the latent structure of human
decision-making in a driving domain. In particular, our agent only relies on visual inputs to sense the
environment.
4.2 Utilizing Raw Visual Inputs via Transfer Learning
The high-dimensional nature of visual inputs poses significant challenges to learning a policy.
Intuitively, the policy will have to simultaneously learn how to identify meaningful visual features,
and how to leverage them to achieve the desired behavior using only a small number of expert
demonstrations. Therefore, methods to mitigate the high sample complexity of the problem are
crucial to success in this domain.
In this paper, we take a transfer learning approach. Features extracted using a CNN pre-trained
on ImageNet contain high-level information about the input images, which can be adapted to new
vision tasks via transfer learning [29]. However, it is not yet clear whether these relatively high-level
features can be directly applied to tasks where perception and action are tightly interconnected; we
demonstrate that this is possible through our experiments. We perform transfer learning by exploiting
features from a pre-trained neural network that effectively converts raw images into relatively
high-level information [30]. In particular, we use a Deep Residual Network [31] pre-trained on the
ImageNet classification task [32] to obtain the visual features used as inputs for the policy network.
4.3 Network Structure
Our policy accepts certain auxiliary information as internal input to serve as a short-term memory.
This auxiliary information can be accessed along with the raw visual inputs. In our experiments, the
auxiliary information for the policy at time t consists of the following: 1) velocity at time t, which
is a three-dimensional vector; 2) actions at time t−1 and t−2, which are both three-dimensional
vectors; 3) damage of the car, which is a real value. The auxiliary input has 10 dimensions in total.
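Assembling this 10-dimensional auxiliary input amounts to a concatenation; the sketch below uses illustrative names and values:

```python
import numpy as np

def build_aux_input(velocity, action_prev1, action_prev2, damage):
    """Concatenate velocity (3), the two previous actions (3 each),
    and the scalar damage into the 10-dimensional auxiliary input."""
    return np.concatenate([velocity, action_prev1, action_prev2, [damage]])

aux = build_aux_input(velocity=np.zeros(3),
                      action_prev1=np.array([0.1, 0.5, 0.0]),  # steer, accel, brake
                      action_prev2=np.array([0.0, 0.4, 0.0]),
                      damage=0.0)
# aux.shape == (10,)
```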
Figure 2: Visualizing the training process of turn. Here we show the trajectories of InfoGAIL
at different stages of training. Blue and red indicate policies under different latent codes, which
correspond to "turning from inner lane" and "turning from outer lane" respectively. The rightmost
figure shows the trajectories under latent codes [1, 0] (red), [0, 1] (blue), and [0.5, 0.5] (purple), which
suggests that, to some extent, our method is able to generalize to cases previously unseen in the
training data.
For the policy network, input visual features are passed through two convolutional layers, and then
combined with the auxiliary information vector and (in the case of InfoGAIL) the latent code c. We
parameterize the baseline as a network with the same architecture except for the final layer, which is
just a scalar output that indicates the expected accumulated future rewards.
The discriminator D_ω accepts three elements as input: the input image, the auxiliary information,
and the current action. The output is a score for the WGAN training objective, which is supposed to
be lower for expert state-action pairs, and higher for generated ones. The posterior approximation
network Q_ψ adopts the same architecture as the discriminator, except that the output is a softmax
over the discrete latent variables or a factored Gaussian over continuous latent variables. We include
details of our architecture in Appendix B.
4.4 Interpretable Imitation Learning from Visual Demonstrations
In this experiment, we consider two subsets of human driving behaviors: turn, where the expert
takes a turn using either the inside lane or the outside lane; and pass, where the expert passes another
vehicle from either the left or the right. In both cases, the expert policy has two significant modes.
Our goal is to have InfoGAIL capture these two separate modes from expert demonstrations in an
unsupervised way.
We use a discrete latent code, which is a one-hot encoded vector with two possible states. For both
settings, there are 80 expert trajectories in total, with 100 frames in each trajectory; our prior for
the latent code is a uniform discrete distribution over the two states. The performance of a learned
policy is quantified with two metrics: the average distance is determined by the distance traveled by
the agent before a collision (and is bounded by the length of the simulation horizon), and accuracy
is defined as the classification accuracy of the expert state-action pairs according to the latent code
inferred with Q_ψ. We add a constant reward at every time step as reward augmentation, which is used
to encourage the car to "stay alive" as long as possible and can be regarded as another way of reducing
collision and off-lane driving (as these will lead to the termination of that episode).
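The "stay alive" reward augmentation described here can be viewed as adding a constant bonus to the surrogate reward at every step, so that episodes that end early in a collision accumulate less total reward; a sketch with a placeholder bonus value:

```python
def augment_rewards(surrogate_rewards, alive_bonus=0.1):
    """Add a constant per-step bonus, so episodes that terminate early
    (collision, off-lane driving) accumulate less total reward."""
    return [r + alive_bonus for r in surrogate_rewards]

short_episode = augment_rewards([0.0] * 10)  # ended early
long_episode = augment_rewards([0.0] * 50)   # stayed alive longer
# sum(long_episode) > sum(short_episode)
```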
The average distance and sampled trajectories at different stages of training are shown in Figures 2 and
3 for turn and pass respectively. During the initial stages of training, the model does not distinguish
the two modes and has a high chance of colliding and driving off-lane, due to the limitations of
behavior cloning (which we used to initialize the policy). As training progresses, trajectories provided
by the learned policy begin to diverge. Towards the end of training, the two types of trajectories are
clearly distinguishable, with only a few exceptions. In turn, [0, 1] corresponds to using the inside
lane, while [1, 0] corresponds to the outside lane. In pass, the two kinds of latent codes correspond
to passing from right and left respectively. Meanwhile, the average distance of the rollouts steadily
increases with more training.
Learning the two modes separately requires accurate inference of the latent code. To examine the
accuracy of posterior inference, we select state-action pairs from the expert trajectories (where
the state is represented as a concatenation of raw image and auxiliary variables) and obtain the
corresponding latent code through Q_ψ(c|s, a); see Table 1. Although we did not explicitly provide
any label, our model is able to correctly distinguish over 81% of the state-action pairs in pass (and
almost all the pairs in turn, confirming the clear separation between generated trajectories with
different latent codes in Figure 2).
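Because the latent codes are not identifiable (any relabeling of code values yields an equivalent clustering), an accuracy of this kind is computed after matching inferred codes to ground-truth modes. The sketch below, which maximizes accuracy over code-to-mode assignments, is one plausible way to do this and is an assumption rather than the paper's exact procedure:

```python
from itertools import permutations

def matched_accuracy(pred_codes, true_modes, n_codes):
    """Best classification accuracy over all assignments of codes to modes."""
    best = 0.0
    for perm in permutations(range(n_codes)):
        hits = sum(1 for p, t in zip(pred_codes, true_modes) if perm[p] == t)
        best = max(best, hits / len(true_modes))
    return best

# Predictions agree with the modes up to a relabeling (0 <-> 1 swapped).
pred = [1, 1, 0, 0, 1]
true = [0, 0, 1, 1, 0]
acc = matched_accuracy(pred, true, n_codes=2)  # -> 1.0
```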
Figure 3: Experimental results for pass. Left: Trajectories of InfoGAIL at different stages of
training (epoch 1 to 37). Blue and red indicate policies using different latent code values, which
correspond to passing from right or left. Middle: Traveled distance denotes the absolute distance
from the start position, averaged over 60 rollouts of the InfoGAIL policy trained at different epochs.
Right: Trajectories of pass produced by an agent trained on the original GAIL objective. Compared
to InfoGAIL, GAIL fails to distinguish between different modes.
Table 1: Classification accuracies for pass.

    Method             Accuracy
    Chance             50%
    K-means            55.4%
    PCA                61.7%
    InfoGAIL (Ours)    81.9%
    SVM                85.8%
    CNN                90.8%
Table 2: Average rollout distances.

    Method              Avg. rollout distance
    Behavior Cloning     701.83
    GAIL                 914.45
    InfoGAIL \ RB       1031.13
    InfoGAIL \ RA       1123.89
    InfoGAIL \ WGAN     1177.72
    InfoGAIL (Ours)     1226.68
    Human               1203.51
For comparison, we also visualize the trajectories of pass for the original GAIL objective in Figure 3,
where there is no mutual information regularization. GAIL learns the expert trajectories as a whole,
and cannot distinguish the two modes in the expert policy.
Interestingly, instead of learning two separate trajectories, GAIL tries to fit the left trajectory by
swinging the car suddenly to the left after it has surpassed the other car from the right. We believe
this reflects a limitation in the discriminators. Since D_ω(s, a) only requires state-action pairs as
input, the policy is only required to match most of the state-action pairs; matching each rollout in a
whole with expert trajectories is not necessary. InfoGAIL with discrete latent codes can alleviate this
problem by forcing the model to learn separate trajectories.
4.5 Ablation Experiments
We conduct a series of ablation experiments to demonstrate that our proposed improved optimization
techniques in Section 3.2 and 3.3 are indeed crucial for learning an effective policy. Our policy drives
a car on the race track along with other cars, whereas the human expert provides 20 trajectories with
500 frames each by trying to drive as fast as possible without collision. Reward augmentation is
performed by adding a reward that encourages the car to drive faster. The performance of the policy
is determined by the average distance. Here a longer average rollout distance indicates a better policy.
In our ablation experiments, we selectively remove some of the improved optimization methods
from Section 3.2 and 3.3 (we do not use any latent code in these experiments). InfoGAIL (Ours)
includes all the optimization techniques; GAIL excludes all the techniques; InfoGAIL\WGAN
replaces the WGAN objective with the GAN objective; InfoGAIL\RA removes reward augmentation; InfoGAIL\RB removes the replay buffer and only samples from the most recent rollouts;
Behavior Cloning is the behavior cloning method and Human is the expert policy. Table 2 shows
the average rollout distances of different policies. Our method is able to outperform the expert with
the help of reward augmentation; policies without reward augmentation or WGANs perform slightly
worse than the expert; removing the replay buffer causes the performance to deteriorate significantly
due to increased variance in gradient estimation.
5 Related work
There are two major paradigms for vision-based driving systems [33]. Mediated perception is a
two-step approach that first obtains scene information and then makes a driving decision [34–36];
behavior reflex, on the other hand, adopts a direct approach by mapping visual inputs to driving
actions [37, 16]. Many of the current autonomous driving methods rely on the two-step approach,
which requires hand-crafting features such as the detection of lane markings and cars [38, 33]. Our
approach, on the other hand, attempts to learn these features directly from vision to actions. While
mediated perception approaches are currently more prevalent, we believe that end-to-end learning
methods are more scalable and may lead to better performance in the long run.
[39] introduce an end-to-end imitation learning framework that learns to drive entirely from visual
information, and test their approach on real-world scenarios. However, their method uses behavior
cloning by performing supervised learning over the state-action pairs, which is well-known to
generalize poorly to more sophisticated tasks, such as changing lanes or passing vehicles. With the
use of GAIL, our method can learn to perform these sophisticated operations easily. [40] performs
end-to-end visual imitation learning in TORCS through DAgger [18], querying the reference policies
during training, which in many cases is difficult.
Most imitation learning methods for end-to-end driving rely heavily on LIDAR-like inputs to obtain
precise distance measurements [21, 41]. These inputs are not usually available to humans during
driving. In particular, [41] applies GAIL to the task of modeling human driving behavior on highways.
In contrast, our policy requires only raw visual information as external input, which in practice is all
the information humans need in order to drive.
[42] and [9] have also introduced a pre-trained deep neural network to achieve better performance
in imitation learning with relatively few demonstrations. Specifically, they introduce a pre-trained
model to learn dense, incremental reward functions that are suitable for performing downstream
reinforcement learning tasks, such as real-world robotic experiments. This is different from our
approach, in that transfer learning is performed over the critic instead of the policy. It would be
interesting to combine that reward with our approach through reward augmentation.
6 Conclusion
In this paper, we present a method to imitate complex behaviors while identifying salient latent factors
of variation in the demonstrations. Discovering these latent factors does not require direct supervision
beyond expert demonstrations, and the whole process can be trained directly with standard policy
optimization algorithms. We also introduce several techniques to successfully perform imitation
learning using visual inputs, including transfer learning and reward augmentation. Our experimental
results in the TORCS simulator show that our methods can automatically distinguish certain behaviors
in human driving, while learning a policy that can imitate and even outperform the human experts
using visual information as the sole external input. We hope that our work can further inspire
end-to-end learning approaches to autonomous driving under more realistic scenarios.
Acknowledgements
We thank Shengjia Zhao and Neal Jean for their assistance and advice. Toyota Research Institute
(TRI) provided funds to assist the authors with their research but this article solely reflects the
opinions and conclusions of its authors and not TRI or any other Toyota entity. This research was
also supported by Intel Corporation, FLI and NSF grants 1651565, 1522054, 1733686.
References
[1] S. Levine and V. Koltun, "Guided policy search," in ICML (3), pp. 1–9, 2013.
[2] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz, "Trust region policy optimization," in ICML, pp. 1889–1897, 2015.
[3] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[4] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel, "High-dimensional continuous control using generalized advantage estimation," arXiv preprint arXiv:1506.02438, 2015.
[5] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., "Mastering the game of go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, 2016.
[6] A. Tamar, S. Levine, P. Abbeel, Y. Wu, and G. Thomas, "Value iteration networks," in Advances in Neural Information Processing Systems, pp. 2146–2154, 2016.
[7] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey, "Maximum entropy inverse reinforcement learning," in AAAI, vol. 8, pp. 1433–1438, Chicago, IL, USA, 2008.
[8] P. Englert and M. Toussaint, "Inverse KKT: learning cost functions of manipulation tasks from demonstrations," in Proceedings of the International Symposium of Robotics Research, 2015.
[9] C. Finn, S. Levine, and P. Abbeel, "Guided cost learning: Deep inverse optimal control via policy optimization," in Proceedings of the 33rd International Conference on Machine Learning, vol. 48, 2016.
[10] B. Stadie, P. Abbeel, and I. Sutskever, "Third person imitation learning," in ICLR, 2017.
[11] S. Ermon, Y. Xue, R. Toth, B. N. Dilkina, R. Bernstein, T. Damoulas, P. Clark, S. DeGloria, A. Mude, C. Barrett, et al., "Learning large-scale dynamic discrete choice models of spatiotemporal preferences with application to migratory pastoralism in East Africa," in AAAI, pp. 644–650, 2015.
[12] J. Ho and S. Ermon, "Generative adversarial imitation learning," in Advances in Neural Information Processing Systems, pp. 4565–4573, 2016.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[14] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets," in Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
[15] B. Wymann, E. Espié, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner, "TORCS, the open racing car simulator," Software available at http://torcs.sourceforge.net, 2000.
[16] D. A. Pomerleau, "Efficient training of artificial neural networks for autonomous navigation," Neural Computation, vol. 3, no. 1, pp. 88–97, 1991.
[17] S. Ross and D. Bagnell, "Efficient reductions for imitation learning," in AISTATS, pp. 3–5, 2010.
[18] S. Ross, G. J. Gordon, and D. Bagnell, "A reduction of imitation learning and structured prediction to no-regret online learning," in AISTATS, p. 6, 2011.
[19] P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Proceedings of the Twenty-First International Conference on Machine Learning, p. 1, ACM, 2004.
[20] U. Syed, M. Bowling, and R. E. Schapire, "Apprenticeship learning using linear programming," in Proceedings of the 25th International Conference on Machine Learning, pp. 1032–1039, ACM, 2008.
[21] J. Ho, J. K. Gupta, and S. Ermon, "Model-free imitation learning with policy optimization," in Proceedings of the 33rd International Conference on Machine Learning, 2016.
[22] M. Bloem and N. Bambos, "Infinite time horizon maximum causal entropy inverse reinforcement learning," in Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, pp. 4911–4916, IEEE, 2014.
[23] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[24] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," in Advances in Neural Information Processing Systems, pp. 2234–2242, 2016.
[25] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang, "Generalization and equilibrium in generative adversarial nets (GANs)," arXiv preprint arXiv:1703.00573, 2017.
[26] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv preprint arXiv:1701.07875, 2017.
[27] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, vol. 8, no. 3-4, pp. 229–256, 1992.
[28] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[29] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?," in Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
[30] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: an astounding baseline for recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813, 2014.
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[33] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730, 2015.
[34] M. Aly, "Real time detection of lane markers in urban streets," in Intelligent Vehicles Symposium, 2008 IEEE, pp. 7–12, IEEE, 2008.
[35] P. Lenz, J. Ziegler, A. Geiger, and M. Roser, "Sparse scene flow segmentation for moving object detection in urban environments," in Intelligent Vehicles Symposium (IV), 2011 IEEE, pp. 926–932, IEEE, 2011.
[36] K. Kitani, B. Ziebart, J. Bagnell, and M. Hebert, "Activity forecasting," Computer Vision–ECCV 2012, pp. 201–214, 2012.
[37] D. A. Pomerleau, "ALVINN, an autonomous land vehicle in a neural network," Tech. Rep., Carnegie Mellon University, Computer Science Department, 1989.
[38] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[39] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., "End to end learning for self-driving cars," arXiv preprint arXiv:1604.07316, 2016.
[40] J. Zhang and K. Cho, "Query-efficient imitation learning for end-to-end autonomous driving," arXiv preprint arXiv:1605.06450, 2016.
[41] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer, "Imitating driver behavior with generative adversarial networks," arXiv preprint arXiv:1701.06699, 2017.
[42] P. Sermanet, K. Xu, and S. Levine, "Unsupervised perceptual rewards for imitation learning," arXiv preprint arXiv:1612.06699, 2016.
Variational Laws of
Visual Attention for Dynamic Scenes
Dario Zanca
DINFO, University of Florence
DIISM, University of Siena
[email protected]
Marco Gori
DIISM, University of Siena
[email protected]
Abstract
Computational models of visual attention are at the crossroad of disciplines like
cognitive science, computational neuroscience, and computer vision. This paper
proposes a model of attentional scanpath that is based on the principle that there
are foundational laws that drive the emergence of visual attention. We devise variational laws of the eye-movement that rely on a generalized view of the Least Action
Principle in physics. The potential energy captures details as well as peripheral
visual features, while the kinetic energy corresponds with the classic interpretation
in analytic mechanics. In addition, the Lagrangian contains a brightness invariance
term, which characterizes significantly the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action,
and we propose an algorithm to estimate the model parameters. Finally, we report
experimental results to validate the model in tasks of saliency detection.
1 Introduction
Eye movements in humans constitute an essential mechanism to disentangle the tremendous amount
of information that reaches the retina every second. This mechanism in adults is very sophisticated.
In fact, it involves both bottom-up processes, which depend on raw input features, and top-down
processes, which include task dependent strategies [2; 3; 4]. It turns out that visual attention is
interwound with high level cognitive processes, so as its deep understanding seems to be trapped
into a sort of eggs-chicken dilemma. Does visual scene interpretation drive visual attention or the
other way around? Which one "was born" first? Interestingly, this dilemma seems to disappear
in newborns: despite their lack of knowledge of the world, they exhibit mechanisms of attention to
extract relevant information from what they see [5]. Moreover, there are evidences that the very first
fixations are highly correlated among adult subjects who are presented with a new input [25]. This
shows that they still share a common mechanism that drive early fixations, while scanpaths diverge
later under top-down influences.
Many attempts have been made in the direction of modeling visual attention. Based on the feature
integration theory of attention [14], Koch and Ullman in [9] assume that human attention operates
in the early representation, which is basically a set of feature maps. They assume that these maps
are then combined in a central representation, namely the saliency map, which drives the attention
mechanisms. The first complete implementation of this scheme was proposed by Itti et al. in [10].
In that paper, feature maps for color, intensity and orientation are extracted at different scales.
Then center-surround differences and normalization are computed for each pixel. Finally, all this
information is combined linearly in a centralized saliency map. Several other models have been
proposed by the computer vision community, in particular to address the problem of refining saliency
maps estimation. They usually differ in the definition of saliency, while they postulate a centralized
control of the attention mechanism through the saliency map. For instance, it has been claimed that
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the attention is driven according to a principle of information maximization [16] or by an opportune
selection of surprising regions [17]. A detailed description of the state of the art is given in [8].
Machine learning approaches have been used to learn models of saliency. Judd et al. [1] collected
1003 images observed by 15 subjects and trained an SVM classifier with low-, middle-, and high-level
features. More recently, automatic feature extraction methods with convolutional neural networks
achieved top level performance on saliency estimation [26; 18].
Most of the referred papers share the idea that saliency is the product of a global computation. Some
authors also provide scanpaths of image exploration, but to simulate them over the image, they all use
the procedure defined by [9]. The winner-take-all algorithm is used to select the most salient location
for the first fixation. Then three rules are introduced to select the next location: inhibition-of-return,
similarity preference, and proximity preference. An attempt of introducing biological biases has been
made by [6] to achieve more realistic saccades and improve performance.
In this paper, we present a novel paradigm in which visual attention emerges from a few unifying
functional principles. In particular, we assume that attention is driven by the curiosity for regions with
many details, and by the need to achieve brightness invariance, which leads to fixation and motion
tracking. These principles are given a mathematical expression by a variational approach based on
a generalization of least action, whose stationary point leads to the corresponding Euler-Lagrange
differential equations of the focus of attention. The theory herein proposed offers an intriguing model
for capturing the mechanisms behind saccadic eye movements, as well as object tracking within the
same framework. In order to compare our results with the state of the art in the literature, we have
also computed the saliency map by counting the visits in each pixel over a given time window, both
on static and dynamic scenes. It is worth mentioning that while many papers rely on models that are
purposely designed to optimize the approximation of the saliency map, for the proposed approach
such a computation is obtained as a byproduct of a model of scanpath.
The paper is organized as follows. Section 2 provides a mathematical description of the model and
the Euler-Lagrange equations of motion that describe attention dynamics. The technical details,
including formal derivation of the motion equations, are postponed to the Appendix. In the Section 3
we describe the experimental setup and show performance of the model in a task of saliency detection
on two popular dataset of images [12; 11] and one dataset of videos [27]. Some conclusions and
critical analysis are finally drawn in Section 4.
2 The model
In this section, we propose a model of visual attention that takes place in the earliest stage of vision,
which we assume to be completely data driven. We begin discussing the driving principles.
2.1 Principles of visual attention
The brightness signal b(t, x) can be thought of as a real-valued function

b : R⁺ × R² → R    (1)

where t is the time and x = (x_1, x_2) denotes the position. The scanpath over the visual input is defined as

x : R⁺ → R²    (2)

The scanpath x(t) will also be referred to as trajectory or observation.
Three fundamental principles drive the model of attention. They lead to the introduction of the
corresponding terms of the Lagrangian of the action.
i) Boundedness of the trajectory
Trajectory x(t) is bounded within a defined area (retina). This is modeled by a harmonic
oscillator at the borders of the image which constrains the motion within the retina¹:

V(x) = k Σ_{i=1,2} [ (l_i − x_i)² · [x_i > l_i] + x_i² · [x_i < 0] ]    (3)

where k is the elastic constant, l_i is the i-th dimension of the rectangle which represents the retina².

¹ Here, we use Iverson's notation, according to which if p is a proposition then [p] = 1 if p is true and [p] = 0 otherwise.
ii) Curiosity driven principle
Visual attention is attracted by regions with many details, that is where the magnitude of
the gradient of the brightness is high. In addition to this local field, the role of peripheral
information is included by processing a blurred version p(t, x) of the brightness b(t, x). The
modulation of these two terms is given by
C(t, x) = b_x² cos²(ωt) + p_x² sin²(ωt),    (4)

where b_x and p_x denote the gradients w.r.t. x. Notice that the alternation of the local and
peripheral fields has a fundamental role in avoiding trapping into regions with too many
details.
iii) Brightness invariance
Trajectories that exhibit brightness invariance are motivated by the need to perform fixation.
Formally, we impose the constraint ḃ = b_t + b_x·ẋ = 0. This is in fact the classic constraint
that is widely used in computer vision for the estimation of the optical flow [20]. Its
soft-satisfaction can be expressed by the associated term

B(t, x, ẋ) = (b_t + b_x·ẋ)².    (5)
Notice that, in the case of static images, b_t = 0, and the term is fully satisfied for trajectories
x(t) whose velocity ẋ is perpendicular to the gradient, i.e., when the focus is on the borders
of the objects. This kind of behavior favors coherent fixation of objects. Interestingly, in
the case of static images, the model can conveniently be simplified by using the upper bound of
the brightness as follows:

B(t, x, ẋ) = ḃ²(t, x) = (b_t + b_x·ẋ)² ≤ 2b_t² + 2b_x²·ẋ² := B̃(t, x, ẋ)    (6)
This inequality comes from the parallelogram law of Hilbert spaces. As will be seen in the
rest of the paper, this approximation significantly simplifies the motion equations.
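The bound in (6) is just the scalar parallelogram inequality (a + c)² ≤ 2a² + 2c² applied pointwise; a quick numerical sanity check (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=1000)   # stand-in for b_t
c = rng.normal(size=1000)   # stand-in for b_x * x_dot

lhs = (a + c) ** 2                 # exact brightness-invariance term
rhs = 2 * a ** 2 + 2 * c ** 2      # upper bound B~ used for static images

assert np.all(lhs <= rhs + 1e-12)  # (a + c)^2 <= 2a^2 + 2c^2 always holds
```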
2.2 Least Action Principle
Visual attention scanpaths are modeled as the motion of a particle of mass m within a potential field.
This makes it possible to construct the generalized action

S = ∫₀ᵀ L(t, x, ẋ) dt    (7)

where L = K − U, with K the kinetic energy

K(ẋ) = ½ m·ẋ²    (8)

and U a generalized potential energy defined as

U(t, x, ẋ) = V(x) − ηC(t, x) + λB(t, x, ẋ).    (9)

Here, we assume that η, λ > 0. Notice, in passing, that while V and B get the usual sign of potentials,
C comes with the flipped sign. This is due to the fact that, whenever it is large, it generates an
attractive field. In addition, we notice that the brightness invariance term is not truly a potential,
since it depends on both the position and the velocity. However, its generalized interpretation as a
"potential" comes from considering that it generates a force field. In order to discover the trajectory
we look for a stationary point of the action in Eq. (7), which corresponds to the Euler-Lagrange
equations

d/dt ∂L/∂ẋ_i = ∂L/∂x_i,    (10)

² A straightforward extension can be given for circular retina.
where i = 1, 2 for the two motion coordinates. The right-hand term in (10) can be written as

∂L/∂x = ηC_x − V_x − λB_x.    (11)

Likewise we have

d/dt ∂L/∂ẋ = m·ẍ − λ·(d/dt)B_ẋ    (12)

so that the general motion equation turns out to be

m·ẍ − λ·(d/dt)B_ẋ + V_x − ηC_x + λB_x = 0.    (13)
These are the general equations of visual attention. In the Appendix we give the technical details of
the derivations. Throughout the paper, the proposed model is referred to as the EYe MOvement Laws
(EYMOL).
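As a concrete illustration of the fields entering the motion equation (13), the sketch below evaluates the boundary force −V_x and the curiosity gradient ηC_x on a static brightness map with NumPy. Function and parameter names are mine, and the brightness-invariance force and the cos/sin temporal modulation are omitted for brevity; this is not the authors' implementation.

```python
import numpy as np

def attention_forces(b, x, k=1.0, eta=1.0):
    """Evaluate, at retina position x = (row, col), the net attracting force
    eta*C_x - V_x for a static brightness map b with values in [0, 1].
    Toy sketch of the terms entering the motion equation (13)."""
    l1, l2 = b.shape
    bx1, bx2 = np.gradient(b)             # brightness gradient components
    detail = bx1 ** 2 + bx2 ** 2          # local detail mass, |b_x|^2
    dC1, dC2 = np.gradient(detail)        # curiosity gradient C_x (modulation dropped)
    i = int(np.clip(x[0], 0, l1 - 1))
    j = int(np.clip(x[1], 0, l2 - 1))
    # Gradient of the harmonic boundary potential (3), with Iverson brackets
    Vx = np.array([
        -2.0 * k * (l1 - x[0]) * (x[0] > l1) + 2.0 * k * x[0] * (x[0] < 0),
        -2.0 * k * (l2 - x[1]) * (x[1] > l2) + 2.0 * k * x[1] * (x[1] < 0),
    ])
    Cx = np.array([dC1[i, j], dC2[i, j]])
    return eta * Cx - Vx
```

Inside the retina the wall term vanishes and only the curiosity gradient pulls the particle; outside, the harmonic walls push it back.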
2.3 Parameters estimation with simulated annealing
Different choices of parameters lead to different behaviors of the system. In particular, weights
can emphasize the contribution of curiosity or brightness invariance terms. To better control the
system we use two different parameters for the curiosity term, namely η_b and η_p, to weight the b and p
contributions respectively. The best values for the three parameters (η_b, η_p, λ) are estimated using
the algorithm of simulated annealing (SA). This method allows one to perform iterative improvements,
starting from a known state i. At each step, the SA considers some neighbouring state j of the current
state, and probabilistically moves to the new state j or stays on the current state i. For our specific
problem, we limit our search to a parallelepiped-domain D of possible values, due to theoretical
bounds and numerical³ issues. The distance between states i and j is proportional to a temperature T,
which is initialized to 1 and decreases over time as T_k = γ·T_{k−1}, where k identifies the iteration
step, and 0 ≪ γ < 1. The iteration step is repeated until the system reaches a state that is good
enough for the application, which in our case is to maximize the NSS similarity between human
saliency maps and simulated saliency maps.
Only a batch of 100 images from CAT2000-TRAIN is used to perform the SA algorithm⁴. This
batch is created by randomly selecting 5 images from each of the 20 categories of the dataset. To
start the SA, parameters are initialized in the middle point of the 3-dimensional parameters domain
D. The process is repeated 5 times, on different sub-samples, to select 5 parameters configurations.
Finally, those configurations together with the average configuration are tested on the whole dataset,
to select the best one.
Algorithm 1 In the pseudo-code, P() is the acceptance probability and score() is computed as the average of
the NSS scores on the sample batch of 100 images.

1: procedure SIMULATEDANNEALING
2:     Select an initial state i ∈ D
3:     T ← 1
4:     do
5:         Generate random state j, neighbor of i
6:         if P(score(i), score(j)) ≥ Random(0, 1) then
7:             i ← j
8:         end if
9:         T ← γ·T
10:    while T ≥ 0.01
11: end procedure
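A compact Python rendering of Algorithm 1. The paper does not spell out the acceptance probability P(), so a standard Metropolis-style rule is assumed here, and score() is stubbed by a toy objective (in the paper it is the mean NSS over the 100-image batch):

```python
import math
import random

def simulated_annealing(score, initial, neighbor, gamma=0.95, t_min=0.01):
    """Generic rendering of Algorithm 1. `score` maps a state to a quality
    value (higher is better); `neighbor` proposes a state at a distance
    proportional to the temperature."""
    i, temp = initial, 1.0
    while True:
        j = neighbor(i, temp)
        # Metropolis-style acceptance: always accept improvements, accept
        # worsening moves with probability decaying in the score gap.
        # (Assumed form of P(); the paper leaves it unspecified.)
        delta = score(j) - score(i)
        p = 1.0 if delta >= 0 else math.exp(delta / temp)
        if p >= random.random():
            i = j
        temp *= gamma          # T <- gamma * T
        if temp < t_min:       # loop while T >= 0.01
            return i

# Toy usage: maximize -(x - 3)^2 over the reals
random.seed(0)
best = simulated_annealing(
    score=lambda x: -(x - 3.0) ** 2,
    initial=0.0,
    neighbor=lambda x, temp: x + random.uniform(-temp, temp),
)
```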
³ Too high values for η_b or η_p produce numerically unstable and unrealistic trajectories for the focus of attention.
⁴ Each step of the SA algorithm needs evaluation over all the selected images. Considering the whole dataset would be very expensive in terms of time.
                        |      MIT1003                      |   CAT2000-TRAIN
Model version           | AUC             | NSS             | AUC             | NSS
V1 (approx. br. inv.)   | 0.7996 (0.0002) | 1.2784 (0.0003) | 0.8393 (0.0001) | 1.8208 (0.0015)
V2 (exact br. inv.)     | 0.7990 (0.0003) | 1.2865 (0.0039) | 0.8376 (0.0013) | 1.8103 (0.0137)

Table 1: Results on MIT1003 [1] and CAT2000-TRAIN [11] of the two different versions of EYMOL. Between
brackets is indicated the standard error.
3 Experiments
To quantitatively evaluate how well our model predicts human fixations, we defined an experimental setup for saliency detection both in images and in video. We used images from MIT1003 [1],
MIT300 [12] and CAT2000 [11], and video from the SFU [27] eye-tracking database. Many of the design
choices were common to both experiments; when they differ, it is explicitly specified.
3.1 Input pre-processing
All input images are converted to gray-scale. Peripheral input p is implemented as a blurred versions
of the brightness b. This blurred version is obtained by convolving the original gray-scale image
with a Gaussian kernel. For the images only, an algorithm identifies the rectangular zone of the
input image in which the totality of information is contained in order to compute li in (14). Finally
both b and p are multiplied by a Gaussian blob centered in the middle of the frame in order to make
brightness gradients smaller as we move toward periphery and produce a center bias.
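A minimal sketch of this pipeline with NumPy/SciPy; the blur width and the center-blob width are placeholder values of mine, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(frame_rgb, blur_sigma=10.0, center_sigma_frac=0.33):
    """Build the brightness b and peripheral input p used by the model.
    frame_rgb: (H, W, 3) array with values in [0, 1]."""
    b = frame_rgb.mean(axis=2)                   # simple grayscale conversion
    p = gaussian_filter(b, sigma=blur_sigma)     # blurred peripheral version
    h, w = b.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    s = center_sigma_frac * min(h, w)
    # Gaussian blob centered in the frame: damps gradients toward periphery
    blob = np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s ** 2)))
    return b * blob, p * blob
```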
3.2 Saliency maps computation
Differently from many of the most popular methodologies in the state-of-the-art [10; 16; 1; 24; 18], the
saliency map is not itself the central component of our model but it can be naturally calculated from
the visual attention laws in (13). The output of the model is a trajectory determined by a system of
two second-order differential equations, provided with a set of initial conditions. Since numerical
integration of (13) does not raise big numerical difficulties, we used standard functions of the python
scientific library SciPy [21].
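Concretely, after reducing (13) to first order in (x, ẋ), the integration can be done with SciPy's solve_ivp. The right-hand side below keeps only a curiosity-gradient force on a toy map, so it is a schematic of the integration setup rather than the full EYMOL dynamics; all names and constants are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.ndimage import gaussian_filter

# Toy static brightness map and its curiosity field (illustrative data)
rng = np.random.default_rng(1)
b = gaussian_filter(rng.random((64, 64)), 3.0)
g1, g2 = np.gradient(b)
detail = g1 ** 2 + g2 ** 2          # curiosity potential C ~ |b_x|^2
dC1, dC2 = np.gradient(detail)

def rhs(t, state, m=1.0, eta=50.0):
    """First-order form of a simplified motion equation: m*x'' = eta*C_x.
    The brightness-invariance and boundary terms of (13) are omitted."""
    x1, x2, v1, v2 = state
    i = int(np.clip(x1, 0, 63))
    j = int(np.clip(x2, 0, 63))
    return [v1, v2, eta * dC1[i, j] / m, eta * dC2[i, j] / m]

sol = solve_ivp(rhs, (0.0, 1.0), [32.0, 32.0, 0.5, -0.5], max_step=0.01)
trajectory = sol.y[:2].T            # sampled scanpath positions
```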
Saliency map is then calculated by summing up the most visited locations during a sufficiently large
number of virtual observations. For images, we collected data by running the model 199 times, each
run was randomly initialized almost at the center of the image and with a small random velocity,
and integrated for a running time corresponding to 1 second of visual exploration. For videos, we
collected data by running the model 100 times, each run was initialized almost at the center of the
first frame of the clip and with a small random velocity.
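The visit-count accumulation described above can be sketched as follows (grid shape, clipping, and smoothing choices are illustrative, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_from_trajectories(trajectories, shape, blur_sigma=8.0):
    """Count visits of the simulated scanpaths in each pixel and blur.
    trajectories: iterable of (T, 2) arrays of (row, col) positions."""
    counts = np.zeros(shape)
    for traj in trajectories:
        r = np.clip(traj[:, 0].astype(int), 0, shape[0] - 1)
        c = np.clip(traj[:, 1].astype(int), 0, shape[1] - 1)
        np.add.at(counts, (r, c), 1)           # accumulate visit counts
    sal = gaussian_filter(counts, blur_sigma)  # post-hoc blur (grid-searched in the paper)
    return sal / sal.max() if sal.max() > 0 else sal
```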
Models that have some blur and center bias on the saliency map can improve their score with respect
to some metrics. A grid search over the blur radius and the center parameter σ has been used, in order to
maximize the AUC-Judd and NSS scores on the training data of CAT2000 in the case of images, and on
SFU in the case of videos.
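The NSS score maximized here (and in the parameter estimation of Section 2.3) is the standard one: z-score the saliency map and average it at the human fixation locations. A reference-style sketch:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the map, average at fixations.
    saliency: (H, W) array; fixations: (N, 2) integer (row, col) points."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    return float(s[fixations[:, 0], fixations[:, 1]].mean())
```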
3.3 Saliency detection on images
Two versions of the model have been evaluated: the first version V1 implements the brightness
invariance in the approximated form (6), while the second version V2 implements the brightness invariance in its exact form, as described in the Appendix. Models V1 and V2 have been compared on the
MIT1003 and CAT2000-TRAIN datasets, since they provide public data about fixations. Parameter
estimation has been conducted independently for the two models and the best configuration for each
one is used in this comparison. Results are statistically equivalent (see Table 1) and this proves that,
in the case of static images, the approximation is very good and does not cause loss in the score.
For further experiments we decided to use the approximated form V1 due to its simpler form of the
equation that also reduces time of computation.
Model V1 has been evaluated on two different datasets of eye-tracking data: MIT300 and CAT2000-TEST. In this case, scores were officially provided by the MIT Saliency Benchmark Team [15]. A description of the metrics used is provided in [13]. Table 2 and Table 3 show the scores of our
                                |            MIT300
                                | AUC  | SIM  | EMD  | CC   | NSS  | KL
Itti-Koch [10], implem. by [19] | 0.75 | 0.44 | 4.26 | 0.37 | 0.97 | 1.03
AIM [16]                        | 0.77 | 0.40 | 4.73 | 0.31 | 0.79 | 1.18
Judd Model [1]                  | 0.81 | 0.42 | 4.45 | 0.47 | 1.18 | 1.12
AWS [24]                        | 0.74 | 0.43 | 4.62 | 0.37 | 1.01 | 1.07
eDN [18]                        | 0.82 | 0.44 | 4.56 | 0.45 | 1.14 | 1.14
EYMOL                           | 0.77 | 0.46 | 3.64 | 0.43 | 1.06 | 1.53

Table 2: Results on MIT300 [12] provided by the MIT Saliency Benchmark Team [15]. The models are sorted
chronologically. In bold, the best results for each metric and benchmark.
                                |         CAT2000-TEST
                                | AUC  | SIM  | EMD  | CC   | NSS  | KL
Itti-Koch [10], implem. by [19] | 0.77 | 0.48 | 3.44 | 0.42 | 1.06 | 0.92
AIM [16]                        | 0.76 | 0.44 | 3.69 | 0.36 | 0.89 | 1.13
Judd Model [1]                  | 0.84 | 0.46 | 3.60 | 0.54 | 1.30 | 0.94
AWS [24]                        | 0.76 | 0.49 | 3.36 | 0.42 | 1.09 | 0.94
eDN [18]                        | 0.85 | 0.52 | 2.64 | 0.54 | 1.30 | 0.97
EYMOL                           | 0.83 | 0.61 | 1.91 | 0.72 | 1.78 | 1.67

Table 3: Results on CAT2000 [11] provided by the MIT Saliency Benchmark Team [15]. The models are sorted
chronologically. In bold, the best results for each metric and benchmark.
model compared with five other popular methods [10; 16; 1; 24; 18], which have been selected to be
representative of different approaches. Despite its simplicity, our model reaches the best score in half of
the cases and for different metrics.
3.4 Saliency detection on dynamic scenes
We evaluated our model in a task of saliency detection with the dataset SFU [27]. The dataset contains
12 clips and fixations of 15 observers, each of whom watched every video twice. Table 4 provides
a comparison with four other models. Also in this case, despite its simplicity and even though it was not
designed for this specific task, our model competes well with state-of-the-art models. Our model can
easily be run in real time to produce an attentive scanpath. In some favorable cases, it shows evidence
of tracking moving objects in the scene.
                  SFU Eye-Tracking Database
         | EYMOL | Surprise [17] | Judd Model [1] | Itti-Koch [10] | HEVC [28]
Mean AUC | 0.817 | 0.70          | 0.66           | 0.77           | 0.83
Mean NSS | 1.015 | 0.28          | 0.48           | 1.06           | 1.41

Table 4: Results on the video dataset SFU [27]. Scores are calculated as the mean of the AUC and NSS metrics over
all frames of each clip, and then averaged over the 12 clips.
4 Conclusions
In this paper we investigated how human attention mechanisms emerge in the early stage of vision,
which we assume completely data-driven. The proposed model consists of differential equations,
which provide a real-time model of scanpath. These equations are derived in a generalized framework
of least action, which nicely resembles related derivations of laws in physics. A remarkable novelty
concerns the unified interpretation of curiosity-driven movements and the brightness invariance term
for fixation and tracking, that are regarded as mechanisms that jointly contribute to optimize the
acquisition of visual information. Experimental results on both image and video datasets of saliency
are very promising, especially if we consider that the proposed theory offers truly a model of eye
movements, whereas the computation of the saliency maps only arises as a byproduct.
In future work, we intend to investigate behavioural data, not only in terms of saliency maps, but also
by comparing actual generated scanpaths with human data in order to discover temporal correlations.
We aim at providing the integration of the presented model with a theory of feature extraction that is
still expressed in terms of variational-based laws of learning [29].
Appendix: Euler-Lagrange equations
In this section we explicitly compute the differential laws of visual attention that describe the visual
attention scanpath, as the Euler-Lagrange equations of the action functional (7).
First, we compute the partial derivatives of the different contributions w.r.t. x, in order to compute
the exact contributions of (11). For the retina boundaries,
V_x = k Σ_{i=1,2} [ −2(l_i − x_i)·[x_i > l_i] + 2x_i·[x_i < 0] ]    (14)

The curiosity term (4) gives

C_x = 2cos²(ωt)·b_x·b_xx + 2sin²(ωt)·p_x·p_xx    (15)

For the term of brightness invariance,

B_x = ∂/∂x (b_t + b_x·ẋ)²    (16)
    = 2(b_t + b_x·ẋ)(b_tx + b_xx·ẋ)    (17)

Since we assume b ∈ C²(t, x), by Schwarz's theorem⁵, we have that b_tx = b_xt, so that

B_x = 2(b_t + b_x·ẋ)(b_xt + b_xx·ẋ)    (18)
    = 2·ḃ·ḃ_x    (19)
We proceed by computing the contribution in (12). The derivative w.r.t. ẋ of the brightness invariance
term is

B_ẋ = ∂/∂ẋ (b_t + b_x·ẋ)²    (20)
    = 2(b_t + b_x·ẋ)·b_x    (21)
    = 2·ḃ·b_x    (22)

So that, the total derivative w.r.t. t can be written as

d/dt B_ẋ = 2(b̈·b_x + ḃ·ḃ_x)    (23)
We observe that b̈ ≡ b̈(t, x, ẋ, ẍ) is the only term which depends on the second derivatives of x. Since
we are interested in expressing the Euler-Lagrange equations in an explicit form for the variable ẍ, we explore more closely its
contribution

b̈ = d/dt ḃ(t, x, ẋ)    (24)
  = d/dt (b_t + b_x·ẋ)    (25)
  = ḃ_t + ḃ_x·ẋ + b_x·ẍ    (26)

Substituting it in (23) we have

d/dt B_ẋ = 2[(ḃ_t + ḃ_x·ẋ + b_x·ẍ)·b_x + ḃ·ḃ_x]    (28)
         = 2[(ḃ_t + ḃ_x·ẋ)·b_x + ḃ·ḃ_x + (b_x·ẍ)·b_x]    (29)

⁵ Schwarz's theorem states that, if f : Rⁿ → R has continuous second partial derivatives at any given point
in Rⁿ, then ∀ i, j ∈ {1, ..., n} it holds f_{x_i x_j} = f_{x_j x_i}.
So that, from (12) we get

d/dt ∂L/∂ẋ = m·ẍ − 2λ[(ḃ_t + ḃ_x·ẋ)·b_x + ḃ·ḃ_x + (b_x·ẍ)·b_x]    (30)

Euler-Lagrange equations. Combining (11) and (30), we get the Euler-Lagrange equation of attention

m·ẍ − 2λ[(ḃ_t + ḃ_x·ẋ)·b_x + ḃ·ḃ_x + (b_x·ẍ)·b_x] = ηC_x − V_x − λB_x    (31)

In order to obtain an explicit form for the variable ẍ, we re-write the equation so as to move to the left all
contributions which do not depend on that variable:

m·ẍ − 2λ(b_x·ẍ)·b_x = ηC_x − V_x − λB_x + 2λ[(ḃ_t + ḃ_x·ẋ)·b_x + ḃ·ḃ_x]    (32)
                    = ηC_x − V_x + 2λ(ḃ_t + ḃ_x·ẋ)·b_x =: A = (A_1, A_2)    (33)

where the last step uses B_x = 2·ḃ·ḃ_x from (19).
In matrix form, the equation is

(m·ẍ_1, m·ẍ_2)ᵀ − 2λ(b_x1·ẍ_1 + b_x2·ẍ_2)·(b_x1, b_x2)ᵀ = (A_1, A_2)ᵀ

which gives us the system of two differential equations

m·ẍ_1 − 2λ(b_x1·ẍ_1 + b_x2·ẍ_2)·b_x1 = A_1    (34)
m·ẍ_2 − 2λ(b_x1·ẍ_1 + b_x2·ẍ_2)·b_x2 = A_2    (35)

Grouping by the same variable,

(m − 2λb_x1²)·ẍ_1 − 2λ(b_x1·b_x2)·ẍ_2 = A_1
−2λ(b_x1·b_x2)·ẍ_1 + (m − 2λb_x2²)·ẍ_2 = A_2    (36)
We define

D  = det | m − 2λb_x1²     −2λ b_x1·b_x2 |
         | −2λ b_x1·b_x2    m − 2λb_x2²  |    (37)

D_1 = det | A_1   −2λ b_x1·b_x2 |        D_2 = det | m − 2λb_x1²     A_1 |
          | A_2    m − 2λb_x2²  |                  | −2λ b_x1·b_x2    A_2 |    (38)

By Cramer's method we get the differential equations of visual attention for the two spatial components, i.e.

ẍ_1 = D_1 / D
ẍ_2 = D_2 / D    (39)
Notice that this raises a further condition on the parameter λ. In particular, in the case where the values of
b(t, x) are normalized in the range [0, 1], it imposes the choice

D ≠ 0 ⟹ λ < m/4    (40)

In fact,

D = (m − 2λb_x1²)(m − 2λb_x2²) − 4λ²(b_x1·b_x2)²    (41)
  = m·(m − 2λ(b_x1² + b_x2²))    (42)

For values of b_x = 0, we have that

D = m² > 0    (43)

so that, ∀t, we must impose

D > 0.    (44)

If λ > 0, then

m − 2λ(b_x1² + b_x2²) > 0    (45)
λ < m / (2(b_x1² + b_x2²))    (46)

The quantity on the right reaches its minimum at m/4, so that the condition

0 < λ < m/4    (47)

guarantees the well-posedness of the problem.
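For illustration, the pointwise update (39) can be evaluated by Cramer's rule exactly as derived, with the well-posedness guard (47); variable names are mine:

```python
import numpy as np

def acceleration(A, bx, m=1.0, lam=0.2):
    """Solve the 2x2 linear system (36) for (xdd1, xdd2) by Cramer's rule.
    A: right-hand side (A_1, A_2); bx: brightness gradient (b_x1, b_x2);
    requires lam < m/4 when b is normalized in [0, 1], see (47)."""
    assert 0 < lam < m / 4, "well-posedness condition (47) violated"
    b1, b2 = bx
    M = np.array([[m - 2 * lam * b1 ** 2, -2 * lam * b1 * b2],
                  [-2 * lam * b1 * b2,    m - 2 * lam * b2 ** 2]])
    D = np.linalg.det(M)                  # equals m*(m - 2*lam*(b1^2 + b2^2)), eq. (42)
    D1 = np.linalg.det(np.array([[A[0], M[0, 1]], [A[1], M[1, 1]]]))
    D2 = np.linalg.det(np.array([[M[0, 0], A[0]], [M[1, 0], A[1]]]))
    return np.array([D1 / D, D2 / D])
```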
References
[1] Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to Predict Where Humans Look. IEEE International Conference on Computer Vision (2009)
[2] Itti, L., Koch, C.: Computational modelling of visual attention. Nature Reviews Neuroscience, vol 3, n 3, pp 194-203. (2001)
[3] Connor, C.E., Egeth, H.E., Yantis, S.: Visual Attention: Bottom-Up Versus Top-Down. Current Biology, vol 14, n 19, pp R850-R852. (2004)
[4] McMains, S., Kastner, S.: Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex. Society for Neuroscience, vol 31, n 2, pp 587-597. (2011)
[5] Hainline, L., Turkel, J., Abramov, I., Lemerise, E., Harris, C.M.: Characteristics of saccades in human infants. Vision Research, vol 24, n 12, pp 1771-1780. (1984)
[6] Le Meur, O., Liu, Z.: Saccadic model of eye movements for free-viewing condition. Vision Research, vol 116, pp 152-164. (2015)
[7] Gelfand, I.M., Fomin, S.V.: Calculus of Variations. Englewood: Prentice Hall (1993)
[8] Borji, A., Itti, L.: State-of-the-Art in Visual Attention Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 35, n 1. (2013)
[9] Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Springer Human Neurobiology, vol 4, n 4, pp 219-227. (1985)
[10] Itti, L., Koch, C.: A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 20, n 11. (1998)
[11] Borji, A., Itti, L.: CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research. arXiv:1505.03581. (2015)
[12] Judd, T., Durand, F., Torralba, A.: A Benchmark of Computational Models of Saliency to Predict Human Fixations. MIT Technical Report. (2012)
[13] Bylinskii, Z., Judd, T., Oliva, A., Torralba, A.: What do different evaluation metrics tell us about saliency models? arXiv:1604.03605. (2016)
[14] Treisman, A.M., Gelade, G.: A Feature Integration Theory of Attention. Cognitive Psychology, vol 12, pp 97-136. (1980)
[15] Bylinskii, Z., Judd, T., Borji, A., Itti, L., Durand, F., Torralba, A.: MIT Saliency Benchmark. http://saliency.mit.edu/
[16] Bruce, N., Tsotsos, J.: Attention based on information maximization. J. Vis., vol 7, n 9. (2007)
[17] Itti, L., Baldi, P.: Bayesian Surprise Attracts Human Attention. Vision Research, vol 49, n 10, pp 1295-1306. (2009)
[18] Vig, E., Dorr, M., Cox, D.: Large-Scale Optimization of Hierarchical Features for Saliency Prediction in Natural Images. IEEE Conference on Computer Vision and Pattern Recognition. (2014)
[19] Harel, J.: A Saliency Implementation in MATLAB. http://www.klab.caltech.edu/harel/share/gbvs.php
[20] Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence, vol 17, n 1, pp 185-203. (1981)
[21] Jones, E., Oliphant, T., Peterson, P.: SciPy: Open source scientific tools for Python. http://www.scipy.org/. (2001)
[22] Zhang, J., Sclaroff, S.: Saliency detection: a Boolean map approach. Proc. of the IEEE International Conference on Computer Vision. (2013)
[23] Cornia, M., Baraldi, L., Serra, G., Cucchiara, R.: Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model. http://arxiv.org/abs/1611.09571. (2016)
[24] Garcia-Diaz, A., Leborán, V., Fdez-Vidal, X.R., Pardo, X.M.: On the relationship between optical variability, visual saliency, and eye fixations. Journal of Vision, vol 12, n 6, pp 17. (2012)
[25] Tatler, B.W., Baddeley, R.J., Gilchrist, I.D.: Visual correlates of fixation selection: Effects of scale and time. Vision Research, vol 45, n 5, pp 643-659. (2005)
[26] Kruthiventi, S.S.S., Ayush, K., Venkatesh, R.: DeepFix. arXiv:1510.02927. (2015)
[27] Hadizadeh, H., Enriquez, M.J., Bajic, I.V.: Eye-Tracking Database for a Set of Standard Video Sequences. IEEE Transactions on Image Processing. (2012)
[28] Xu, M., Jiang, L., Ye, Z., Wang, Z.: Learning to Detect Video Saliency With HEVC Features. IEEE Transactions on Image Processing. (2017)
[29] Maggini, M., Rossi, A.: On-line Learning on Temporal Manifolds. AI*IA 2016 Advances in Artificial Intelligence, Springer International Publishing, pp 321-333. (2016)
Recursive Sampling for the Nyström Method
Cameron Musco
MIT EECS
[email protected]
Christopher Musco
MIT EECS
[email protected]
Abstract
We give the first algorithm for kernel Nystr?m approximation that runs in linear
time in the number of training points and is provably accurate for all kernel matrices,
without dependence on regularity or incoherence conditions. The algorithm projects
the kernel onto a set of s landmark points sampled by their ridge leverage scores,
requiring just O(ns) kernel evaluations and O(ns2 ) additional runtime. While
leverage score sampling has long been known to give strong theoretical guarantees
for Nystr?m approximation, by employing a fast recursive sampling scheme, our
algorithm is the first to make the approach scalable. Empirically we show that it
finds more accurate kernel approximations in less time than popular techniques
such as classic Nystr?m approximation and the random Fourier features method.
1 Introduction
The kernel method is a powerful tool for applying linear learning algorithms (SVMs, linear regression,
etc.) to nonlinear problems. The key idea is to map data to a higher dimensional kernel feature space,
where linear relationships correspond to nonlinear relationships in the original data.
Typically this mapping is implicit. A kernel function is used to compute inner products in the
high-dimensional kernel space, without ever actually mapping original data points to the space.
Given n data points x1 , . . . , xn , the n ? n kernel matrix K is formed where Ki,j contains the highdimensional inner product between xi and xj , as computed by the kernel function. All computations
required by a linear learning method are performed using the inner product information in K.
Unfortunately, the transition from linear to nonlinear comes at a high cost. Just generating the entries
of K requires $\Omega(n^2)$ time, which is prohibitive for large datasets.
1.1 Kernel approximation
A large body of work seeks to accelerate kernel methods by finding a compressed, often low-rank, approximation $\tilde K$ to the true kernel matrix K. Techniques include random sampling and
embedding [AMS01, BBV06, ANW14], random Fourier feature methods for shift invariant kernels
[RR07, RR09, LSS13], and incomplete Cholesky factorization [FS02, BJ02].
One of the most popular techniques is the Nyström method, which constructs $\tilde K$ using a subset of "landmark" data points [WS01]. Once $s$ data points are selected, $\tilde K$ (in factored form) takes just $O(ns)$ kernel evaluations and $O(s^3)$ additional time to compute, requires $O(ns)$ space to store, and can be manipulated quickly in downstream applications. E.g., inverting $\tilde K$ takes $O(ns^2)$ time.
The Nystr?m method performs well in practice [YLM+ 12, GM13, TRVR16], is widely implemented
[HFH+ 09, PVG+ 11, IBM14], and is used in a number of applications under different names such as
?landmark isomap? [DST03] and ?landmark MDS? [Pla05]. In the classic variant, landmark points are
selected uniformly at random. However, significant research seeks to improve performance via data-dependent sampling that selects landmarks which more closely approximate the full kernel matrix than uniformly sampled landmarks [SS00, DM05, ZTK08, BW09, KMT12, WZ13, GM13, LJS16].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Theoretical work has converged on leverage score based approaches, as they give the strongest
provable guarantees for both kernel approximation [DMM08, GM13] and statistical performance
in downstream applications [AM15, RCR15, Wan16]. Leverage scores capture how important an
individual data point is in composing the span of the kernel matrix.
Unfortunately, these scores are prohibitively expensive to compute. All known approximation schemes
require $\Omega(n^2)$ time or only run quickly under strong conditions on K, e.g. good conditioning or data "incoherence" [DMIMW12, GM13, AM15, CLV16]. Hence, leverage score-based approaches
remain largely in the domain of theory, with limited practical impact [KMT12, LBKL15, YPW15].
1.2 Our contributions
In this work, we close the gap between strong approximation bounds and efficiency: we present a
new Nystr?m algorithm based on recursive leverage score sampling which achieves the ?best of both
worlds?: it produces kernel approximations provably matching the high accuracy of leverage score
methods while only requiring O(ns) kernel evaluations and O(ns2 ) runtime for s landmark points.
Theoretically, this runtime is surprising. In the typical case when $s \ll n$, the algorithm evaluates just
a small subset of K, ignoring most of the kernel space inner products. Yet its performance guarantees
hold for general kernels, requiring no assumptions on coherence or regularity.
Empirically, the runtime?s linear dependence on n means that our method is the first leverage
score algorithm that can compete with the most commonly implemented techniques, including the
classic uniform sampling Nystr?m method and random Fourier features sampling [RR07]. Since our
algorithm obtains higher quality samples, we show experimentally that it outperforms these methods
on benchmark datasets: it can obtain as accurate a kernel approximation in significantly less time.
Our approximations also have lower rank, so they can be stored in less space and processed more
quickly in downstream learning tasks.
1.3 Paper outline
Our recursive sampling algorithm is built on top of a Nystr?m scheme of Alaoui and Mahoney that
samples landmark points based on their ridge leverage scores [AM15]. After reviewing preliminaries
in Section 2, in Section 3 we analyze this scheme, which we refer to as RLS-Nystr?m. To simplify
prior work, which studies the statistical performance of RLS-Nystr?m for specific kernel learning
tasks [AM15, RCR15, Wan16], we prove a strong, application-independent approximation guarantee: for any $\lambda$, if $\tilde K$ is constructed with $s = \Theta(d_{\mathrm{eff}}^\lambda \log d_{\mathrm{eff}}^\lambda)$ samples$^1$, where $d_{\mathrm{eff}}^\lambda = \mathrm{tr}\big(K(K + \lambda I)^{-1}\big)$ is the so-called "$\lambda$-effective dimensionality" of K, then with high probability, $\|K - \tilde K\|_2 \leq \lambda$.
In Appendix E, we show that this guarantee implies bounds on the statistical performance of RLS-Nyström for kernel ridge regression and canonical correlation analysis. We also use it to prove new results on the performance of RLS-Nyström for kernel rank-k PCA and k-means clustering; in both
cases just O(k log k) samples are required to obtain a solution with good accuracy.
After affirming the favorable theoretical properties of RLS-Nystr?m, in Section 4 we show that its
runtime can be significantly improved using a recursive sampling approach. Intuitively our algorithm
is simple. We show how to approximate the kernel ridge leverage scores using a uniform sample of 1/2 of our input points. While the subsampled kernel matrix still has a prohibitive $n^2/4$ entries, we can
recursively approximate it, using our same sampling algorithm. If our final Nystr?m approximation
will use s landmarks, the recursive approximation only needs rank O(s), which lets us estimate
the ridge leverage scores of the original kernel matrix in just $O(ns^2)$ time. Since $n$ is cut in half at each level of recursion, our total runtime is $O\!\left(ns^2 + \frac{ns^2}{2} + \frac{ns^2}{4} + \ldots\right) = O(ns^2)$, significantly improving upon the method of [AM15], which takes $\Omega(n^3)$ time in the worst case.
Our approach builds on recent work on iterative sampling methods for approximate linear algebra
[CLM+ 15, CMM17]. While the analysis in the kernel setting is technical, our final algorithm is
$^1$This is within a log factor of the best possible for any low-rank approximation with error $\lambda$.
simple and easy to implement. We present and test a parameter-free variation of Recursive RLS-Nyström in Section 5, confirming superior performance compared to existing methods.
2 Preliminaries
Consider an input space $\mathcal{X}$ and a positive semidefinite kernel function $\mathcal{K} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Let $\mathcal{F}$ be an associated reproducing kernel Hilbert space and $\phi : \mathcal{X} \to \mathcal{F}$ be a (typically nonlinear) feature map such that for any $x, y \in \mathcal{X}$, $\mathcal{K}(x, y) = \langle\phi(x), \phi(y)\rangle_\mathcal{F}$. Given a set of $n$ input points $x_1, \ldots, x_n \in \mathcal{X}$, define the kernel matrix $K \in \mathbb{R}^{n \times n}$ by $K_{i,j} = \mathcal{K}(x_i, x_j)$.
It is often natural to consider the kernelized data matrix that generates K. Informally, let $\Phi \in \mathbb{R}^{n \times d'}$ be the matrix containing $\phi(x_1), \ldots, \phi(x_n)$ as its rows (note that $d'$ may be infinite). $K = \Phi\Phi^T$. While we use $\Phi$ for intuition, in our formal proofs we replace it with any matrix $B \in \mathbb{R}^{n \times n}$ satisfying $BB^T = K$ (e.g. a Cholesky factor). Such a B is guaranteed to exist since K is positive semidefinite.
We repeatedly use the singular value decomposition, which allows us to write any rank $r$ matrix $M \in \mathbb{R}^{n \times d}$ as $M = U\Sigma V^T$, where $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{d \times r}$ have orthogonal columns (the left and right singular vectors of M), and $\Sigma \in \mathbb{R}^{r \times r}$ is a positive diagonal matrix containing the singular values: $\sigma_1(M) \geq \sigma_2(M) \geq \ldots \geq \sigma_r(M) > 0$. M's pseudoinverse is given by $M^+ = V\Sigma^{-1}U^T$.
2.1 Nyström approximation
The Nystr?m method selects a subset of ?landmark? points and uses them to construct a low-rank
approximation to K. Given a matrix $S \in \mathbb{R}^{n \times s}$ that has a single entry in each column equal to 1, so that KS is a subset of $s$ columns from K, the associated Nyström approximation is:

$$\tilde K = KS(S^TKS)^+S^TK. \qquad (1)$$

$\tilde K$ can be stored in $O(ns)$ space by separately storing $KS \in \mathbb{R}^{n \times s}$ and $(S^TKS)^+ \in \mathbb{R}^{s \times s}$. Furthermore, the factors can be computed using just $O(ns)$ evaluations of the kernel inner product to form KS and $O(s^3)$ time to compute $(S^TKS)^+$. Typically $s \ll n$ so these costs are significantly lower than the cost to form and store the full kernel matrix K.
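As a concrete illustration, the factored form of (1) is a few lines of NumPy. The `nystrom_factors` helper and the toy linear-kernel example are our own illustrative names, not code from the paper:

```python
import numpy as np

def nystrom_factors(K, idx):
    # Nystrom factors for landmark set idx: K_tilde = C @ W @ C.T, eq. (1)
    C = K[:, idx]                             # K S, an n x s matrix
    W = np.linalg.pinv(K[np.ix_(idx, idx)])   # (S^T K S)^+, an s x s matrix
    return C, W

# Toy check: for a linear kernel of rank 2, two linearly independent
# landmarks span the feature space, so the approximation is exact.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
K = X @ X.T
C, W = nystrom_factors(K, [0, 1])
K_tilde = C @ W @ C.T
print(np.allclose(K_tilde, K))
```

Note that only the two factors need to be stored; the $n \times n$ product `C @ W @ C.T` is formed here purely to check correctness.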
We view Nyström approximation as a low-rank approximation to the dataset in feature space. Recalling that $K = \Phi\Phi^T$, $S$ selects $s$ kernelized data points $S^T\Phi$ and we approximate $\Phi$ using its projection onto these points. Informally, let $P_S \in \mathbb{R}^{d' \times d'}$ be the orthogonal projection onto the row span of $S^T\Phi$. We approximate $\Phi$ by $\tilde\Phi := \Phi P_S$. We can write $P_S = \Phi^TS(S^T\Phi\Phi^TS)^+S^T\Phi$. Since it is an orthogonal projection, $P_SP_S^T = P_S^2 = P_S$, and so we can write:

$$\tilde K = \tilde\Phi\tilde\Phi^T = \Phi P_S^2\Phi^T = \Phi\Phi^TS(S^T\Phi\Phi^TS)^+S^T\Phi\Phi^T = KS(S^TKS)^+S^TK.$$
This recovers the standard Nyström approximation (1).
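The projection view can be checked numerically for a linear kernel, where $\Phi$ is available explicitly. A small sketch (variable names are ours):

```python
import numpy as np

# With a linear kernel the feature matrix Phi is explicit, so we can
# compare Phi P_S Phi^T against the Nystrom formula KS(S^T K S)^+ S^T K.
rng = np.random.default_rng(4)
Phi = rng.standard_normal((7, 3))
K = Phi @ Phi.T
S = np.eye(7)[:, [1, 4]]                   # selects landmarks 1 and 4

STPhi = S.T @ Phi                          # the s sampled feature rows
P = STPhi.T @ np.linalg.pinv(STPhi @ STPhi.T) @ STPhi  # projection P_S
K_proj = Phi @ P @ Phi.T                   # low-rank projection of the data

K_nys = K @ S @ np.linalg.pinv(S.T @ K @ S) @ S.T @ K  # formula (1)
print(np.allclose(K_proj, K_nys))
```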
3 The RLS-Nyström method
We now introduce the RLS-Nystr?m method, which uses ridge leverage score sampling to select
landmark data points, and discuss its strong approximation guarantees for any kernel matrix K.
3.1 Ridge leverage scores
In classical Nystr?m approximation (1), S is formed by sampling data points uniformly at random.
Uniform sampling can work in practice, but it only gives theoretical guarantees under strong regularity
or incoherence assumptions on K [Git11]. It will fail for many natural kernel matrices where the
relative "importance" of points is not uniform across the dataset.
For example, imagine a dataset where points fall into several clusters, but one of the clusters is much
larger than the rest. Uniform sampling will tend to oversample landmarks from the large cluster while
undersampling or possibly missing smaller but still important clusters. Approximation of K and
learning performance (e.g. classification accuracy) will decline as a result.
(a) Uniform landmark sampling.
(b) Improved landmark sampling.
Figure 1: Uniform sampling for Nystr?m approximation can oversample from denser parts of the
dataset. A better Nystr?m scheme will select points that more equally cover the relevant data.
To combat this issue, alternative methods compute a measure of point importance that is used to
select landmarks. For example, one heuristic applies k-means clustering to the input and takes the
cluster centers as landmarks [ZTK08]. A large body of theoretical work measures importance using
variations on the statistical leverage scores. One natural variation is the ridge leverage score:
Definition 1 (Ridge leverage scores [AM15]). For any $\lambda > 0$, the $\lambda$-ridge leverage score of data point $x_i$ with respect to the kernel matrix K is defined as

$$l_i^\lambda(K) := \big(K(K + \lambda I)^{-1}\big)_{i,i}, \qquad (2)$$

where I is the $n \times n$ identity matrix. For any $B \in \mathbb{R}^{n \times n}$ satisfying $BB^T = K$, we can also write:

$$l_i^\lambda(K) = b_i^T(B^TB + \lambda I)^{-1}b_i, \qquad (3)$$

where $b_i^T \in \mathbb{R}^{1 \times n}$ is the $i$th row of B.
For conciseness we typically write $l_i^\lambda(K)$ as $l_i^\lambda$. To check that (2) and (3) are equivalent, note that $b_i^T(B^TB + \lambda I)^{-1}b_i = \big(B(B^TB + \lambda I)^{-1}B^T\big)_{i,i}$. Using the SVD to write $B = U\Sigma V^T$, and accordingly $K = U\Sigma^2U^T$, confirms that $K(K + \lambda I)^{-1} = B(B^TB + \lambda I)^{-1}B^T = U\Sigma^2\big(\Sigma^2 + \lambda I\big)^{-1}U^T$.
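Both forms of the ridge leverage scores are easy to compute directly on a toy example. The sketch below uses a slightly jittered Cholesky factor for B (an implementation convenience of ours, since the toy K is rank-deficient):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 8, 0.5
X = rng.standard_normal((n, 3))
K = X @ X.T                                    # rank-3 PSD kernel matrix

# Form (2): l_i = (K (K + lam I)^{-1})_{ii}
scores_2 = np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))

# Form (3): l_i = b_i^T (B^T B + lam I)^{-1} b_i, with B B^T = K.
# K is rank deficient, so jitter slightly to obtain a Cholesky factor.
B = np.linalg.cholesky(K + 1e-8 * np.eye(n))
M = np.linalg.inv(B.T @ B + lam * np.eye(n))
scores_3 = np.einsum('ij,jk,ik->i', B, M, B)   # row-wise quadratic forms

print(np.allclose(scores_2, scores_3, atol=1e-6))
```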
It is not hard to check (see [CLM+15]) that the ridge scores can be defined alternatively as:

$$l_i^\lambda = \min_{y \in \mathbb{R}^n} \frac{1}{\lambda}\big\|b_i^T - y^TB\big\|_2^2 + \|y\|_2^2. \qquad (4)$$
This formulation provides better insight into these scores. Since $BB^T = K$, any kernel algorithm effectively works with B's rows as data points. The ridge scores reflect the relative importance of these rows. From (4) it's clear that $l_i^\lambda \leq 1$, since we can set $y$ to the $i$th standard basis vector. $b_i$ will have score $\ll 1$ (i.e. is less important) when it's possible to find a more "spread out" $y$ that uses other rows in B to approximately reconstruct $b_i$; in other words, when the row is less unique.
3.2 Sum of ridge leverage scores
As is standard in leverage score methods, we don?t directly select landmarks to be the points with the
highest scores. Instead, we sample each point with probability proportional to $l_i^\lambda$. Accordingly, the number of landmarks selected, which controls $\tilde K$'s rank, is a random variable with expectation equal to the sum of the $\lambda$-ridge leverage scores. To ensure compact kernel approximations, we want this
sum to be small. Immediately from Definition 1, we have:
Fact 2. $\sum_{i=1}^n l_i^\lambda(K) = \mathrm{tr}\big(K(K + \lambda I)^{-1}\big)$.
We denote $d_{\mathrm{eff}}^\lambda := \mathrm{tr}\big(K(K + \lambda I)^{-1}\big)$. $d_{\mathrm{eff}}^\lambda$ is a natural quantity, referred to as the "effective dimension" or "degrees of freedom" for a ridge regression problem on K with regularization $\lambda$ [HTF02, Zha06]. $d_{\mathrm{eff}}^\lambda$ increases monotonically as $\lambda$ decreases. For any fixed $\lambda$ it is essentially the smallest possible rank achievable for $\tilde K$ satisfying the approximation guarantee given by RLS-Nyström: $\|K - \tilde K\|_2 < \lambda$.
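Fact 2 makes $d_{\mathrm{eff}}^\lambda$ trivial to compute for small matrices. A sketch (the `d_eff` helper name is ours) that also checks the monotonicity in $\lambda$:

```python
import numpy as np

def d_eff(K, lam):
    # Effective dimension tr(K (K + lam I)^{-1}), per Fact 2
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + lam * np.eye(n)))

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 5))
K = X @ X.T                                 # rank-5 linear kernel
# d_eff approaches rank(K) as lam -> 0 and shrinks as lam grows.
print(d_eff(K, 1e-8), d_eff(K, 0.1), d_eff(K, 1.0))
```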
3.3 The basic sampling algorithm
We can now introduce the RLS-Nystr?m method as Algorithm 1. We allow sampling each point by
any probability greater than $l_i^\lambda$, which is useful later when we compute the scores approximately. Oversampling landmarks can only improve $\tilde K$'s accuracy. It could cause us to take more samples, but we will always ensure that the sum of our approximate ridge leverage scores is not too large.
Algorithm 1 RLS-NYSTRÖM SAMPLING
input: $x_1, \ldots, x_n \in \mathcal{X}$, kernel matrix K, ridge parameter $\lambda > 0$, failure probability $\delta \in (0, 1/8)$
1: Compute an over-approximation, $\tilde l_i^\lambda \geq l_i^\lambda$, of the $\lambda$-ridge leverage score of each $x_1, \ldots, x_n$.
2: Set $p_i := \min\big\{1, \tilde l_i^\lambda \cdot 16\log\big(\sum_i \tilde l_i^\lambda/\delta\big)\big\}$.
3: Construct $S \in \mathbb{R}^{n \times s}$ by sampling $x_1, \ldots, x_n$ each independently with probability $p_i$. In other words, for each $i$ add a column to S with a 1 in position $i$ with probability $p_i$.
4: return the Nyström factors $KS \in \mathbb{R}^{n \times s}$ and $(S^TKS)^+ \in \mathbb{R}^{s \times s}$.
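For intuition, Algorithm 1 with exact ridge leverage scores (the expensive step the rest of the paper avoids) can be sketched as follows; function and variable names are our own:

```python
import numpy as np

def rls_nystrom_sample(K, lam, delta, rng):
    # Step 1: exact lambda-ridge leverage scores (Definition 1, form (2))
    n = K.shape[0]
    l = np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))
    # Step 2: sampling probabilities
    p = np.minimum(1.0, 16 * l * np.log(l.sum() / delta))
    # Step 3: independent sampling of landmarks
    idx = np.flatnonzero(rng.random(n) < p)
    # Step 4: Nystrom factors K S and (S^T K S)^+
    C = K[:, idx]
    W = np.linalg.pinv(C[idx, :], rcond=1e-8)
    return idx, C, W

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
K = X @ X.T
idx, C, W = rls_nystrom_sample(K, lam=1.0, delta=0.1,
                               rng=np.random.default_rng(1))
K_tilde = C @ W @ C.T
# K - K_tilde is always PSD for a Nystrom approximation.
print(len(idx), np.linalg.eigvalsh(K - K_tilde).min())
```

Computing the exact scores takes $\Omega(n^3)$ time here, which is exactly the cost Section 4's recursive scheme removes.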
3.4 Accuracy bounds
We show that RLS-Nyström produces $\tilde K$ which spectrally approximates K up to a small additive error. This is the strongest type of approximation offered by any known Nyström method [GM13]. It guarantees provable accuracy when $\tilde K$ is used in place of K in many learning applications [CMT10].

Theorem 3 (Spectral error approximation). For any $\lambda > 0$ and $\delta \in (0, 1/8)$, Algorithm 1 returns $S \in \mathbb{R}^{n \times s}$ such that with probability $1 - \delta$, $s \leq 2\sum_i p_i$ and $\tilde K = KS(S^TKS)^+S^TK$ satisfies:

$$\tilde K \preceq K \preceq \tilde K + \lambda I. \qquad (5)$$

When ridge scores are computed exactly, $\sum_i p_i = O\big(d_{\mathrm{eff}}^\lambda \log(d_{\mathrm{eff}}^\lambda/\delta)\big)$.

$\preceq$ denotes the Loewner ordering: $M \preceq N$ means that $N - M$ is positive semidefinite. Note that (5) immediately implies the well studied (see e.g. [GM13]) spectral norm guarantee, $\|K - \tilde K\|_2 \leq \lambda$.
Intuitively, Theorem 3 guarantees that $\tilde K$ well approximates the top of K's spectrum (i.e. any eigenvalues $> \lambda$) while losing information about smaller, less important eigenvalues. Due to space
constraints, we defer the proof to Appendix A. It relies on the view of Nystr?m approximation as a
low-rank projection of the kernelized data (see Section 2.1) and we use an intrinsic dimension matrix
Bernstein bound to show accuracy of the sampled approximation.
Often the regularization parameter $\lambda$ is specified for a learning task, and for near optimal performance on this task, we set the approximation factor in Theorem 3 to $\epsilon\lambda$. In this case we have:
Corollary 4 (Tighter spectral error approximation). For any $\lambda > 0$, $\epsilon > 0$, and $\delta \in (0, 1/8)$, Algorithm 1 run with ridge parameter $\epsilon\lambda$ returns $S \in \mathbb{R}^{n \times s}$ such that with probability $1 - \delta$, $s = O\big(\frac{d_{\mathrm{eff}}^\lambda}{\epsilon}\log\frac{d_{\mathrm{eff}}^\lambda}{\epsilon\delta}\big)$ and $\tilde K = KS(S^TKS)^+S^TK$ satisfies $\tilde K \preceq K \preceq \tilde K + \epsilon\lambda I$.

Proof. This follows from Theorem 3 by noting $d_{\mathrm{eff}}^{\epsilon\lambda} \leq d_{\mathrm{eff}}^\lambda/\epsilon$ since $(K + \epsilon\lambda I)^{-1} \preceq \frac{1}{\epsilon}(K + \lambda I)^{-1}$.
Corollary 4 suffices to prove that $\tilde K$ can be used in place of K without sacrificing performance on kernel ridge regression and canonical correlation tasks [AM15, Wan16]. We also use it to prove
a projection-cost preservation guarantee (Theorem 12, Appendix B), which gives approximation
bounds for kernel PCA and k-means clustering. Projection-cost preservation has proven a powerful
concept in the matrix sketching literature [FSS13, CEM+ 15, CMM17, BWZ16, CW17] and we hope
that extending the guarantee to kernels leads to applications beyond those considered in this work.
Our results on downstream learning bounds that can be derived from Theorem 3 are summarized in
Table 1. Details can be found in Appendices B and E.
Table 1: Downstream guarantees for $\tilde K$ obtained from RLS-Nyström (Algorithm 1).

Application | Guarantee | Theorem | Space to store $\tilde K$
Kernel Ridge Regression w/ param $\lambda$ | $(1 + \epsilon)$ relative error risk bound | Thm 16 | $\tilde O(n d_{\mathrm{eff}}^\lambda/\epsilon)$
Kernel k-means Clustering | $(1 + \epsilon)$ relative error | Thm 17 | $\tilde O(nk/\epsilon)$
Rank k Kernel PCA | $(1 + \epsilon)$ relative Frob norm error | Thm 18 | $\tilde O(nk/\epsilon)$
Kernel CCA w/ params $\lambda_x, \lambda_y$ | $\epsilon$ additive error | Thm 19 | $\tilde O\big(n(d_{\mathrm{eff}}^{\lambda_x} + d_{\mathrm{eff}}^{\lambda_y})/\epsilon\big)$

For conciseness, $\tilde O(\cdot)$ hides log factors in the failure probability, $d_{\mathrm{eff}}$, and $k$.

4 Recursive sampling for efficient RLS-Nyström
Having established strong approximation guarantees for RLS-Nystr?m, it remains to provide an
efficient implementation. Specifically, Step 1 of Algorithm 1 naively requires $\Omega(n^3)$ time. We show
that significant acceleration is possible using a recursive sampling approach.
4.1 Ridge leverage score approximation via uniform sampling
The key is to estimate the leverage scores by computing (3) approximately, using a uniform sample of
the data points. To ensure accuracy, the sample must be large: a constant fraction of the points. Our
fast runtimes are achieved by recursively approximating this large sample. In Appendix F we prove:
Lemma 5. For any $B \in \mathbb{R}^{n \times n}$ with $BB^T = K$ and $S \in \mathbb{R}^{n \times s}$ chosen by sampling each data point independently with probability 1/2, let $\tilde l_i = b_i^T(B^TSS^TB + \lambda I)^{-1}b_i$ and $p_i = \min\{1, 16\tilde l_i\log(\sum_i \tilde l_i/\delta)\}$ for any $\delta \in (0, 1/8)$. Then with probability at least $1 - \delta$:

1) $\tilde l_i \geq l_i^\lambda$ for all $i$;    2) $\sum_i p_i \leq 64\sum_i l_i^\lambda \log\big(\sum_i l_i^\lambda/\delta\big)$.
The first condition ensures that the approximate scores $\tilde l_i$ suffice for use in Algorithm 1. The second
ensures that the Nystr?m approximation obtained will not have too many sampled landmarks.
Naively computing $\tilde l_i$ in Lemma 5 involves explicitly forming B, requiring $\Omega(n^2)$ time (e.g. $\Omega(n^3)$ via Cholesky decomposition). Fortunately, the following formula (proof in Appx. F) avoids this cost:

Lemma 6. For any sampling matrix $S \in \mathbb{R}^{n \times s}$, and any $\lambda > 0$:

$$\tilde l_i := b_i^T(B^TSS^TB + \lambda I)^{-1}b_i = \frac{1}{\lambda}\Big(K - KS\big(S^TKS + \lambda I\big)^{-1}S^TK\Big)_{i,i}.$$
It follows that we can compute $\tilde l_i$ for all $i$ in $O(ns^2)$ time using just $O(ns)$ kernel evaluations, to compute KS and the diagonal of K.
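Lemma 6 can be sanity-checked numerically; the sketch below (our own, using a jittered Cholesky factor for B) compares both sides of the identity on a small random kernel matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 10, 0.3
X = rng.standard_normal((n, 4))
K = X @ X.T
B = np.linalg.cholesky(K + 1e-8 * np.eye(n))   # B B^T ~= K (jittered)

idx = [0, 2, 5]
S = np.eye(n)[:, idx]

# Left-hand side: b_i^T (B^T S S^T B + lam I)^{-1} b_i
M = np.linalg.inv(B.T @ S @ S.T @ B + lam * np.eye(n))
lhs = np.einsum('ij,jk,ik->i', B, M, B)

# Right-hand side: (1/lam) (K - KS (S^T K S + lam I)^{-1} S^T K)_{ii}
KS = K @ S
mid = np.linalg.inv(S.T @ KS + lam * np.eye(len(idx)))
rhs = np.diag(K - KS @ mid @ KS.T) / lam

print(np.allclose(lhs, rhs, atol=1e-6))
```

Note that the right-hand side touches only the $s$ sampled columns of K plus its diagonal, which is the source of the $O(ns)$ kernel-evaluation bound.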
4.2 Recursive RLS-Nyström
We apply Lemmas 5 and 6 to give an efficient recursive implementation of RLS-Nystr?m, Algorithm
2. We show that the output of this algorithm, S, is sampled according to approximate ridge leverage
scores for K and thus satisfies the approximation guarantee of Theorem 3.
Theorem 7 (Main Result). Let $S \in \mathbb{R}^{n \times s}$ be computed by Algorithm 2. With probability $1 - 3\delta$, $s \leq 384 \cdot d_{\mathrm{eff}}^\lambda \log(d_{\mathrm{eff}}^\lambda/\delta)$, S is sampled by overestimates of the $\lambda$-ridge leverage scores of K, and thus by Theorem 3, the Nyström approximation $\tilde K = KS(S^TKS)^+S^TK$ satisfies:

$$\tilde K \preceq K \preceq \tilde K + \lambda I.$$

Algorithm 2 uses $O(ns)$ kernel evaluations and $O(ns^2)$ computation time.
Algorithm 2 RECURSIVE RLS-NYSTRÖM.
input: $x_1, \ldots, x_m \in \mathcal{X}$, kernel function $\mathcal{K} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, ridge $\lambda > 0$, failure prob. $\delta \in (0, 1/32)$
output: weighted sampling matrix $S \in \mathbb{R}^{m \times s}$
1: if $m \leq 192\log(1/\delta)$ then
2:    return $S := I_{m \times m}$.
3: end if
4: Let $\bar S$ be a random subset of $\{1, \ldots, m\}$, with each $i$ included independently with probability $\frac{1}{2}$.
   . Let $\bar X = \{x_{i_1}, x_{i_2}, \ldots, x_{i_{|\bar S|}}\}$ for $i_j \in \bar S$ be the data sample corresponding to $\bar S$.
   . Let $\bar S = [e_{i_1}, e_{i_2}, \ldots, e_{i_{|\bar S|}}]$ be the sampling matrix corresponding to $\bar S$.
5: $\hat S := \text{RECURSIVE RLS-NYSTRÖM}(\bar X, \mathcal{K}, \lambda, \delta/3)$.
6: $\tilde S := \bar S \cdot \hat S$.
7: Set $\tilde l_i := \frac{3}{2\lambda}\Big(K - K\tilde S\big(\tilde S^TK\tilde S + \lambda I\big)^{-1}\tilde S^TK\Big)_{i,i}$ for each $i \in \{1, \ldots, m\}$.
   . By Lemma 6, this equals $\frac{3}{2}\big(B(B^T\tilde S\tilde S^TB + \lambda I)^{-1}B^T\big)_{i,i}$. K denotes the kernel matrix for data points $\{x_1, \ldots, x_m\}$ and kernel function $\mathcal{K}$.
8: Set $p_i := \min\big\{1, \tilde l_i \cdot 16\log\big(\sum_i \tilde l_i/\delta\big)\big\}$ for each $i \in \{1, \ldots, m\}$.
9: Initially set weighted sampling matrix S to be empty. For each $i \in \{1, \ldots, m\}$, with probability $p_i$, append the column $\frac{1}{\sqrt{p_i}}e_i$ onto S.
10: return S.
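A compact, unoptimized sketch of Algorithm 2 in NumPy follows. The `gauss_kernel` interface, the `c0` base-case parameter (192 in the paper's analysis; smaller here so the recursion is exercised on a small demo), and all names are our own illustrative choices. It forms full kernel columns rather than streaming kernel evaluations, so it is for exposition only:

```python
import numpy as np

def gauss_kernel(A, B, gamma=0.5):
    # Gaussian kernel between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def recursive_rls_nystrom(X, kernel, lam, delta, c0, rng):
    m = X.shape[0]
    if m <= c0 * np.log(1 / delta):                  # steps 1-3: base case
        return np.arange(m), np.ones(m)
    keep = np.flatnonzero(rng.random(m) < 0.5)       # step 4: uniform half
    sub, sub_w = recursive_rls_nystrom(X[keep], kernel, lam, delta / 3, c0, rng)
    idx = keep[sub]                                  # steps 5-6: S_tilde = S_bar . S_hat
    KS = kernel(X, X[idx]) * sub_w                   # weighted columns K S_tilde
    SKS = KS[idx] * sub_w[:, None]                   # S_tilde^T K S_tilde
    mid = np.linalg.inv(SKS + lam * np.eye(len(idx)))
    diag_K = np.array([kernel(x[None], x[None])[0, 0] for x in X])
    # Step 7: approximate ridge leverage scores via Lemma 6
    l = np.clip(1.5 / lam * (diag_K - np.einsum('ij,jk,ik->i', KS, mid, KS)), 0, None)
    p = np.minimum(1.0, 16 * l * np.log(max(l.sum(), 2.0) / delta))  # step 8
    sel = np.flatnonzero(rng.random(m) < p)          # step 9
    return sel, 1.0 / np.sqrt(p[sel])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
idx, w = recursive_rls_nystrom(X, gauss_kernel, lam=0.1, delta=0.1,
                               c0=4, rng=np.random.default_rng(1))
K = gauss_kernel(X, X)
C = K[:, idx]
K_tilde = C @ np.linalg.pinv(C[idx], rcond=1e-8) @ C.T  # unweighted Nystrom (1)
print(len(idx), np.linalg.eigvalsh(K - K_tilde).min())
```

The demo materializes the full K only to check the approximation; the algorithm itself evaluates just the sampled columns and the kernel diagonal at each level.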
Note that in Algorithm 2 the columns of S are weighted by $1/\sqrt{p_i}$. The Nyström approximation $\tilde K = KS(S^TKS)^+S^TK$ is not affected by column weights (see the derivation in Section 2.1). However, the weighting is necessary when the output is used in recursive calls (i.e. when $\hat S$ is used in Step 6).
We prove Theorem 7 via the following intermediate result:
Theorem 8. For any inputs $x_1, \ldots, x_m$, $\mathcal{K}$, $\lambda > 0$ and $\delta \in (0, 1/32)$, let K be the kernel matrix for $x_1, \ldots, x_m$ and kernel function $\mathcal{K}$ and let $d_{\mathrm{eff}}^\lambda(K)$ be the effective dimension of K with parameter $\lambda$. With probability $(1 - 3\delta)$, RECURSIVE RLS-NYSTRÖM outputs S with $s$ columns that satisfies:

$$\frac{1}{2}(B^TB + \lambda I) \preceq (B^TSS^TB + \lambda I) \preceq \frac{3}{2}(B^TB + \lambda I) \quad \text{for any } B \text{ with } BB^T = K. \qquad (6)$$

Additionally, $s \leq s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta)$, where $s_{\max}(w, z) := 384 \cdot (w + 1)\log\big((w + 1)/z\big)$. The algorithm uses $\leq c_1\,m\,s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta)$ kernel evaluations and $\leq c_2\,m\,s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta)^2$ additional computation time, where $c_1$ and $c_2$ are fixed universal constants.
Theorem 8 is proved via an inductive argument, given in Appendix C. Roughly, consider in Step 6 of Algorithm 2 setting $\tilde S := \bar S$ instead of $\tilde S := \bar S \cdot \hat S$. By Lemma 5 and the formula in Lemma 6, the leverage score approximations $\tilde l_i$ computed in Step 7 would be good approximations to the true leverage scores, and S would satisfy Theorem 8 by a standard matrix Bernstein bound (see Lemma 9).

However, if we set $\tilde S := \bar S$, it will have $n/2$ columns in expectation, and the computation in Step 7 will be expensive, requiring roughly $O(n^3)$ time. By recursively calling Algorithm 2 and applying Theorem 8 inductively, we obtain $\hat S$ satisfying with high probability:

$$\frac{1}{2}(B^T\bar S\bar S^TB + \lambda I) \preceq \big((B^T\bar S)\hat S\hat S^T(\bar S^TB) + \lambda I\big) \preceq \frac{3}{2}(B^T\bar S\bar S^TB + \lambda I).$$

This guarantee ensures that when we use $\tilde S = \bar S \cdot \hat S$ in place of $\bar S$ in Step 7, the leverage score estimates are changed only by a constant factor. Thus, sampling by these estimates still gives us the desired guarantee (6). Further, $\hat S$ and therefore $\tilde S$ has just $O(s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta))$ columns, so Step 7 can be performed very efficiently, within the stated runtime bounds.
With Theorem 8 we can easily prove our main result, Theorem 7.
Proof of Theorem 7. In our proof of Theorem 3 in Appendix A.1, we show that if

$$\frac{1}{2}(B^TB + \lambda I) \preceq (B^TSS^TB + \lambda I) \preceq \frac{3}{2}(B^TB + \lambda I)$$

for a weighted sampling matrix S, then even if we remove the weights from S so that it has all unit entries (they don't affect the Nyström approximation), $\tilde K = KS(S^TKS)^+S^TK$ satisfies:

$$\tilde K \preceq K \preceq \tilde K + \lambda I.$$

The runtime bounds also follow nearly directly from Theorem 8. In particular, we have established that $O\big(n\,s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta)\big)$ kernel evaluations and $O\big(n\,s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta)^2\big)$ additional runtime are required by RECURSIVE RLS-NYSTRÖM. We only needed the upper bound to prove Theorem 8, but along the way actually show that in a successful run of RECURSIVE RLS-NYSTRÖM, S has $\leq d_{\mathrm{eff}}^\lambda(K)\log(d_{\mathrm{eff}}^\lambda(K)/\delta)$ columns. Additionally, we may assume that $d_{\mathrm{eff}}^\lambda(K) \geq 1/2$. If it is not, then it's not hard to check (see proof of Lemma 20) that $\lambda$ must be $\geq \|K\|$. If this is the case, the guarantee of Theorem 7 is vacuous: any Nyström approximation $\tilde K$ satisfies $\tilde K \preceq K \preceq \tilde K + \lambda I$.

With $d_{\mathrm{eff}}^\lambda(K) \geq 1/2$, $d_{\mathrm{eff}}^\lambda(K)\log(d_{\mathrm{eff}}^\lambda(K)/\delta)$ and thus $s$ are $\Theta(s_{\max}(d_{\mathrm{eff}}^\lambda(K), \delta))$, so we conclude that Theorem 7 uses $O(ns)$ kernel evaluations and $O(ns^2)$ additional runtime.
5 Empirical evaluation
We conclude with an empirical evaluation of our recursive RLS-Nystr?m method. We use a variant
of Algorithm 2 where, instead of choosing a regularization parameter $\lambda$, the user sets a sample size $s$ and $\lambda$ is automatically determined such that $s = \Theta\big(d_{\mathrm{eff}}^\lambda \log(d_{\mathrm{eff}}^\lambda/\delta)\big)$. This variant is practically
Pseudocode and proofs of correctness are included in Appendix D.
5.1 Performance of Recursive RLS-Nyström for kernel approximation
We evaluate RLS-Nystr?m on the YearPredictionMSD, Covertype, Cod-RNA, and Adult datasets
downloaded from the UCI ML Repository [Lic13] and [UKM06]. These datasets contain 515345,
581012, 331152, and 48842 data points respectively. We compare against the classic Nystr?m method
with uniform sampling [WS01] and the random Fourier features method [RR07]. Due to the large
size of the datasets, prior leverage score based Nystr?m approaches [DMIMW12, GM13, AM15],
which require at least $\Omega(n^2)$ time, are infeasible, and thus not included in our tests.
We split categorical features into binary indicatory features and mean center and normalize features to
have variance 1. We use a Gaussian kernel for all tests, with the width parameter selected via cross
? 2 is used to measure approximation error.
validation on regression and classification tasks. kK Kk
Since this quantity is prohibitively expensive to compute directly (it requires building the full kernel
matrix K), the error is estimated using a random subset of 20,000 data points and repeated trials.
[Figure 2 (plots omitted): approximation error vs. number of samples on (a) Adult, (b) Covertype, (c) Cod-RNA, (d) YearPredictionMSD, comparing Recursive RLS-Nyström, Uniform Nyström, and Random Fourier Features.]

Figure 2: For a given number of samples, Recursive RLS-Nyström yields approximations with lower error, measured by $\|K - \tilde K\|_2$. Error is plotted on a logarithmic scale, averaged over 10 trials.
Figure 2 confirms that Recursive RLS-Nyström consistently obtains substantially better kernel approximation error than the other methods. As we can see in Figure 3, with the exception of YearPredictionMSD, the better quality of the landmarks obtained with Recursive RLS-Nyström also translates into runtime improvements. While the cost per sample is higher for our method at $O(nd + ns)$ time versus $O(nd + s^2)$ for uniform Nyström and $O(nd)$ for random Fourier features, since RLS-Nyström requires fewer samples it more quickly obtains $\tilde K$ with a given accuracy. $\tilde K$ will also have lower rank, which can accelerate processing in downstream applications.
[Figure 3 (plots omitted): approximation error vs. runtime (sec.) on (a) Adult, (b) Covertype, (c) Cod-RNA, (d) YearPredictionMSD, comparing Recursive RLS-Nyström and Uniform Nyström.]

Figure 3: Recursive RLS-Nyström obtains a fixed level of approximation faster than uniform sampling, only underperforming on YearPredictionMSD. Results for random Fourier features are not shown: while the method is faster, it never obtained high enough accuracy to be directly comparable.
In Appendix G, we show that the runtime of RLS-Nyström can be further accelerated, via a heuristic approach that under-samples landmarks at each level of recursion. This approach brings the per-sample cost down to approximately that of random Fourier features and uniform Nyström while nearly maintaining the same approximation quality. Results are shown in Figure 4.
For datasets such as Covertype, on which Recursive RLS-Nyström performs significantly better than uniform sampling, so does the accelerated method (see Figure 4b). However, the performance of the accelerated method does not degrade when leverage scores are relatively uniform: it still offers the best runtime-to-approximation-quality tradeoff (Figure 4c).
We note that further runtime optimizations may be possible. Subsequent work extends fast ridge leverage score methods to distributed and streaming environments [CLV17]. Empirical evaluation of these techniques could lead to even more scalable, high-accuracy Nyström methods.
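For intuition about what the recursion is approximating, ridge leverage scores can be computed exactly at O(n³) cost, which is precisely the expense the recursive estimator avoids. The sketch below is illustrative only (the regularization parameter lam and the oversampling factor are free choices, not the paper's settings): it computes the exact scores and performs the Bernoulli landmark sampling they induce.

```python
import numpy as np

def ridge_leverage_scores(K, lam):
    # i-th lam-ridge leverage score: l_i = (K (K + lam I)^{-1})_{ii}.
    # Their sum is the effective dimension d_eff = tr(K (K + lam I)^{-1}).
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))

def rls_sample(K, lam, oversample=4.0, seed=0):
    # Keep each point independently with probability min(1, oversample * l_i),
    # so high-leverage (hard to approximate) points are kept more often.
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, oversample * ridge_leverage_scores(K, lam))
    return np.flatnonzero(rng.random(K.shape[0]) < p)
```

The recursive scheme replaces the exact O(n³) score computation with estimates obtained from a kernel matrix restricted to a subsample, applied level by level.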
[Figure 4 plots omitted: (a) runtime vs. samples for Covertype, (b) error ‖K − K̃‖₂ vs. samples for Covertype, and (c) runtime/error tradeoff for YearPredictionMSD, comparing Recursive RLS-Nyström, uniform Nyström, random Fourier features, and accelerated Recursive RLS-Nyström.]

Figure 4: Our accelerated Recursive RLS-Nyström nearly matches the per-sample runtime of random Fourier features and uniform Nyström while still providing much better approximation.
5.2 Additional Empirical Results
In Appendix G we verify the usefulness of our kernel approximations in downstream learning tasks. While full kernel methods do not scale to our large datasets, Recursive RLS-Nyström does, since its runtime depends linearly on n. For example, on YearPredictionMSD the method requires 307 sec. (averaged over 5 trials) to build a 2,000-landmark Nyström approximation for 463,716 training points. Ridge regression using the approximate kernel then requires 208 sec., for a total of 515 sec. These runtimes are comparable to those of the very fast random Fourier features method, which underperforms RLS-Nyström in terms of regression and classification accuracy.
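As a sketch of that downstream pipeline (illustrative only; the shapes, kernel, and regularization value below are arbitrary assumptions, not the experiment's settings), Nyström landmarks yield explicit s-dimensional features Z with Z Zᵀ = C W⁺ Cᵀ, on which primal ridge regression costs O(ns²) rather than the O(n³) of the full kernel method:

```python
import numpy as np

def nystrom_features(C, W):
    # Features Z with Z Z^T = C W^+ C^T, via W^{-1/2} from an
    # eigendecomposition (C: n x s cross-kernel, W: s x s landmark kernel).
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10                       # drop numerically null directions
    return C @ (vecs[:, keep] / np.sqrt(vals[keep]))

def ridge_regression(Z, y, lam):
    # Primal ridge on the s-dimensional features: (Z^T Z + lam I) w = Z^T y.
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
```

Predictions for new points only require the cross-kernel between those points and the s landmarks, so both training and inference avoid ever materializing the full n x n kernel.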
Acknowledgements
We would like to thank Michael Mahoney for bringing the potential of ridge leverage scores to our
attention and suggesting their possible approximation via iterative sampling schemes. We would
also like to thank Michael Cohen for pointing out (and fixing) an error in our original manuscript
and generally for his close collaboration in our work on leverage score sampling algorithms. Finally,
thanks to Haim Avron for pointing out an error in our original analysis.
9
References
[AM15] Ahmed Alaoui and Michael W. Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems 28 (NIPS), pages 775–783, 2015.
[AMS01] Dimitris Achlioptas, Frank McSherry, and Bernhard Schölkopf. Sampling techniques for kernel methods. In Advances in Neural Information Processing Systems 14 (NIPS), 2001.
[ANW14] Haim Avron, Huy Nguyen, and David Woodruff. Subspace embeddings for the polynomial kernel. In Advances in Neural Information Processing Systems 27 (NIPS), pages 2258–2266, 2014.
[Bac13] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Proceedings of the 26th Annual Conference on Computational Learning Theory (COLT), 2013.
[BBV06] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. Kernels as features: On kernels, margins, and low-dimensional mappings. Machine Learning, 65(1):79–94, 2006.
[BJ02] Francis Bach and Michael I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3(Jul):1–48, 2002.
[BMD09] Christos Boutsidis, Michael W. Mahoney, and Petros Drineas. Unsupervised feature selection for the k-means clustering problem. In Advances in Neural Information Processing Systems 22 (NIPS), pages 153–161, 2009.
[BW09] Mohamed-Ali Belabbas and Patrick J. Wolfe. Spectral methods in machine learning: New strategies for very large datasets. Proceedings of the National Academy of Sciences of the USA, 106:369–374, 2009.
[BWZ16] Christos Boutsidis, David P. Woodruff, and Peilin Zhong. Optimal principal component analysis in distributed and streaming models. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), 2016.
[CEM+15] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), pages 163–172, 2015.
[CLL+15] Shouyuan Chen, Yang Liu, Michael Lyu, Irwin King, and Shengyu Zhang. Fast relative-error approximation algorithm for ridge regression. In Proceedings of the 31st Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 201–210, 2015.
[CLM+15] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform sampling for matrix approximation. In Proceedings of the 6th Conference on Innovations in Theoretical Computer Science (ITCS), pages 181–190, 2015.
[CLV16] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Analysis of Nyström method with sequential ridge leverage score sampling. In Proceedings of the 32nd Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 62–71, 2016.
[CLV17] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Distributed adaptive sampling for kernel matrix approximation. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
[CMM17] Michael B. Cohen, Cameron Musco, and Christopher Musco. Input sparsity time low-rank approximation via ridge leverage score sampling. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1758–1777, 2017.
[CMT10] Corinna Cortes, Mehryar Mohri, and Ameet Talwalkar. On the impact of kernel approximation on learning accuracy. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 113–120, 2010.
[CW17] Kenneth L. Clarkson and David P. Woodruff. Low-rank PSD approximation in input-sparsity time. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2061–2072, 2017.
[DM05] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[DMIMW12] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13:3475–3506, 2012.
[DMM08] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008.
[DST03] Vin De Silva and Joshua B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In Advances in Neural Information Processing Systems 16 (NIPS), pages 721–728, 2003.
[FS02] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, 2002.
[FSS13] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA, and projective clustering. In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1434–1453, 2013.
[Git11] Alex Gittens. The spectral norm error of the naive Nyström extension. arXiv:1110.5305, 2011.
[GM13] Alex Gittens and Michael Mahoney. Revisiting the Nyström method for improved large-scale machine learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 567–575, 2013. Full version at arXiv:1303.1849.
[HFH+09] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10–18, 2009.
[HKZ14] Daniel Hsu, Sham M. Kakade, and Tong Zhang. Random design analysis of ridge regression. Foundations of Computational Mathematics, 14(3):569–600, 2014.
[HTF02] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2nd edition, 2002.
[IBM14] IBM Research Division, Skylark Team. Libskylark: Sketching-based Distributed Matrix Computations for Machine Learning. IBM Corporation, Armonk, NY, 2014.
[KMT12] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling methods for the Nyström method. Journal of Machine Learning Research, 13:981–1006, 2012.
[LBKL15] Mu Li, Wei Bi, James T. Kwok, and Bao-Liang Lu. Large-scale Nyström kernel matrix approximation using randomized SVD. IEEE Transactions on Neural Networks and Learning Systems, 26(1):152–164, 2015.
[Lic13] M. Lichman. UCI Machine Learning Repository, 2013.
[LJS16] Chengtao Li, Stefanie Jegelka, and Suvrit Sra. Fast DPP sampling for Nyström with application to kernel methods. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
[LSS13] Quoc Le, Tamás Sarlós, and Alexander Smola. Fastfood - Computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 244–252, 2013.
[MU17] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis. Cambridge University Press, 2017.
[PD16] Saurabh Paul and Petros Drineas. Feature selection for ridge regression with provable guarantees. Neural Computation, 28(4):716–742, 2016.
[Pla05] John Platt. FastMap, MetricMap, and Landmark MDS are all Nyström algorithms. In Proceedings of the 8th International Conference on Artificial Intelligence and Statistics (AISTATS), 2005.
[PVG+11] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[RCR15] Alessandro Rudi, Raffaello Camoriano, and Lorenzo Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems 28 (NIPS), pages 1648–1656, 2015.
[RR07] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20 (NIPS), pages 1177–1184, 2007.
[RR09] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems 22 (NIPS), pages 1313–1320, 2009.
[SS00] Alex J. Smola and Bernhard Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning (ICML), pages 911–918, 2000.
[SS02] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[SSM99] Bernhard Schölkopf, Alexander J. Smola, and Klaus-Robert Müller. Kernel principal component analysis. In Advances in Kernel Methods, pages 327–352. MIT Press, 1999.
[Tro15] Joel A. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1–230, 2015.
[TRVR16] Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, and Benjamin Recht. Large scale kernel learning using block coordinate descent. arXiv:1602.05310, 2016.
[UKM06] Andrew V. Uzilov, Joshua M. Keegan, and David H. Mathews. Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change. BMC Bioinformatics, 7(1):173, 2006.
[Wan16] Weiran Wang. On column selection in approximate kernel canonical correlation analysis. arXiv:1602.02172, 2016.
[Woo14] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
[WS01] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 14 (NIPS), pages 682–688, 2001.
[WZ13] Shusen Wang and Zhihua Zhang. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. Journal of Machine Learning Research, 14:2729–2769, 2013.
[YLM+12] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Advances in Neural Information Processing Systems 25 (NIPS), pages 476–484, 2012.
[YPW15] Yun Yang, Mert Pilanci, and Martin J. Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. Annals of Statistics, 2015.
[YZ13] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression. In Proceedings of the 26th Annual Conference on Computational Learning Theory (COLT), 2013.
[Zha06] Tong Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17(9):2077–2098, 2005.
[ZTK08] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Improved Nyström low-rank approximation and error analysis. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 1232–1239, 2008.
Interpolated Policy Gradient: Merging On-Policy and
Off-Policy Gradient Estimation for Deep
Reinforcement Learning
Shixiang Gu
University of Cambridge
Max Planck Institute
[email protected]
Richard E. Turner
University of Cambridge
[email protected]
Timothy Lillicrap
DeepMind
[email protected]
Bernhard Schölkopf
Max Planck Institute
[email protected]
Zoubin Ghahramani
University of Cambridge
Uber AI Labs
[email protected]
Sergey Levine
UC Berkeley
[email protected]
Abstract
Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques.
On the other hand, on-policy algorithms are often more stable and easier to use.
This paper examines, both theoretically and empirically, approaches to merging
on- and off-policy updates for deep reinforcement learning. Theoretical results
show that off-policy updates with a value function estimator can be interpolated
with on-policy policy gradient updates whilst still satisfying performance bounds.
Our analysis uses control variate methods to produce a family of policy gradient
algorithms, with several recently proposed algorithms being special cases of this
family. We then provide an empirical comparison of these techniques with the
remaining algorithmic details fixed, and show how different mixing of off-policy
gradient estimates with on-policy samples contribute to improvements in empirical
performance. The final algorithm provides a generalization and unification of
existing deep policy gradient techniques, has theoretical guarantees on the bias
introduced by off-policy updates, and improves on the state-of-the-art model-free
deep RL methods on a number of OpenAI Gym continuous control benchmarks.
1 Introduction
Reinforcement learning (RL) studies how an agent that interacts sequentially with an environment
can learn from rewards to improve its behavior and optimize long-term returns. Recent research has
demonstrated that deep networks can be successfully combined with RL techniques to solve difficult
control problems. Some of these include robotic control (Schulman et al., 2016; Lillicrap et al., 2016;
Levine et al., 2016), computer games (Mnih et al., 2015), and board games (Silver et al., 2016).
One of the simplest ways to learn a neural network policy is to collect a batch of behavior wherein
the policy is used to act in the world, and then compute and apply a policy gradient update from
this data. This is referred to as on-policy learning because all of the updates are made using data
that was collected from the trajectory distribution induced by the current policy of the agent. It is
straightforward to compute unbiased on-policy gradients, and practical on-policy gradient algorithms
tend to be stable and relatively easy to use. A major drawback of such methods is that they tend to
be data inefficient, because they only look at each data point once. Off-policy algorithms based on
Q-learning and actor-critic learning (Sutton et al., 1999) have also proven to be an effective approach
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
to deep RL such as in (Mnih et al., 2015) and (Lillicrap et al., 2016). Such methods reuse samples
by storing them in a memory replay buffer and train a value function or Q-function with off-policy
updates. This improves data efficiency, but often at a cost in stability and ease of use.
Both on- and off-policy learning techniques have their own advantages. Most recent research has
worked with on-policy algorithms or off-policy algorithms, and a few recent methods have sought to
make use of both on- and off-policy data for learning (Gu et al., 2017; Wang et al., 2017; O'Donoghue
et al., 2017). Such algorithms hope to gain advantages from both modes of learning, whilst avoiding
their limitations. Broadly speaking, there have been two basic approaches in recently proposed
algorithms that make use of both on- and off-policy data and updates. The first approach is to mix
some ratio of on- and off-policy gradients or update steps in order to update a policy, as in the
ACER and PGQ algorithms (Wang et al., 2017; O'Donoghue et al., 2017). In this case, there are no
theoretical bounds on the error induced by incorporating off-policy updates. In the second approach,
an off-policy Q critic is trained but is used as a control variate to reduce on-policy gradient variance,
as in the Q-prop algorithm (Gu et al., 2017). This case does not introduce additional bias to the
gradient estimator, but the policy updates do not use off-policy data.
We seek to unify these two approaches using the method of control variates. We introduce a
parameterized family of policy gradient methods that interpolate between on-policy and off-policy
learning. Such methods are in general biased, but we show that the bias can be bounded. We show
that a number of recent methods (Gu et al., 2017; Wang et al., 2017; O'Donoghue et al., 2017) can be
viewed as special cases of this more general family. Furthermore, our empirical results show that in
most cases, a mix of policy gradient and actor-critic updates achieves the best results, demonstrating
the value of considering interpolated policy gradients.
2 Preliminaries
A key component of our interpolated policy gradient method is the use of control variates to mix
likelihood ratio gradients with deterministic gradient estimates obtained explicitly from a state-action
critic. In this section, we summarize both likelihood ratio and deterministic gradient methods, as well
as how control variates can be used to combine these two approaches.
2.1
On-Policy Likelihood Ratio Policy Gradient
At time t, the RL agent in state st takes action at according to its policy ?(at |st ), the state transitions
to st+1 , and the agent receives a reward r(st , at ). For a parametrized policy ?? , P
the objective is to
?
maximize the ?-discounted cumulative future return J(?) = J(?) = Es0 ,a0 ,????? [ t=0 ? t r(st , at )].
Monte Carlo policy gradient methods, such as REINFORCE (Williams, 1992) and TRPO (Schulman
et al., 2015), use the likelihood ratio policy gradient of the RL objective,
? t , at ) ? b(st ))] = E?? ,? [?? log ?? (at |st )A(s
? t , at )], (1)
?? J(?) = E?? ,? [?? log ?? (at |st )(Q(s
? t , at ) = P?0 ? t0 ?t r(st0 , at0 ) is the Monte Carlo estimate of the ?critic? Q? (st , at ) =
where Q(s
t =t
? t , at )], and ?? = P? ? t p(st = s) are the unnormalized state visitation
Est+1 ,at+1 ,?????|st ,at [Q(s
t=0
frequencies, while b(st ) is known as the baseline, and serves to reduce the variance of the gradient estimate (Williams, 1992). If the baseline estimates the value function, V ? (st ) = Eat ??(?|st ) [Q? (st , at )],
? t ) is an estimate of the advantage function A? (st , at ) = Q? (st , at ) ? V ? (st ). Likelihood
then A(s
ratio policy gradient methods use unbiased gradient estimates (except for the technicality detailed
by Thomas (2014)), but they often suffer from high variance and are sample-intensive.
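The estimator in Eq. 1 can be checked numerically on a toy one-step softmax policy (an illustration only, not the paper's setup): the sketch below computes the Monte Carlo score-function gradient and shows that subtracting a baseline leaves the expectation unchanged.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, rewards, n_samples, baseline=0.0, seed=0):
    # Monte Carlo score-function estimator for a one-step softmax policy:
    # grad J(theta) ~= mean over samples of grad log pi(a) * (r(a) - baseline).
    rng = np.random.default_rng(seed)
    pi = softmax(theta)
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        a = rng.choice(len(pi), p=pi)
        grad_log_pi = -pi.copy()
        grad_log_pi[a] += 1.0          # d log pi(a) / d theta for a softmax
        g += grad_log_pi * (rewards[a] - baseline)
    return g / n_samples
```

Because $\mathbb{E}_\pi[\nabla_\theta \log \pi_\theta(a)] = 0$, any action-independent baseline $b$ changes the variance of this estimator but not its mean, which is exactly the role of $b(s_t)$ in Eq. 1.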
2.2
Off-Policy Deterministic Policy Gradient
Policy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods,
are a family of policy gradient methods which first estimate the critic, or the value, of the policy by
Qw ? Q? , and then greedily optimize the policy ?? with respect to Qw . While it is not necessary for
such algorithms to be off-policy, we primarily analyze the off-policy variants, such as (Riedmiller,
2005; Degris et al., 2012; Heess et al., 2015; Lillicrap et al., 2016). For example, DDPG Lillicrap
et al. (2016), which optimizes a continuous deterministic policy ?? (at |st ) = ?(at = ?? (st )), can be
summarized by the following update equations, where Q0 denotes the target Q network and ? denotes
2
?
?
6= ?
?
0
0
1
-
CV
No
Yes
No
Examples
REINFORCE (Williams, 1992),TRPO (Schulman et al., 2015)
Q-Prop (Gu et al., 2017)
DDPG (Silver et al., 2014; Lillicrap et al., 2016),SVG(0) (Heess et al., 2015)
?PGQ (O?Donoghue et al., 2017), ?ACER (Wang et al., 2017)
Table 1: Prior policy gradient method objectives as special cases of IPG.
some off-policy distribution, e.g. from experience replay (Lillicrap et al., 2016):
yt = r(st , at ) + ?Q0 (st+1 , ?? (st+1 ))
w ? arg min E? [(Qw (st , at ) ? yt )2 ],
? ? arg max E? [Qw (st , ?? (st ))].
(2)
This provides the following deterministic policy gradient through the critic:
?? J(?) ? E?? [?? Qw (st , ?? (st ))].
(3)
This policy gradient is generally biased due to the imperfect estimator Qw and off-policy state
sampling from ?. Off-policy actor-critic algorithms therefore allow training the policy on off-policy
samples, at the cost of introducing potentially unbounded bias into the gradient estimate. This usually
makes off-policy algorithms less stable during learning, compared to on-policy algorithms using a
large batch size for each update (Duan et al., 2016; Gu et al., 2017).
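As a concrete toy illustration of the updates in Eq. 2 and the gradient in Eq. 3, the following is a minimal sketch with linear function approximators and a single transition; it is not the DDPG implementation of Lillicrap et al. (2016), and all constants are assumed for illustration only.

```python
import numpy as np

gamma = 0.99

# Toy linear parameterizations (illustrative stand-ins, not the paper's networks):
#   critic  Q_w(s, a) = w[0]*s + w[1]*a,   policy  mu_theta(s) = theta*s
w = np.array([0.5, 0.2])        # critic weights
w_target = w.copy()             # target critic Q'
theta = 0.3                     # deterministic policy parameter

def Q(w, s, a):
    return w[0] * s + w[1] * a

def mu(theta, s):
    return theta * s

# One transition (s, a, r, s') drawn from some off-policy distribution beta.
s, a, r, s_next = 1.0, -0.4, 0.7, 0.8

# Critic regression target: y_t = r + gamma * Q'(s', mu_theta(s'))
y = r + gamma * Q(w_target, s_next, mu(theta, s_next))

# One SGD step on the Bellman error (Q_w(s, a) - y)^2
alpha = 0.1
w = w - alpha * 2 * (Q(w, s, a) - y) * np.array([s, a])

# Deterministic policy gradient through the critic (chain rule):
# d/dtheta Q_w(s, mu_theta(s)) = (dQ/da) * (dmu/dtheta) = w[1] * s
theta = theta + alpha * w[1] * s
```

In practice Q_w and μ_θ are deep networks and the expectations are taken over replay-buffer minibatches, but the chain-rule structure of the last step is unchanged.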
2.3
Off-Policy Control Variate Fitting
The control variates method (Ross, 2006) is a general technique for variance reduction of a Monte
Carlo estimator by exploiting a correlated variable for which we know more information such as
analytical expectation. General control variates for RL include state-action baselines, and an example
can be an off-policy fitted critic Q_w. Q-Prop (Gu et al., 2017), for example, used Q̄_w, the first-order Taylor expansion of Q_w, as the control variate, and showed improvement in stability and sample efficiency of policy gradient methods. μ_θ here corresponds to the mean of the stochastic policy π_θ.

∇_θ J(θ) = E_{ρ^π,π}[∇_θ log π_θ(a_t|s_t)(Q̂(s_t, a_t) − Q̄_w(s_t, a_t))] + E_{ρ^π}[∇_θ Q_w(s_t, μ_θ(s_t))].   (4)
The gradient estimator combines both likelihood ratio and deterministic policy gradients in Eq. 1
and 3. It has lower variance and stable gradient estimates and enables more sample-efficient learning.
However, one limitation of Q-Prop is that it uses only on-policy samples for estimating the policy
gradient. This ensures that the Q-Prop estimator remains unbiased, but limits the use of off-policy
samples for further variance reduction.
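The first-order Taylor control variate can be checked on a toy critic. The functions below are hypothetical stand-ins for illustration (Q-Prop's actual critic is a learned network); the point is only the structure of the expansion around the policy mean.

```python
# Hypothetical critic that happens to be linear in the action, so the
# first-order expansion in a is exact (toy stand-in, not a learned critic):
def Q(s, a):
    return s**2 + 3.0 * s * a

def grad_a_Q(s, a):            # analytic dQ/da
    return 3.0 * s

def mu(s):                     # mean of the stochastic policy
    return 0.5 * s

def Q_bar(s, a):
    """First-order Taylor expansion of Q around a = mu(s)."""
    a0 = mu(s)
    return Q(s, a0) + grad_a_Q(s, a0) * (a - a0)

# Exact for this Q; for a general critic it is only a local approximation
# around the policy mean.
assert Q_bar(2.0, -1.0) == Q(2.0, -1.0)
```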
3
Interpolated Policy Gradient
Our proposed approach, interpolated policy gradient (IPG), mixes the likelihood ratio gradient with Q̂, which provides unbiased but high-variance gradient estimation, and the deterministic gradient through an off-policy fitted critic Q_w, which provides low-variance but biased gradients. IPG directly interpolates the two terms from Eq. 1 and 3:

∇_θ J(θ) ≈ (1 − ν) E_{ρ^π,π}[∇_θ log π_θ(a_t|s_t) Â(s_t, a_t)] + ν E_{ρ^β}[∇_θ Q̄^π_w(s_t)],   (5)
where we generalized the deterministic policy gradient through the critic as ∇_θ Q̄^π_w(s_t) = ∇_θ E_π[Q_w(s_t, ·)]. This generalization is to make our analysis applicable with more general forms of the critic-based control variates, as discussed in the Appendix. This gradient estimator is biased from two sources: off-policy state sampling ρ^β, and inaccuracies in the critic Q_w. However, as we show in Section 4, we can bound the biases for all the cases, and in some cases, the algorithm still guarantees monotonic convergence as in Kakade & Langford (2002); Schulman et al. (2015).
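The effect of the interpolation in Eq. 5 on estimator variance can be sketched numerically. The Gaussian "gradient samples" below are assumed stand-ins, not actual policy gradients; they only mimic an unbiased high-variance term and a biased low-variance term.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = 0.2
true_grad = 1.0

# Stand-ins for the two estimates mixed in Eq. 5 (assumed numbers):
lr = true_grad + rng.normal(0.0, 2.0, size=10_000)              # unbiased, noisy
critic = (true_grad + 0.3) + rng.normal(0.0, 0.1, size=10_000)  # biased, stable

ipg = (1 - nu) * lr + nu * critic

# Mixing shrinks the dominant likelihood-ratio variance by (1 - nu)^2
# at the price of a bias proportional to nu.
assert ipg.var() < lr.var()
```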
3.1
Control Variates for Interpolated Policy Gradient
While IPG includes ν to trade off bias and variance directly, it contains a likelihood ratio gradient term, for which we can introduce a control variate (CV) (Ross, 2006) to further reduce the estimator variance.
The expression for the IPG with control variates is below, where A^π_w(s_t, a_t) = Q_w(s_t, a_t) − Q̄^π_w(s_t):

∇_θ J(θ) ≈ (1 − ν) E_{ρ^π,π}[∇_θ log π_θ(a_t|s_t) Â(s_t, a_t)] + ν E_{ρ^β}[∇_θ Q̄^π_w(s_t)]
= (1 − ν) E_{ρ^π,π}[∇_θ log π_θ(a_t|s_t)(Â(s_t, a_t) − A^π_w(s_t, a_t))] + (1 − ν) E_{ρ^π}[∇_θ Q̄^π_w(s_t)] + ν E_{ρ^β}[∇_θ Q̄^π_w(s_t)]
≈ (1 − ν) E_{ρ^π,π}[∇_θ log π_θ(a_t|s_t)(Â(s_t, a_t) − A^π_w(s_t, a_t))] + E_{ρ^β}[∇_θ Q̄^π_w(s_t)].   (6)
The first approximation indicates the biased approximation from IPG, while the second approximation
indicates replacing the ρ^π in the control variate correction term with ρ^β and merging with the last term. The second approximation is a design decision and introduces additional bias when β ≠ π, but it
helps simplify the expression to be analyzed more easily, and the additional benefit from the variance
reduction from the control variate could still outweigh this extra bias. The biases are analyzed in
Section 4. The likelihood ratio gradient term is now proportional to the residual in on- and off-policy
advantage estimates, Â(s_t, a_t) − A^π_w(s_t, a_t), and therefore, we call this term the residual likelihood ratio
gradient. Intuitively, if the off-policy critic estimate is accurate, this term has a low magnitude and
the overall variance of the estimator is reduced.
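The variance-reduction mechanism behind the residual term can be illustrated with a generic control variate on a toy Monte Carlo problem. This is an assumed scalar example, not the policy gradient itself: the correlated variable g has a known analytic mean, so subtracting it and adding the mean back keeps the estimate unbiased while lowering variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=100_000)

f = x**2            # noisy quantity whose mean we want (E[x^2] = 2 here)
g = 2.0 * x         # control variate with known mean E[g] = 2.0

plain = f                 # vanilla Monte Carlo estimator
residual = f - g + 2.0    # subtract the correlate, add back its analytic mean

# Same expectation, smaller variance:
assert abs(plain.mean() - residual.mean()) < 0.05
assert residual.var() < plain.var()
```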
3.2
Relationship to Prior Policy Gradient and Actor-Critic Methods
Crucially, IPG allows interpolating a rich list of prior deep policy gradient methods using only three parameters: β, ν, and the use of the control variate (CV). The connection is summarized in Table 1
and the algorithm is presented in Algorithm 1. Importantly, a wide range of prior work has only
explored limiting cases of the spectrum, e.g. ν = 0, 1, with or without the control variate. Our work provides a thorough theoretical analysis of the biases, and in some cases performance guarantees, for each of the methods in this spectrum, and empirically demonstrates that the best performing algorithms are often in the midst of the spectrum.
Algorithm 1 Interpolated Policy Gradient
input β, ν, useCV
1:  Initialize w for critic Q_w, θ for stochastic policy π_θ, and replay buffer R ← ∅.
2:  repeat
3:    Roll-out π_θ for E episodes, T time steps each, to collect a batch of data B = {s, a, r}_{1:T,1:E} to R
4:    Fit Q_w using R and π_θ, and fit baseline V_φ(s_t) using B
5:    Compute Monte Carlo advantage estimate Â_{t,e} using B and V_φ
6:    if useCV then
7:      Compute critic-based advantage estimate Ā_{t,e} using B, Q_w and π_θ
8:      Compute and center the learning signals l_{t,e} = Â_{t,e} − Ā_{t,e} and set b = 1
9:    else
10:     Center the learning signals l_{t,e} = Â_{t,e} and set b = ν
11:   end if
12:   Multiply l_{t,e} by (1 − ν)
13:   Sample D = s_{1:M} from R and/or B based on β
14:   Compute ∇_θ J(θ) ≈ (1/(ET)) Σ_e Σ_t ∇_θ log π_θ(a_{t,e}|s_{t,e}) l_{t,e} + (b/M) Σ_m ∇_θ Q̄_w(s_m)
15:   Update policy π_θ using ∇_θ J(θ)
16: until π_θ converges.
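Steps 13 and 14 of Algorithm 1 reduce to simple array operations. The sketch below uses random placeholder arrays as stand-ins for the actual rollout quantities, only to show the shape bookkeeping of the interpolated gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
E, T, M = 5, 20, 100       # episodes, steps per episode, critic-batch size
nu, b = 0.2, 1.0           # interpolation weight; b = 1 when the CV is used

# Placeholder rollout quantities (random stand-ins for illustration):
grad_log_pi = rng.normal(size=(E, T, 3))   # grad_theta log pi(a|s), 3 policy params
l = (1 - nu) * rng.normal(size=(E, T))     # centered, scaled learning signals
grad_Q_bar = rng.normal(size=(M, 3))       # grad_theta Qbar_w(s_m) on sampled states

# Step 14 of Algorithm 1: interpolated gradient estimate
g = (grad_log_pi * l[..., None]).sum(axis=(0, 1)) / (E * T) \
    + b * grad_Q_bar.mean(axis=0)
assert g.shape == (3,)
```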
3.3
ν = 1: Actor-Critic Methods
Before presenting our theoretical analysis, an important special case to discuss is ν = 1, which corresponds to a deterministic actor-critic method. Several advantages of this special case include
that the policy can be deterministic and the learning can be done completely off-policy, as it does not have to estimate the on-policy Monte Carlo critic Q̂. Prior work such as DDPG (Lillicrap et al., 2016) and related Q-learning methods have proposed aggressive off-policy exploration strategies to exploit
these properties of the algorithm. In this work, we compare alternatives such as using on-policy
exploration and stochastic policy with classical DDPG algorithm designs, and show that in some
domains the off-policy exploration can significantly deteriorate the performance. Theoretically, we
confirm this empirical observation by showing that the bias from off-policy sampling in β increases
monotonically with the total variation or KL divergence between ? and ?. Both the empirical and
theoretical results indicate that well-designed actor-critic methods with an on-policy exploration strategy could be a more reliable alternative than those with heuristic off-policy exploration.
4
Theoretical Analysis
In this section, we present a theoretical analysis of the bias in the interpolated policy gradient. This is
crucial, since understanding the biases of the methods can improve our intuition about their performance
and make it easier to design new algorithms in the future. Because IPG includes many prior methods
as special cases, our analysis also applies to those methods and other intermediate cases. We first
analyze a special case and derive results for general IPG. All proofs are in the Appendix.
4.1
β ≠ π, ν = 0: Policy Gradient with Control Variate and Off-Policy Sampling
This section provides an analysis of the special case of IPG with β ≠ π, ν = 0, and the control variate. Plugging in to Eq. 6, we get an expression similar to Q-Prop in Eq. 4,
∇_θ J(θ) ≈ E_{ρ^β,π}[∇_θ log π_θ(a_t|s_t)(Â(s_t, a_t) − A^π_w(s_t, a_t))] + E_{ρ^β}[∇_θ Q̄^π_w(s_t)],   (7)
except that it also supports utilizing off-policy data for updating the policy. To analyze the bias for
this gradient expression, we first introduce J̃(θ, θ̃), a local approximation to J(θ), which has been
used in prior theoretical work (Kakade & Langford, 2002; Schulman et al., 2015). The derivation and
the bias from this approximation are discussed in the proof for Theorem 1 in the Appendix.
J(θ) = J(θ̃) + E_{ρ^π,π}[A^{π̃}(s_t, a_t)] ≈ J(θ̃) + E_{ρ^{π̃},π}[A^{π̃}(s_t, a_t)] = J̃(θ, θ̃).   (8)
Note that J(θ) = J̃(θ, θ̃ = θ) and ∇_θ J(θ) = ∇_θ J̃(θ, θ̃ = θ). In practice, θ̃ corresponds to the policy π_k at iteration k and θ corresponds to the next policy π_{k+1} after the parameter update. Thus, this approximation is often sufficiently good. Next, we write the approximate objective for Eq. 7,
J̃^{β,ν=0,CV}(θ, θ̃) ≜ J(θ̃) + E_{ρ^{β̃},π}[A^{π̃}(s_t, a_t) − A^{π̃}_w(s_t, a_t)] + E_{ρ^β}[A^{π̃,π}_w(s_t)] ≈ J̃(θ, θ̃),
A^{π̃,π}_w(s_t) = E_π[A^{π̃}_w(s_t, ·)] = E_π[Q_w(s_t, ·)] − E_{π̃}[Q_w(s_t, ·)].   (9)
Note that J̃^{β,ν=0}(θ, θ̃ = θ) = J̃(θ, θ̃ = θ) = J(θ), and ∇_θ J̃^{β,ν=0}(θ, θ̃ = θ) equals Eq. 7. We can bound the absolute error between J̃^{β,ν=0,CV}(θ, θ̃) and J(θ) by the following theorem, where D_KL^max(π_i, π_j) = max_s D_KL(π_i(·|s), π_j(·|s)) is the maximum KL divergence between π_i and π_j.
Theorem 1. If ε = max_s |A^{π̃,π}_w(s)| and δ = max_s |A^{π̃,π}(s)|, then

|J(θ) − J̃^{β,ν=0,CV}(θ, θ̃)| ≤ (2ε / (1 − γ)²) √(½ D_KL^max(β̃, π)) + δ √(½ D_KL^max(π, π̃)).
Theorem 1 contains two terms: the second term confirms that J̃^{β,ν=0,CV} is a local approximation around θ̃ and deviates from J(θ) as θ̃ deviates, and the first term bounds the bias from off-policy sampling using the KL divergence between the policies β̃ and π. This means that the algorithm fits well with
policy gradient methods which constrain the KL divergence per policy update, such as covariant
policy gradient (Bagnell & Schneider, 2003), natural policy gradient (Kakade & Langford, 2002),
REPS (Peters et al., 2010), and trust-region policy optimization (TRPO) (Schulman et al., 2015).
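For discrete action spaces, the quantity D_KL^max used in the theorem can be computed directly. A minimal sketch (toy tabular policies, assumed for illustration):

```python
import numpy as np

def max_kl(pi_i, pi_j):
    """D_KL^max(pi_i, pi_j): max over states of KL(pi_i(.|s) || pi_j(.|s)).

    pi_i, pi_j: arrays of shape (n_states, n_actions) with rows summing to 1.
    """
    per_state = (pi_i * np.log(pi_i / pi_j)).sum(axis=1)
    return per_state.max()

pi_a = np.array([[0.5, 0.5], [0.9, 0.1]])
pi_b = np.array([[0.5, 0.5], [0.5, 0.5]])
assert max_kl(pi_a, pi_a) == 0.0
assert max_kl(pi_a, pi_b) > 0.0
```

For continuous Gaussian policies, as used in the experiments, the per-state KL has a closed form in the means and covariances, and trust-region methods constrain its average or maximum per update.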
4.1.1
Monotonic Policy Improvement Guarantee
Some forms of on-policy policy gradient methods have theoretical guarantees on monotonic convergence (Kakade & Langford, 2002; Schulman et al., 2015). Such guarantees often correspond to
stable empirical performance on challenging problems, even when some of the constraints are relaxed
in practice (Schulman et al., 2015; Duan et al., 2016; Gu et al., 2017). We can show that a variant of
IPG allows off-policy sampling while still guaranteeing monotonic convergence. The algorithm and
the proof are provided in the Appendix. This algorithm is usually impractical to implement; however, IPG with trust-region updates when β ≠ π, ν = 0, CV = true approximates this monotonic algorithm, similar to how TRPO is an approximation to the theoretically monotonic algorithm proposed by Schulman et al. (2015).
4.2
General Bounds on the Interpolated Policy Gradient
We can establish bias bounds for the general IPG algorithm, with and without the control variate, using Theorem 2. The additional term that contributes to the bias in the general case is ζ, which represents the error between the advantage estimated by the off-policy critic and the true A^π values.
Theorem 2. If ζ = max_{s,a} |A^π_w(s, a) − A^π(s, a)|, ε = max_s |A^{π̃,π}_w(s)|, δ = max_s |A^{π̃,π}(s)|, and

J̃^{β,ν}(θ, θ̃) ≜ J(θ̃) + (1 − ν) E_{ρ^{β̃},π}[A^{π̃}] + ν E_{ρ^β}[A^{π̃,π}_w]
J̃^{β,ν,CV}(θ, θ̃) ≜ J(θ̃) + (1 − ν) E_{ρ^{β̃},π}[A^{π̃} − A^{π̃}_w] + E_{ρ^β}[A^{π̃,π}_w]

then,

|J(θ) − J̃^{β,ν}(θ, θ̃)| ≤ νζ / (1 − γ) + (2ε / (1 − γ)²) √(½ D_KL^max(β̃, π)) + δ √(½ D_KL^max(π, π̃)),
|J(θ) − J̃^{β,ν,CV}(θ, θ̃)| ≤ νζ / (1 − γ) + (2ε / (1 − γ)²) √(½ D_KL^max(β̃, π)) + δ √(½ D_KL^max(π, π̃)).
This bound shows that the bias from directly mixing the deterministic policy gradient through β comes from two terms: how well the critic Q_w approximates Q^π, and how close the off-policy sampling policy is to the actor policy. We also show that the bias introduced is proportional to ν, while the variance of the high-variance likelihood ratio gradient term is proportional to (1 − ν)², so ν allows directly trading off bias and variance. Theorem 2 fully bounds the bias in the full spectrum of IPG methods; this enables us to analyze how biases arise and interact, and helps us design better algorithms.
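The bias/variance tradeoff in ν described here can be made concrete with toy numbers, treating the two gradient estimates as independent and minimizing the resulting mean squared error. The constants below are illustrative assumptions, not measured values.

```python
import numpy as np

# Illustrative constants (assumed): a noisy-but-unbiased likelihood-ratio
# estimate and a stable-but-biased critic estimate.
var_lr, var_c, bias_c = 4.0, 0.1, 0.5

def mse(nu):
    # For independent estimates, MSE = (bias of mixture)^2 + variance of mixture.
    return (nu * bias_c) ** 2 + (1 - nu) ** 2 * var_lr + nu ** 2 * var_c

nus = np.linspace(0.0, 1.0, 101)
best = nus[np.argmin([mse(n) for n in nus])]

# The minimum-MSE estimator lies strictly between the two limiting cases.
assert mse(best) < mse(0.0) and mse(best) < mse(1.0)
assert 0.0 < best < 1.0
```

This mirrors the empirical finding below that intermediate values of ν often perform best.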
5
Related Work
An overarching aim of this paper is to help unify on-policy and off-policy policy gradient algorithms into a single conceptual framework. Our analysis examines how Q-Prop (Gu et al., 2017),
PGQ (O'Donoghue et al., 2017), and ACER (Wang et al., 2017), which are all recent works that
combine on-policy with off-policy learning, are connected to each other (see Table 1). IPG with
0 < ν < 1 and without the control variate relates closely to PGQ and ACER, but differs in the details.
PGQ mixes in the Q-learning Bellman error objective, and ACER mixes parameter update steps
rather than directly mixing gradients. And both PGQ and ACER come with numerous additional
design details that make fair comparisons with methods like TRPO and Q-Prop difficult. We instead
focus on the three minimal variables of IPG and explore their settings in relation to the closely related
TRPO and Q-Prop methods, in order to theoretically and empirically understand in which situations
we might expect gains from mixing on- and off-policy gradients.
Aside from these more recent works, the use of off-policy samples with policy gradients has been
a popular direction of research (Peshkin & Shelton, 2002; Jie & Abbeel, 2010; Degris et al., 2012;
Levine & Koltun, 2013). Most of these methods rely on variants of importance sampling (IS) to
correct for bias. The use of importance sampling ensures unbiased estimates, but at the cost of
considerable variance, as quantified by the ESS measure used by Jie & Abbeel (2010). Ignoring
importance weights produces bias but, as shown in our analysis, this bias can be bounded. Therefore,
our IPG estimators have higher bias as the sampling distribution deviates from the policy, while
IS methods have higher variance. Among these importance sampling methods, Levine & Koltun
(2013) evaluates on tasks that are the most similar to our paper, but the focus is on using importance
sampling to include demonstrations, rather than to speed up learning from scratch.
Lastly, there are many methods that combine on- and off-policy data for policy evaluation (Precup,
2000; Mahmood et al., 2014; Munos et al., 2016), mostly through variants of importance sampling.
Combining our methods with more sophisticated policy evaluation methods will likely lead to further
improvements, as done in (Degris et al., 2012). A more detailed analysis of the effect of importance
sampling on bias and variance is left to future work, where some of the relevant work includes Precup
(2000); Jie & Abbeel (2010); Mahmood et al. (2014); Jiang & Li (2016); Thomas & Brunskill (2016).
6
Experiments
In this section, we empirically show that the three parameters of IPG can interpolate different
behaviors and often achieve superior performance versus prior methods that are limiting cases of this approach.

(a) IPG with ν = 0 and the control variate. (b) IPG with ν = 1.
Figure 1: (a) IPG-ν=0 vs Q-Prop on HalfCheetah-v1, with batch size 5000. IPG-β-rand30000, which uses 30000 random samples from the replay as samples from β, outperforms Q-Prop in terms of learning speed. (b) IPG-ν=1 vs other algorithms on Ant-v1. In this domain, on-policy IPG-ν=1 with on-policy exploration significantly outperforms DDPG and IPG-ν=1-OU, which use a heuristic OU (Ornstein–Uhlenbeck) process noise exploration strategy, and marginally outperforms Q-Prop.

Crucially, all methods share the same algorithmic structure as Algorithm 1, and we hold
the rest of the experimental details fixed. All experiments were performed on MuJoCo domains in
OpenAI Gym (Todorov et al., 2012; Brockman et al., 2016), with results presented for the average
over three seeds. Additional experimental details are provided in the Appendix.
6.1
β ≠ π, ν = 0, with the control variate
We evaluate the performance of the special case of IPG discussed in Section 4.1. This case is of
particular interest, since we can derive monotonic convergence results for a variant of this method
under certain conditions, despite the presence of off-policy updates. Figure 1a shows the performance
on the HalfCheetah-v1 domain, when the policy update batch size is 5000 transitions (i.e. 5 episodes).
'last' and 'rand' indicate whether β samples from the most recent transitions or uniformly from the experience replay. 'last05000' would be equivalent to Q-Prop given ν = 0. Comparing the 'IPG-β-rand05000' and 'Q-Prop' curves, we observe that by drawing the same number of samples randomly from the replay buffer for estimating the critic gradient, instead of using the on-policy samples, we get faster convergence. If we sample batches of size 30000 from the replay buffer, the performance further improves. However, as seen in the 'IPG-β-last30000' curve, if we instead use the 30000 most recent samples, the performance degrades. One possible explanation for this is that, while
using random samples from the replay increases the bound on the bias according to Theorem 1, it
also decorrelates the samples within the batch, providing more stable gradients. This is the original
motivation for experience replay in the DQN method (Mnih et al., 2015), and we have shown that
such decorrelated off-policy samples can similarly produce gains for policy gradient algorithms. See
Table 2 for results on other domains.
The results for this variant of IPG demonstrate that random sampling from the replay provides further
improvement on top of Q-Prop. Note that these replay buffer samples are different from standard
off-policy samples in DDPG or DQN algorithms, which often use aggressive heuristic exploration
strategies. The samples used by IPG are sampled from prior policies that follow a conservative
trust-region update, resulting in greater regularity but less exploration. In the next section, we show
that in some cases, ensuring that the off-policy samples are not too off-policy is essential for good
performance.
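The 'last' vs 'rand' sampling modes compared in this section amount to two ways of drawing from the replay buffer. A minimal sketch (a toy buffer, not the authors' implementation):

```python
import random
from collections import deque

class Replay:
    """Toy replay buffer with the 'last' and 'rand' sampling modes of this section."""

    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        self.buf.append(transition)

    def sample(self, n, mode="rand"):
        if mode == "last":                       # most recent (near on-policy) samples
            return list(self.buf)[-n:]
        return random.sample(list(self.buf), n)  # decorrelated uniform draw

r = Replay()
for t in range(10_000):
    r.add(t)
assert r.sample(5000, mode="last")[0] == 5000       # the 5000 newest transitions
assert len(set(r.sample(5000, mode="rand"))) == 5000  # unique, spread over history
```

'rand' decorrelates the minibatch at the cost of sampling from older (more off-policy) policies; 'last' stays closer to π but yields correlated samples.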
6.2
β = π, ν = 1
In this section, we empirically evaluate another special case of IPG, where β = π, indicating on-policy sampling, and ν = 1, which reduces to a trust-region, on-policy variant of a deterministic actor-critic method. Although this algorithm performs actor-critic updates, the use of a trust region makes it more similar to TRPO or Q-Prop than to DDPG.
              HalfCheetah-v1    Ant-v1          Walker-v1       Humanoid-v1
              β=π     β≠π       β=π     β≠π     β=π     β≠π     β=π     β≠π
IPG-ν=0.2     3356    3458      4237    4415    3047    1932    1231     920
IPG-cv-ν=0.2  4216    4023      3943    3421    1896    1411    1651    1613
IPG-ν=1       2962    4767      3469    3780    2704     805    1571    1530
Q-Prop        4178    4182      3374    3479    2832    1692    1423    1519
TRPO          2889    N.A.      1520    N.A.    1487    N.A.     615    N.A.

Table 2: Comparisons on all domains with mini-batch size 10000 for Humanoid and 5000 otherwise. We compare the maximum of average test rewards in the first 10000 episodes (Humanoid requires more steps to fully converge; see the Appendix for learning curves). Results outperforming Q-Prop (or IPG-cv-ν=0 with β = π) are boldface. The two columns per domain show results with on-policy and off-policy samples for estimating the deterministic policy gradient.
Results for all domains are shown in Table 2. Figure 1b shows the learning curves on Ant-v1.
Although IPG-ν=1 methods can be off-policy, the policy is updated every 5000 samples to keep it
consistent with other IPG methods, while DDPG updates the policy on every step in the environment
and makes other design choices Lillicrap et al. (2016). We see that, in this domain, standard DDPG
becomes stuck with a mean reward of 1000, while IPG-ν=1 improves monotonically, achieving a significantly better result. To investigate why this large discrepancy arises, we also ran IPG-ν=1 with
the same OU process exploration noise as DDPG, and observed large degradation in performance.
This provides empirical support for Theorem 2. It is illuminating to contrast this result with the
previous experiment, where the off-policy samples did not adversely alter the results. In the previous
experiments, the samples came from Gaussian policies updated with trust regions. The difference between β and π was therefore approximately bounded by the trust regions. In the experiment with
Brownian noise, the behaving policy uses temporally correlated noise, with potentially unbounded
KL-divergence from the learned Gaussian policy. In this case, the off-policy samples result in
excessive bias, wiping out the variance reduction benefits of off-policy sampling. In general, we
observed that for the harder Ant-v1 and Walker-v1 domains, on-policy exploration is more effective,
even when doing off-policy state sampling from a replay buffer. This result suggests the following
lesson for designing off-policy actor-critic methods: for domains where exploration is difficult, it may
be more effective to use on-policy exploration with bounded policy updates than to design heuristic
exploration rules such as the OU process noise, due to the resulting reduction in bias.
6.3
General Cases of Interpolated Policy Gradient
Table 2 shows the results for experiments where we compare IPG methods with varying values of ν; additional results are provided in the Appendix. β ≠ π indicates that the method uses off-policy samples from the replay buffer, with the same batch size as the on-policy batch for fair comparison. We ran sweeps over ν ∈ {0.2, 0.4, 0.6, 0.8} and found that ν = 0.2 consistently produces better performance than Q-Prop, TRPO, or prior actor-critic methods. This is consistent with the results in PGQ (O'Donoghue et al., 2017) and ACER (Wang et al., 2017), which found that their equivalent of ν = 0.1 performed best on their benchmarks. Importantly, we compared all methods with the same algorithm designs (exploration, policy, etc.), since Q-Prop and TRPO are IPG-ν=0 with and without the control variate. IPG-ν=1 is a novel variant of the actor-critic method that differs from DDPG (Lillicrap et al., 2016) and SVG(0) (Heess et al., 2015) due to the use of a trust region. The results in Table 2 suggest that, in most cases, the best performing algorithm is one that interpolates between the policy-gradient and actor-critic variants, with intermediate values of ν.
7
Discussion
In this paper, we introduced interpolated policy gradient methods, a family of policy gradient
algorithms that allow mixing off-policy learning with on-policy learning while satisfying performance
bounds. This family of algorithms unifies and interpolates on-policy likelihood ratio policy gradient
and off-policy deterministic policy gradient, and includes a number of prior works as approximate
limiting cases. Empirical results confirm that, in many cases, interpolated gradients have improved
sample-efficiency and stability over the prior state-of-the-art methods, and the theoretical results
provide intuition for analyzing the cases in which the different methods perform well or poorly. Our
hope is that this detailed analysis of interpolated gradient methods can not only provide for more
effective algorithms in practice, but also give useful insight for future algorithm design.
Acknowledgements
This work is supported by generous sponsorship from Cambridge-T?bingen PhD Fellowship, NSERC,
and Google Focused Research Award.
References
Bagnell, J Andrew and Schneider, Jeff. Covariant policy search. IJCAI, 2003.
Brockman, Greg, Cheung, Vicki, Pettersson, Ludwig, Schneider, Jonas, Schulman, John, Tang, Jie,
and Zaremba, Wojciech. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Degris, Thomas, White, Martha, and Sutton, Richard S. Off-policy actor-critic. arXiv preprint
arXiv:1205.4839, 2012.
Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep
reinforcement learning for continuous control. International Conference on Machine Learning
(ICML), 2016.
Gu, Shixiang, Lillicrap, Timothy, Ghahramani, Zoubin, Turner, Richard E, and Levine, Sergey.
Q-prop: Sample-efficient policy gradient with an off-policy critic. ICLR, 2017.
Heess, Nicolas, Wayne, Gregory, Silver, David, Lillicrap, Tim, Erez, Tom, and Tassa, Yuval. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944–2952, 2015.
Jiang, Nan and Li, Lihong. Doubly robust off-policy value evaluation for reinforcement learning. In International Conference on Machine Learning, pp. 652–661, 2016.
Jie, Tang and Abbeel, Pieter. On a connection between importance sampling and the likelihood ratio policy gradient. In Advances in Neural Information Processing Systems, pp. 1000–1008, 2010.
Kakade, Sham and Langford, John. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning (ICML), volume 2, pp. 267–274, 2002.
Levine, Sergey and Koltun, Vladlen. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1–9, 2013.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. ICLR, 2016.
Mahmood, A Rupam, van Hasselt, Hado P, and Sutton, Richard S. Weighted importance sampling for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pp. 3014–3022, 2014.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Munos, Rémi, Stepleton, Tom, Harutyunyan, Anna, and Bellemare, Marc G. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.
O'Donoghue, Brendan, Munos, Rémi, Kavukcuoglu, Koray, and Mnih, Volodymyr. PGQ: Combining policy gradient and Q-learning. ICLR, 2017.
Peshkin, Leonid and Shelton, Christian R. Learning from scarce experience. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
Peters, Jan, Mülling, Katharina, and Altun, Yasemin. Relative entropy policy search. In AAAI, Atlanta, 2010.
Precup, Doina. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80, 2000.
Riedmiller, Martin. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
Ross, Sheldon M. Simulation. Burlington, MA: Elsevier, 2006.
Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I, and Moritz, Philipp. Trust region policy optimization. In ICML, pp. 1889–1897, 2015.
Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. International Conference on Learning Representations (ICLR), 2016.
Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In International Conference on Machine Learning (ICML), 2014.
Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Sutton, Richard S, McAllester, David A, Singh, Satinder P, Mansour, Yishay, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (NIPS), volume 99, pp. 1057–1063, 1999.
Thomas, Philip. Bias in natural actor-critic algorithms. In ICML, pp. 441–448, 2014.
Thomas, Philip and Brunskill, Emma. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pp. 2139–2148, 2016.
Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
Wang, Ziyu, Bapst, Victor, Heess, Nicolas, Mnih, Volodymyr, Munos, Rémi, Kavukcuoglu, Koray, and de Freitas, Nando. Sample efficient actor-critic with experience replay. ICLR, 2017.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
demonstration:1 providing:1 schrittwieser:1 julian:1 difficult:3 mostly:1 potentially:2 trace:1 svg:2 design:9 policy:198 perform:1 observation:1 sm:1 benchmark:2 daan:2 ipg:42 situation:1 mansour:1 introduced:3 david:6 kl:7 connection:2 engine:1 learned:1 inaccuracy:1 nip:2 usually:2 below:1 summarize:1 max:16 memory:1 reliable:1 explanation:1 natural:2 rely:1 residual:2 turner:2 scarce:1 improve:3 numerous:1 temporally:1 deviate:3 prior:12 understanding:1 schulman:13 acknowledgement:1 graf:1 relative:1 fully:2 expect:1 limitation:2 proportional:3 proven:1 versus:1 humanoid:3 illuminating:1 agent:4 consistent:2 storing:1 critic:35 share:1 repeat:1 last:2 free:2 supported:1 bias:32 allow:2 understand:1 institute:2 wide:1 munos:4 absolute:1 decorrelates:1 benefit:2 van:2 curve:4 world:1 transition:3 cumulative:1 rich:1 stuck:1 made:1 reinforcement:14 sifre:1 approximate:3 bernhard:1 technicality:1 confirm:2 keep:1 satinder:1 sequentially:1 robotic:1 conceptual:1 xi:1 ziyu:1 spectrum:4 continuous:6 search:4 lling:1 why:1 table:8 scratch:1 learn:2 nature:2 robust:1 nicolas:4 ca:1 ignoring:1 contributes:1 interact:1 expansion:1 katharina:1 interpolating:1 european:1 domain:10 marc:3 did:1 anna:1 midst:1 motivation:1 noise:5 arise:1 fair:2 referred:1 benchmarking:1 board:1 andrei:1 brunskill:2 replay:14 burlington:1 tang:2 theorem:9 stepleton:1 showing:1 list:1 explored:1 incorporating:1 essential:1 merging:3 importance:9 phd:1 magnitude:1 chen:1 easier:2 entropy:1 lt:4 timothy:3 remi:2 explore:1 likely:1 nserc:1 monotonic:7 applies:1 covariant:2 corresponds:4 springer:1 driessche:1 ma:1 prop:21 viewed:1 cheung:1 ddpg:11 jeff:1 leonid:1 considerable:1 martha:1 except:2 uniformly:1 yuval:3 degradation:1 conservative:1 total:1 experimental:2 uber:1 est:1 indicating:1 highdimensional:1 support:2 arises:1 jonathan:1 alexander:1 avoiding:1 evaluate:2 shelton:2 correlated:2 |
Dynamic Routing Between Capsules
Sara Sabour
Nicholas Frosst
Geoffrey E. Hinton
Google Brain
Toronto
{sasabour, frosst, geoffhinton}@google.com
Abstract
A capsule is a group of neurons whose activity vector represents the instantiation
parameters of a specific type of entity such as an object or an object part. We use
the length of the activity vector to represent the probability that the entity exists and
its orientation to represent the instantiation parameters. Active capsules at one level
make predictions, via transformation matrices, for the instantiation parameters of
higher-level capsules. When multiple predictions agree, a higher level capsule
becomes active. We show that a discrimininatively trained, multi-layer capsule
system achieves state-of-the-art performance on MNIST and is considerably better
than a convolutional net at recognizing highly overlapping digits. To achieve these
results we use an iterative routing-by-agreement mechanism: A lower-level capsule
prefers to send its output to higher level capsules whose activity vectors have a big
scalar product with the prediction coming from the lower-level capsule.
1 Introduction
Human vision ignores irrelevant details by using a carefully determined sequence of fixation points
to ensure that only a tiny fraction of the optic array is ever processed at the highest resolution.
Introspection is a poor guide to understanding how much of our knowledge of a scene comes from
the sequence of fixations and how much we glean from a single fixation, but in this paper we will
assume that a single fixation gives us much more than just a single identified object and its properties.
We assume that our multi-layer visual system creates a parse tree-like structure on each fixation, and
we ignore the issue of how these single-fixation parse trees are coordinated over multiple fixations.
Parse trees are generally constructed on the fly by dynamically allocating memory. Following Hinton
et al. [2000], however, we shall assume that, for a single fixation, a parse tree is carved out of a fixed
multilayer neural network like a sculpture is carved from a rock. Each layer will be divided into many
small groups of neurons called "capsules" (Hinton et al. [2011]) and each node in the parse tree will
correspond to an active capsule. Using an iterative routing process, each active capsule will choose a
capsule in the layer above to be its parent in the tree. For the higher levels of a visual system, this
iterative process will be solving the problem of assigning parts to wholes.
The activities of the neurons within an active capsule represent the various properties of a particular
entity that is present in the image. These properties can include many different types of instantiation
parameter such as pose (position, size, orientation), deformation, velocity, albedo, hue, texture, etc.
One very special property is the existence of the instantiated entity in the image. An obvious way to
represent existence is by using a separate logistic unit whose output is the probability that the entity
exists. In this paper we explore an interesting alternative which is to use the overall length of the
vector of instantiation parameters to represent the existence of the entity and to force the orientation
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
of the vector to represent the properties of the entity¹. We ensure that the length of the vector output
of a capsule cannot exceed 1 by applying a non-linearity that leaves the orientation of the vector
unchanged but scales down its magnitude.
The fact that the output of a capsule is a vector makes it possible to use a powerful dynamic routing
mechanism to ensure that the output of the capsule gets sent to an appropriate parent in the layer
above. Initially, the output is routed to all possible parents but is scaled down by coupling coefficients
that sum to 1. For each possible parent, the capsule computes a "prediction vector" by multiplying its
own output by a weight matrix. If this prediction vector has a large scalar product with the output of
a possible parent, there is top-down feedback which increases the coupling coefficient for that parent
and decreasing it for other parents. This increases the contribution that the capsule makes to that
parent thus further increasing the scalar product of the capsule?s prediction with the parent?s output.
This type of "routing-by-agreement" should be far more effective than the very primitive form of
routing implemented by max-pooling, which allows neurons in one layer to ignore all but the most
active feature detector in a local pool in the layer below. We demonstrate that our dynamic routing
mechanism is an effective way to implement the "explaining away" that is needed for segmenting
highly overlapping objects.
Convolutional neural networks (CNNs) use translated replicas of learned feature detectors. This
allows them to translate knowledge about good weight values acquired at one position in an image
to other positions. This has proven extremely helpful in image interpretation. Even though we are
replacing the scalar-output feature detectors of CNNs with vector-output capsules and max-pooling
with routing-by-agreement, we would still like to replicate learned knowledge across space. To
achieve this, we make all but the last layer of capsules be convolutional. As with CNNs, we make
higher-level capsules cover larger regions of the image. Unlike max-pooling however, we do not throw
away information about the precise position of the entity within the region. For low level capsules,
location information is "place-coded" by which capsule is active. As we ascend the hierarchy,
more and more of the positional information is "rate-coded" in the real-valued components of the
output vector of a capsule. This shift from place-coding to rate-coding combined with the fact that
higher-level capsules represent more complex entities with more degrees of freedom suggests that the
dimensionality of capsules should increase as we ascend the hierarchy.
2 How the vector inputs and outputs of a capsule are computed
There are many possible ways to implement the general idea of capsules. The aim of this paper is not
to explore this whole space but simply to show that one fairly straightforward implementation works
well and that dynamic routing helps.
We want the length of the output vector of a capsule to represent the probability that the entity
represented by the capsule is present in the current input. We therefore use a non-linear "squashing"
function to ensure that short vectors get shrunk to almost zero length and long vectors get shrunk to a
length slightly below 1. We leave it to discriminative learning to make good use of this non-linearity.
vj = (||sj||^2 / (1 + ||sj||^2)) · (sj / ||sj||)    (1)
where vj is the vector output of capsule j and sj is its total input.
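To make Eq. 1 concrete, here is a minimal NumPy sketch of the squashing function (an illustrative fragment, not the authors' implementation; the small `eps` term is an added assumption for numerical stability at zero input):

```python
import numpy as np

def squash(s, eps=1e-8):
    # Eq. 1: scale the unit direction of s by ||s||^2 / (1 + ||s||^2),
    # so short vectors shrink toward zero and long ones approach unit length.
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([3.0, 4.0]))   # input length 5
print(np.linalg.norm(v))           # length 25/26 ≈ 0.9615, direction unchanged
```

A vector of length 5 keeps its direction but is squashed to length 25/26, while a vector of length 0.1 is squashed to roughly 0.01.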
For all but the first layer of capsules, the total input to a capsule sj is a weighted sum over all
"prediction vectors" ûj|i from the capsules in the layer below, each produced by multiplying the
output ui of a capsule in the layer below by a weight matrix Wij

sj = Σi cij ûj|i ,    ûj|i = Wij ui    (2)
where the cij are coupling coefficients that are determined by the iterative dynamic routing process.
The coupling coefficients between capsule i and all the capsules in the layer above sum to 1 and are
determined by a "routing softmax" whose initial logits bij are the log prior probabilities that capsule i
¹ This makes biological sense as it does not use large activities to get accurate representations of things that
probably don't exist.
should be coupled to capsule j.
cij = exp(bij) / Σk exp(bik)    (3)
The log priors can be learned discriminatively at the same time as all the other weights. They depend
on the location and type of the two capsules but not on the current input image2 . The initial coupling
coefficients are then iteratively refined by measuring the agreement between the current output vj of
each capsule, j, in the layer above and the prediction ûj|i made by capsule i.
The agreement is simply the scalar product aij = vj · ûj|i. This agreement is treated as if it was a log
likelihood and is added to the initial logit, bij before computing the new values for all the coupling
coefficients linking capsule i to higher level capsules.
In convolutional capsule layers, each capsule outputs a local grid of vectors to each type of capsule in
the layer above using different transformation matrices for each member of the grid as well as for
each type of capsule.
Procedure 1 Routing algorithm.
1: procedure ROUTING(ûj|i , r, l)
2:     for all capsule i in layer l and capsule j in layer (l + 1): bij ← 0.
3:     for r iterations do
4:         for all capsule i in layer l: ci ← softmax(bi)            ▷ softmax computes Eq. 3
5:         for all capsule j in layer (l + 1): sj ← Σi cij ûj|i
6:         for all capsule j in layer (l + 1): vj ← squash(sj)       ▷ squash computes Eq. 1
7:         for all capsule i in layer l and capsule j in layer (l + 1): bij ← bij + ûj|i · vj
   return vj
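Procedure 1 can be sketched directly in NumPy. This is an illustrative version for a single pair of flat capsule layers (the function name and the `[num_in, num_out, dim_out]` layout are assumptions; the paper applies the same update between convolutional capsule grids):

```python
import numpy as np

def squash(s):
    # Eq. 1 non-linearity.
    sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + 1e-8)

def routing(u_hat, r=3):
    """Procedure 1: u_hat holds prediction vectors u_hat[i, j],
    shape [num_in, num_out, dim_out]; r is the number of iterations."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                           # line 2: b_ij <- 0
    for _ in range(r):                                        # line 3
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # line 4: softmax over parents (Eq. 3)
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # line 5: weighted sum of predictions
        v = squash(s)                                         # line 6
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # line 7: agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(32, 10, 16))   # 32 lower capsules, 10 parents, 16D outputs
v = routing(u_hat, r=3)
print(v.shape)  # (10, 16)
```

Because b starts at zero, the first iteration routes each lower capsule to all parents with equal coefficients, exactly as the text describes.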
3 Margin loss for digit existence
We are using the length of the instantiation vector to represent the probability that a capsule's entity
exists. We would like the top-level capsule for digit class k to have a long instantiation vector if and
only if that digit is present in the image. To allow for multiple digits, we use a separate margin loss,
Lk for each digit capsule, k:
Lk = Tk max(0, m+ − ||vk||)^2 + λ (1 − Tk) max(0, ||vk|| − m−)^2    (4)

where Tk = 1 iff a digit of class k is present³ and m+ = 0.9 and m− = 0.1. The λ down-weighting
of the loss for absent digit classes stops the initial learning from shrinking the lengths of the activity
vectors of all the digit capsules. We use λ = 0.5. The total loss is simply the sum of the losses of all
digit capsules.
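A minimal NumPy sketch of Eq. 4, assuming the capsule lengths ||vk|| have already been computed (the function name and argument layout are illustrative):

```python
import numpy as np

def margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    # Eq. 4: v_norms holds the lengths ||v_k|| of the digit capsules,
    # targets is the 0/1 indicator vector T_k.
    present = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return np.sum(present + absent)

# Perfect prediction: the class-3 capsule is long, all others short -> zero loss.
norms = np.full(10, 0.05); norms[3] = 0.95
t = np.zeros(10); t[3] = 1.0
print(margin_loss(norms, t))  # 0.0
```

Note that a capsule only incurs loss when it is shorter than m+ for a present digit or longer than m− for an absent one, so confident correct outputs contribute nothing.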
4 CapsNet architecture
A simple CapsNet architecture is shown in Fig. 1. The architecture is shallow with only two
convolutional layers and one fully connected layer. Conv1 has 256, 9 × 9 convolution kernels with a
stride of 1 and ReLU activation. This layer converts pixel intensities to the activities of local feature
detectors that are then used as inputs to the primary capsules.
The primary capsules are the lowest level of multi-dimensional entities and, from an inverse graphics
perspective, activating the primary capsules corresponds to inverting the rendering process. This is a
very different type of computation than piecing instantiated parts together to make familiar wholes,
which is what capsules are designed to be good at.
The second layer (PrimaryCapsules) is a convolutional capsule layer with 32 channels of convolutional
8D capsules (i.e. each primary capsule contains 8 convolutional units with a 9 × 9 kernel and a stride
of 2). Each primary capsule output sees the outputs of all 256 × 81 Conv1 units whose receptive
² For MNIST we found that it was sufficient to set all of these priors to be equal.
³ We do not allow an image to contain two instances of the same digit class. We address this weakness of
capsules in the discussion section.
Figure 1: A simple CapsNet with 3 layers. This model gives comparable results to deep convolutional
networks (such as Chang and Chen [2015]). The length of the activity vector of each capsule
in DigitCaps layer indicates presence of an instance of each class and is used to calculate the
classification loss. Wij is a weight matrix between each ui, i ∈ (1, 32 × 6 × 6) in PrimaryCapsules
and vj, j ∈ (1, 10).
Figure 2: Decoder structure to reconstruct a digit from the DigitCaps layer representation. The
euclidean distance between the image and the output of the Sigmoid layer is minimized during
training. We use the true label as reconstruction target during training.
fields overlap with the location of the center of the capsule. In total PrimaryCapsules has [32 × 6 × 6]
capsule outputs (each output is an 8D vector) and each capsule in the [6 × 6] grid is sharing their
weights with each other. One can see PrimaryCapsules as a Convolution layer with Eq. 1 as its block
non-linearity. The final Layer (DigitCaps) has one 16D capsule per digit class and each of these
capsules receives input from all the capsules in the layer below.
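The [6 × 6] grid size follows from standard "valid" convolution arithmetic: 28 × 28 MNIST through Conv1's 9 × 9 stride-1 kernels gives 20 × 20, and the 9 × 9 stride-2 primary-capsule kernels give (20 − 9)/2 + 1 = 6. A quick check (the helper name is illustrative):

```python
def conv_out(size, kernel, stride):
    # Spatial output size of a "valid" (no padding) convolution.
    return (size - kernel) // stride + 1

h1 = conv_out(28, 9, 1)      # Conv1: 28x28 MNIST -> 20x20
h2 = conv_out(h1, 9, 2)      # PrimaryCapsules: 20x20 -> 6x6
print(h1, h2, 32 * h2 * h2)  # 20 6 1152
```

So DigitCaps receives 32 × 6 × 6 = 1152 eight-dimensional prediction sources per digit class.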
We have routing only between two consecutive capsule layers (e.g. PrimaryCapsules and DigitCaps).
Since Conv1 output is 1D, there is no orientation in its space to agree on. Therefore, no routing is used
between Conv1 and PrimaryCapsules. All the routing logits (bij ) are initialized to zero. Therefore,
initially a capsule output (ui ) is sent to all parent capsules (v0 ...v9 ) with equal probability (cij ).
Our implementation is in TensorFlow (Abadi et al. [2016]) and we use the Adam optimizer (Kingma
and Ba [2014]) with its TensorFlow default parameters, including the exponentially decaying learning
rate, to minimize the sum of the margin losses in Eq. 4.
4.1 Reconstruction as a regularization method
We use an additional reconstruction loss to encourage the digit capsules to encode the instantiation
parameters of the input digit. During training, we mask out all but the activity vector of the correct
digit capsule. Then we use this activity vector to reconstruct the input image. The output of the digit
capsule is fed into a decoder consisting of 3 fully connected layers that model the pixel intensities as
described in Fig. 2. We minimize the sum of squared differences between the outputs of the logistic
units and the pixel intensities. We scale down this reconstruction loss by 0.0005 so that it does not
dominate the margin loss during training. As illustrated in Fig. 3 the reconstructions from the 16D
output of the CapsNet are robust while keeping only important details.
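The masking step can be sketched as follows (an illustrative NumPy fragment, not the authors' TensorFlow code; the function name is assumed):

```python
import numpy as np

def mask_for_reconstruction(digit_caps, label):
    # Zero out every capsule's activity vector except the target digit's,
    # then flatten as input to the fully connected decoder (Fig. 2).
    mask = np.zeros((digit_caps.shape[0], 1))
    mask[label] = 1.0
    return (digit_caps * mask).reshape(-1)

caps = np.random.default_rng(1).normal(size=(10, 16))  # DigitCaps output
x = mask_for_reconstruction(caps, label=7)
print(x.shape)  # (160,)
```

Only the 16 entries belonging to the chosen digit survive, which forces those 16 dimensions to carry enough information to redraw the input.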
Figure 3: Sample MNIST test reconstructions of a CapsNet with 3 routing iterations. (l, p, r)
represents the label, the prediction and the reconstruction target respectively. The two rightmost
columns show two reconstructions of a failure example and it explains how the model confuses a
5 and a 3 in this image. The other columns are from correct classifications and shows that model
preserves many of the details while smoothing the noise.
[Figure 3 image grid: columns labelled (l, p, r) = (2, 2, 2), (5, 5, 5), (8, 8, 8), (9, 9, 9), (5, 3, 5),
(5, 3, 3); rows show the Input and reconstructed Output images.]
Table 1: CapsNet classification test accuracy. The MNIST average and standard deviation results are
reported from 3 trials.

Method     Routing   Reconstruction   MNIST (%)      MultiMNIST (%)
Baseline      -            -          0.39            8.1
CapsNet       1            no         0.34±0.032      -
CapsNet       1            yes        0.29±0.011      7.5
CapsNet       3            no         0.35±0.036      -
CapsNet       3            yes        0.25±0.005      5.2

5 Capsules on MNIST
Training is performed on 28 × 28 MNIST (LeCun et al. [1998]) images that have been shifted by up
to 2 pixels in each direction with zero padding. No other data augmentation/deformation is used. The
dataset has 60K and 10K images for training and testing respectively.
We test using a single model without any model averaging. Wan et al. [2013] achieves 0.21% test
error with ensembling and augmenting the data with rotation and scaling. They achieve 0.39%
without them. We get a low test error (0.25%) on a 3 layer network previously only achieved by
deeper networks. Tab. 1 reports the test error rate on MNIST for different CapsNet setups and shows
the importance of routing and reconstruction regularizer. Adding the reconstruction regularizer boosts
the routing performance by enforcing the pose encoding in the capsule vector.
The baseline is a standard CNN with three convolutional layers of 256, 256, 128 channels. Each has
5x5 kernels and stride of 1. The last convolutional layers are followed by two fully connected layers
of size 328, 192. The last fully connected layer is connected with dropout to a 10 class softmax layer
with cross entropy loss. The baseline is also trained on 2-pixel shifted MNIST with Adam optimizer.
The baseline is designed to achieve the best performance on MNIST while keeping the computation
cost as close as to CapsNet. In terms of number of parameters the baseline has 35.4M while CapsNet
has 8.2M parameters and 6.8M parameters without the reconstruction subnetwork.
5.1 What the individual dimensions of a capsule represent
Since we are passing the encoding of only one digit and zeroing out other digits, the dimensions of a
digit capsule should learn to span the space of variations in the way digits of that class are instantiated.
These variations include stroke thickness, skew and width. They also include digit-specific variations
such as the length of the tail of a 2. We can see what the individual dimensions represent by making
use of the decoder network. After computing the activity vector for the correct digit capsule, we can
feed a perturbed version of this activity vector to the decoder network and see how the perturbation
affects the reconstruction. Examples of these perturbations are shown in Fig. 4. We found that one
dimension (out of 16) of the capsule almost always represents the width of the digit. While some
dimensions represent combinations of global variations, there are other dimensions that represent
Figure 4: Dimension perturbations. Each row shows the reconstruction when one of the 16 dimensions
in the DigitCaps representation is tweaked by intervals of 0.05 in the range [−0.25, 0.25].
[Figure 4 row labels: Scale and thickness; Localized part; Stroke thickness; Localized skew; Width and
translation; Localized part.]
variation in a localized part of the digit. For example, different dimensions are used for the length of
the ascender of a 6 and the size of the loop.
5.2 Robustness to Affine Transformations
Experiments show that each DigitCaps capsule learns a more robust representation for each class
than a traditional convolutional network. Because there is natural variance in skew, rotation, style, etc
in hand written digits, the trained CapsNet is moderately robust to small affine transformations of the
training data.
To test the robustness of CapsNet to affine transformations, we trained a CapsNet and a traditional
convolutional network (with MaxPooling and DropOut) on a padded and translated MNIST training
set, in which each example is an MNIST digit placed randomly on a black background of 40 × 40
pixels. We then tested this network on the affNIST⁴ data set, in which each example is an MNIST digit
with a random small affine transformation. Our models were never trained with affine transformations
other than translation and any natural transformation seen in the standard MNIST. An under-trained
CapsNet with early stopping which achieved 99.23% accuracy on the expanded MNIST test set
achieved 79% accuracy on the affnist test set. A traditional convolutional model with a similar
number of parameters which achieved similar accuracy (99.22%) on the expanded mnist test set only
achieved 66% on the affnist test set.
6 Segmenting highly overlapping digits
Dynamic routing can be viewed as a parallel attention mechanism that allows each capsule at one
level to attend to some active capsules at the level below and to ignore others. This should allow
the model to recognize multiple objects in the image even if objects overlap. Hinton et al. propose
the task of segmenting and recognizing highly overlapping digits (Hinton et al. [2000] and others
have tested their networks in a similar domain (Goodfellow et al. [2013], Ba et al. [2014], Greff et al.
[2016]). The routing-by-agreement should make it possible to use a prior about the shape of objects
to help segmentation and it should obviate the need to make higher-level segmentation decisions in
the domain of pixels.
6.1 MultiMNIST dataset
We generate the MultiMNIST training and test dataset by overlaying a digit on top of another digit
from the same set (training or test) but different class. Each digit is shifted up to 4 pixels in each
direction resulting in a 36 × 36 image. Considering a digit in a 28 × 28 image is bounded in a 20 × 20
box, two digits bounding boxes on average have 80% overlap. For each digit in the MNIST dataset
we generate 1K MultiMNIST examples. So the training set size is 60M and the test set size is 10M.
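Dataset generation can be sketched as below (an illustrative NumPy fragment; per the Fig. 5 caption, overlapping pixel values are clipped at 1, and the function name is assumed):

```python
import numpy as np

def overlay(digit_a, digit_b, shift=4, rng=None):
    # Place two 28x28 digits (of different classes) on a 36x36 canvas,
    # each shifted by up to `shift` pixels in each direction; clip at 1.
    if rng is None:
        rng = np.random.default_rng()
    canvas = 28 + 2 * shift                      # 36 for shift = 4
    out = np.zeros((canvas, canvas))
    for d in (digit_a, digit_b):
        dy, dx = rng.integers(0, 2 * shift + 1, size=2)
        out[dy:dy + 28, dx:dx + 28] += d
    return np.clip(out, 0.0, 1.0)

a = b = np.full((28, 28), 0.8)                   # stand-ins for two MNIST digits
img = overlay(a, b, rng=np.random.default_rng(0))
print(img.shape, img.max())  # (36, 36) 1.0
```

Because both 28-pixel-wide digits must fit on a 36-pixel canvas, their bounding boxes always overlap substantially, which is what makes the segmentation task hard.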
⁴ Available at http://www.cs.toronto.edu/~tijmen/affNIST/.
Figure 5: Sample reconstructions of a CapsNet with 3 routing iterations on MultiMNIST test dataset.
The two reconstructed digits are overlayed in green and red as the lower image. The upper image
shows the input image. L:(l1 , l2 ) represents the label for the two digits in the image and R:(r1 , r2 )
represents the two digits used for reconstruction. The two right most columns show two examples
with wrong classification reconstructed from the label and from the prediction (P). In the (2, 8)
example the model confuses 8 with a 7 and in (4, 9) it confuses 9 with 0. The other columns have
correct classifications and show that the model accounts for all the pixels while being able to assign
one pixel to two digits in extremely difficult scenarios (columns 1–4). Note that in dataset generation
the pixel values are clipped at 1. The two columns with the (*) mark show reconstructions from a
digit that is neither the label nor the prediction. These columns suggests that the model is not just
finding the best fit for all the digits in the image including the ones that do not exist. Therefore in case
of (5, 0) it cannot reconstruct a 7 because it knows that there is a 5 and 0 that fit best and account for
all the pixels. Also, in case of (8, 1) the loop of 8 has not triggered 0 because it is already accounted
for by 8. Therefore it will not assign one pixel to two digits if one of them does not have any other
support.
[Figure 5 image grid omitted: each column is labelled with its reconstruction pair R:(r1, r2) and
ground-truth pair L:(l1, l2) as described in the caption.]

6.2 MultiMNIST results
Our 3 layer CapsNet model trained from scratch on MultiMNIST training data achieves higher
test classification accuracy than our baseline convolutional model. We are achieving the same
classification error rate of 5.0% on highly overlapping digit pairs as the sequential attention model of
Ba et al. [2014] achieves on a much easier task that has far less overlap (80% overlap of the boxes
around the two digits in our case vs < 4% for Ba et al. [2014]). On test images, which are composed
of pairs of images from the test set, we treat the two most active digit capsules as the classification
produced by the capsules network. During reconstruction we pick one digit at a time and use the
activity vector of the chosen digit capsule to reconstruct the image of the chosen digit (we know this
image because we used it to generate the composite image). The only difference with our MNIST
model is that we increased the period of the decay step for the learning rate to be 10× larger because
the training dataset is larger.
The reconstructions illustrated in Fig. 5 show that CapsNet is able to segment the image into the
two original digits. Since this segmentation is not at pixel level we observe that the model is able to
deal correctly with the overlaps (a pixel is on in both digits) while accounting for all the pixels. The
position and the style of each digit is encoded in DigitCaps. The decoder has learned to reconstruct
a digit given the encoding. The fact that it is able to reconstruct digits regardless of the overlap
shows that each digit capsule can pick up the style and position from the votes it is receiving from
PrimaryCapsules layer.
Tab. 1 emphasizes the importance of capsules with routing on this task. As a baseline for the
classification of CapsNet accuracy we trained a convolution network with two convolution layers and
two fully connected layers on top of them. The first layer has 512 convolution kernels of size 9 × 9
and stride 1. The second layer has 256 kernels of size 5 × 5 and stride 1. After each convolution layer
the model has a pooling layer of size 2 × 2 and stride 2. The third layer is a 1024D fully connected
layer. All three layers have ReLU non-linearities. The final layer of 10 units is fully connected. We
use the TensorFlow default Adam optimizer (Kingma and Ba [2014]) to train a sigmoid cross entropy
loss on the output of final layer. This model has 24.56M parameters which is 2 times more parameters
than CapsNet with 11.36M parameters. We started with a smaller CNN (32 and 64 convolutional
kernels of 5 × 5 and stride of 1 and a 512D fully connected layer) and incrementally increased the
width of the network until we reached the best test accuracy on a 10K subset of the MultiMNIST
data. We also searched for the right decay step on the 10K validation set.
We decode the two most active DigitCaps capsules one at a time and get two images. Then by
assigning any pixel with non-zero intensity to each digit we get the segmentation results for each
digit.
7 Other datasets
We tested our capsule model on CIFAR10 and achieved 10.6% error with an ensemble of 7 models
each of which is trained with 3 routing iterations on 24 × 24 patches of the image. Each model
has the same architecture as the simple model we used for MNIST except that there are three color
channels and we used 64 different types of primary capsule. We also found that it helped to introduce
a "none-of-the-above" category for the routing softmaxes, since we do not expect the final layer of
ten capsules to explain everything in the image. 10.6% test error is about what standard convolutional
nets achieved when they were first applied to CIFAR10 (Zeiler and Fergus [2013]).
One drawback of Capsules which it shares with generative models is that it likes to account for
everything in the image so it does better when it can model the clutter than when it just uses an
additional "orphan" category in the dynamic routing. In CIFAR-10, the backgrounds are much too
varied to model in a reasonable sized net which helps to account for the poorer performance.
We also tested the exact same architecture as we used for MNIST on smallNORB (LeCun et al.
[2004]) and achieved 2.7% test error rate, which is on-par with the state-of-the-art (Cireşan et al.
[2011]). The smallNORB dataset consists of 96x96 stereo grey-scale images. We resized the images
to 48x48 and during training processed random 32x32 crops of them. We passed the central 32x32
patch during test.
We also trained a smaller network on the small training set of SVHN (Netzer et al. [2011]) with
only 73257 images. We reduced the number of first convolutional layer channels to 64, the primary
capsule layer to 16 6D-capsules with 8D final capsule layer at the end and achieved 4.3% on the test
set.
8 Discussion and previous work
For thirty years, the state-of-the-art in speech recognition used hidden Markov models with Gaussian
mixtures as output distributions. These models were easy to learn on small computers, but they
had a representational limitation that was ultimately fatal: The one-of-n representations they use
are exponentially inefficient compared with, say, a recurrent neural network that uses distributed
representations. To double the amount of information that an HMM can remember about the string it
has generated so far, we need to square the number of hidden nodes. For a recurrent net we only need
to double the number of hidden neurons.
Now that convolutional neural networks have become the dominant approach to object recognition, it
makes sense to ask whether there are any exponential inefficiencies that may lead to their demise. A
good candidate is the difficulty that convolutional nets have in generalizing to novel viewpoints. The
ability to deal with translation is built in, but for the other dimensions of an affine transformation
we have to chose between replicating feature detectors on a grid that grows exponentially with the
number of dimensions, or increasing the size of the labelled training set in a similarly exponential way.
Capsules (Hinton et al. [2011]) avoid these exponential inefficiencies by converting pixel intensities
into vectors of instantiation parameters of recognized fragments and then applying transformation
matrices to the fragments to predict the instantiation parameters of larger fragments. Transformation
matrices that learn to encode the intrinsic spatial relationship between a part and a whole constitute
viewpoint invariant knowledge that automatically generalizes to novel viewpoints. Hinton et al. [2011]
proposed transforming autoencoders to generate the instantiation parameters of the PrimaryCapsule
layer and their system required transformation matrices to be supplied externally. We propose a
complete system that also answers "how larger and more complex visual entities can be recognized
by using agreements of the poses predicted by active, lower-level capsules".
Capsules make a very strong representational assumption: At each location in the image, there is
at most one instance of the type of entity that a capsule represents. This assumption, which was
motivated by the perceptual phenomenon called "crowding" (Pelli et al. [2004]), eliminates the
binding problem (Hinton [1981a]) and allows a capsule to use a distributed representation (its activity
vector) to encode the instantiation parameters of the entity of that type at a given location. This
distributed representation is exponentially more efficient than encoding the instantiation parameters
by activating a point on a high-dimensional grid and with the right distributed representation, capsules
can then take full advantage of the fact that spatial relationships can be modelled by matrix multiplies.
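As a toy illustration of this point (not code from the paper), the sketch below represents poses as 3x3 homogeneous matrices and uses hypothetical learned part-whole relations: each part predicts the whole's pose by a matrix multiply, and the predictions agree under any novel viewpoint.

```python
import numpy as np

def transform(tx, ty, theta):
    # 2D rigid transform as a 3x3 homogeneous matrix.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Hypothetical learned relations: pose of each part expressed in the whole's frame.
R_nose = transform(0.0, -0.2, 0.0)
R_mouth = transform(0.0, 0.4, 0.1)

def predict_whole(part_image_pose, part_in_whole):
    # Invert the part-whole relation to vote for the whole's pose (a matrix multiply).
    return part_image_pose @ np.linalg.inv(part_in_whole)

for viewpoint in [transform(0, 0, 0), transform(2.0, 1.0, 0.7)]:
    face_pose = viewpoint @ transform(0.5, 0.5, 0.2)  # face pose under this viewpoint
    nose_pose = face_pose @ R_nose                    # observed part poses
    mouth_pose = face_pose @ R_mouth
    # Both parts vote for the same face pose, so routing-by-agreement would succeed.
    assert np.allclose(predict_whole(nose_pose, R_nose),
                       predict_whole(mouth_pose, R_mouth))
```

The relations R_nose and R_mouth never change with viewpoint, which is what makes them viewpoint-invariant knowledge.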
Capsules use neural activities that vary as viewpoint varies rather than trying to eliminate viewpoint
variation from the activities. This gives them an advantage over "normalization" methods like
spatial transformer networks (Jaderberg et al. [2015]): They can deal with multiple different affine
transformations of different objects or object parts at the same time.
Capsules are also very good for dealing with segmentation, which is another of the toughest problems
in vision, because the vector of instantiation parameters allows them to use routing-by-agreement, as
we have demonstrated in this paper. The importance of the dynamic routing procedure is also backed by
biologically plausible models of invariant pattern recognition in the visual cortex. Hinton [1981b]
proposes dynamic connections and canonical object based frames of reference to generate shape
descriptions that can be used for object recognition. Olshausen et al. [1993] improves upon the
dynamic connections of Hinton [1981b] and presents a biologically plausible, position and scale invariant model
of object representations.
Research on capsules is now at a similar stage to research on recurrent neural networks for speech
recognition at the beginning of this century. There are fundamental representational reasons for
believing that it is a better approach but it probably requires a lot more small insights before it can
out-perform a highly developed technology. The fact that a simple capsules system already gives
unparalleled performance at segmenting overlapping digits is an early indication that capsules are a
direction worth exploring.
Acknowledgement. Of the many who provided us with constructive comments, we are specially
grateful to Robert Gens, Eric Langlois, Vincent Vanhoucke, Chris Williams, and the reviewers for
their fruitful comments and corrections.
References
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine
learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual
attention. arXiv preprint arXiv:1412.7755, 2014.
Jia-Ren Chang and Yong-Sheng Chen. Batch-normalized maxout network in network. arXiv preprint
arXiv:1511.02583, 2015.
Dan C Cireşan, Ueli Meier, Jonathan Masci, Luca M Gambardella, and Jürgen Schmidhuber. High-performance
neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011.
Ian J Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet. Multi-digit number
recognition from street view imagery using deep convolutional neural networks. arXiv preprint
arXiv:1312.6082, 2013.
Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hao, Harri Valpola, and Jürgen Schmidhuber.
Tagger: Deep unsupervised perceptual grouping. In Advances in Neural Information Processing
Systems, pages 4484–4492, 2016.
Geoffrey E Hinton. Shape representation in parallel systems. In International Joint Conference on
Artificial Intelligence Vol 2, 1981a.
Geoffrey E Hinton. A parallel computation that assigns canonical object-based frames of reference.
In Proceedings of the 7th international joint conference on Artificial intelligence-Volume 2, pages
683–685. Morgan Kaufmann Publishers Inc., 1981b.
Geoffrey E Hinton, Zoubin Ghahramani, and Yee Whye Teh. Learning to parse images. In Advances
in neural information processing systems, pages 463–469, 2000.
Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International
Conference on Artificial Neural Networks, pages 44–51. Springer, 2011.
Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer
networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits,
1998.
Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004.
Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–104. IEEE,
2004.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading
digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning
and unsupervised feature learning, volume 2011, page 5, 2011.
Bruno A Olshausen, Charles H Anderson, and David C Van Essen. A neurobiological model of visual
attention and invariant pattern recognition based on dynamic routing of information. Journal of
Neuroscience, 13(11):4700–4719, 1993.
Denis G Pelli, Melanie Palomares, and Najib J Majaj. Crowding is unlike ordinary masking:
Distinguishing feature integration from detection. Journal of vision, 4(12):12–12, 2004.
Li Wan, Matthew D Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural
networks using dropconnect. In Proceedings of the 30th International Conference on Machine
Learning (ICML-13), pages 1058–1066, 2013.
Matthew D Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural
networks. arXiv preprint arXiv:1301.3557, 2013.
A How many routing iterations to use?
In order to experimentally verify the convergence of the routing algorithm we plot the average change
in the routing logits at each routing iteration. Fig. A.1 shows the average bij change after each routing
iteration. Experimentally we observe that there is negligible change in the routing by 5 iterations from
the start of training. The average change in the 2nd pass of the routing settles down to 0.007 after 500
epochs of training, while at routing iteration 5 the logits only change by 1e−5 on average.
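The routing procedure whose convergence is measured here can be sketched in a few lines of numpy. The capsule counts, dimensions, and random predictions are illustrative; the recorded quantity is the per-iteration average |Δb_ij|, i.e. the quantity plotted in Fig. A.1.

```python
import numpy as np

def squash(s):
    # Non-linearity from the paper: short vectors shrink toward 0,
    # long vectors toward (but never reaching) unit length.
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def routing(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules, shape (n_lower, n_upper, dim).
    b = np.zeros(u_hat.shape[:2])  # routing logits b_ij
    avg_change = []
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # total input per upper capsule
        v = squash(s)                                         # upper capsule outputs
        delta_b = (u_hat * v[None]).sum(axis=-1)              # agreement u_hat . v
        b = b + delta_b
        avg_change.append(np.abs(delta_b).mean())             # avg logit change this iteration
    return v, avg_change

rng = np.random.default_rng(0)
v, avg_change = routing(rng.normal(size=(32, 10, 8)), n_iters=5)
```

Plotting `avg_change` against the iteration index for a trained network reproduces the kind of curve shown in Fig. A.1.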
Figure A.1: Average change of each routing logit (bij) at each routing iteration. After 500 epochs of
training on MNIST the average change is stabilized and, as shown in the right figure, it decreases almost
linearly in log scale with more routing iterations. (a) During training. (b) Log scale of final differences.
We observed that, in general, more routing iterations increase the network capacity and tend to overfit
to the training dataset. Fig. A.2 shows a comparison of Capsule training loss on Cifar10 when trained
with 1 iteration of routing vs. 3 iterations of routing. Motivated by Fig. A.2 and Fig. A.1, we suggest 3
iterations of routing for all experiments.
Figure A.2: Training loss of CapsuleNet on cifar10 dataset. The batch size at each training step is 128.
The CapsuleNet with 3 iteration of routing optimizes the loss faster and converges to a lower loss at
the end.
Incorporating Side Information by Adaptive
Convolution
Di Kang
Debarun Dhar
Antoni B. Chan
Department of Computer Science
City University of Hong Kong
{dkang5-c, ddhar2-c}@my.cityu.edu.hk, [email protected]
Abstract
Computer vision tasks often have side information available that is helpful to
solve the task. For example, for crowd counting, the camera perspective (e.g.,
camera angle and height) gives a clue about the appearance and scale of people
in the scene. While side information has been shown to be useful for counting
systems using traditional hand-crafted features, it has not been fully utilized in
counting systems based on deep learning. In order to incorporate the available
side information, we propose an adaptive convolutional neural network (ACNN),
where the convolution filter weights adapt to the current scene context via the
side information. In particular, we model the filter weights as a low-dimensional
manifold within the high-dimensional space of filter weights. The filter weights are
generated using a learned "filter manifold" sub-network, whose input is the side
information. With the help of side information and adaptive weights, the ACNN can
disentangle the variations related to the side information, and extract discriminative
features related to the current context (e.g. camera perspective, noise level, blur
kernel parameters). We demonstrate the effectiveness of ACNN incorporating side
information on 3 tasks: crowd counting, corrupted digit recognition, and image
deblurring. Our experiments show that ACNN improves the performance compared
to a plain CNN with a similar number of parameters. Since existing crowd counting
datasets do not contain ground-truth side information, we collect a new dataset
with the ground-truth camera angle and height as the side information.
1 Introduction
Computer vision tasks often have side information available that is helpful to solve the task. Here we
define "side information" as auxiliary metadata that is associated with the main input, and that affects
the appearance/properties of the main input. For example, the camera angle affects the appearance of
a person in an image (see Fig. 1 top). Even within the same scene, a person's appearance changes as
they move along the ground-plane, due to changes in the relative angles to the camera sensor. Most
deep learning methods ignore the side information, since if given enough data, a sufficiently large
deep network should be able to learn internal representations that are invariant to the side information.
In this paper, we explore how side information can be directly incorporated into deep networks so as
to improve their effectiveness.
Our motivating application is crowd counting in images, which is challenging due to complicated
backgrounds, severe occlusion, low-resolution images, perspective distortion, and different appearances caused by different camera tilt angles. Recent methods are based on crowd density estimation
[1], where each pixel in the crowd density map represents the fraction of people in that location, and
the crowd count is obtained by integrating over a region in the density map. The current state-of-the-art uses convolutional neural networks (CNN) to estimate the density maps [2–4]. Previous works
have also shown that using side information, e.g., the scene perspective, helps to improve crowd
counting accuracy [5, 6]. In particular, when extracting hand-crafted features (e.g., edge and texture
statistics) [5–9] use scene perspective normalization, where a "perspective weight" is applied at each
pixel location during feature extraction, to adjust for the scale of the object at that location.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: (top) changes in people's appearance due to camera angle, and the corresponding changes in a convolution filter; (bottom) the filter manifold as a function of the camera angle. Best viewed in color.
Figure 2: The adaptive convolutional layer with filter manifold network (FMN). The FMN uses the auxiliary input to generate the filter weights, which are then convolved with the input maps.
To handle
scale variations, typical CNN-based methods resize the input patch [2] based on the perspective
weight, or extract features at different scales via multiple columns [3] or a pyramid of input patches
[4]. However, incorporating other types of side information into the CNN is not as straightforward.
As a result, all the difficulties due to various contexts, including different backgrounds, occlusion,
perspective distortion and different appearances caused by different camera angles are entangled,
which may introduce an extra burden on the CNNs during training. One simple solution is to add an
extra image channel where each pixel holds the side information [10], which is equivalent to using
1st-layer filter bias terms that change with the side information. However, this may not be the most
effective solution when the side information is a high-level property with a complex relationship with
the image appearance (e.g., the camera angle).
Our solution in this paper is to disentangle the context variations explicitly in the CNN by modifying
the filter weights adaptively. We propose an adaptive CNN (ACNN) that uses side information
(e.g., the perspective weight) as an auxiliary input to adapt the CNN to different scene contexts
(e.g., appearance changes from high/low angle perspectives, and scale changes due to distance).
Specifically, we consider the filter weights in each convolutional layer as points on a low-dimensional
manifold, which is modeled using a sub-network where the side information is the input and the
filter weights are the outputs. The filter manifold is estimated during training, resulting in different
convolution filters for each scene context, which disentangles the context variations related to the
side information. In the ACNN, the convolutional layers focus only on those features most suitable
for the current context specified by the side information, as compared to traditional CNNs that use a
fixed set of filters over all contexts. In other words, the feature extractors are tuned for each context.
We test the effectiveness of ACNN at incorporating side information on 3 computer vision applications. First, we perform crowd counting from images using an ACNN with the camera parameters
(perspective value, or camera tilt angle and height) as side information. Using the camera parameters
as side information, ACNN can perform cross-scene counting without a fine-tuning stage. We collect
a new dataset covering a wide range of angles and heights, containing people from different viewpoints. Second, we use ACNN for recognition of digit images that are corrupted with salt-and-pepper
noise, where the noise level is the side information. Third, we apply ACNN to image deblurring,
where the blur kernel parameters are the side information. A single ACNN can be trained to deblur
images for any setting of the kernel parameters. In contrast, using a standard CNN would require
training a separate CNN for each combination of kernel parameters, which is costly if the set of
parameter combinations is large. In our experiments, we show that ACNN can more effectively use
the side information, as compared to traditional CNNs with a similar number of parameters: moving
parameters from static layers to adaptive layers yields stronger learning capability and adaptability.
The contributions of this paper are three-fold: 1) We propose a method to incorporate the side
information directly into CNN by using an adaptive convolutional layer whose weights are generated
via a filter manifold sub-network with side information as the input; 2) We test the efficacy of ACNN
on a variety of computer vision applications, including crowd counting, corrupted digit recognition,
and non-blind image deblurring, and show that ACNN is more effective than traditional CNNs with
2
similar number of parameters. 3) We collect a new crowd counting dataset covering a wide range of
viewpoints and its corresponding side information, i.e. camera tilt angle and camera height.
2 Related work
2.1 Adapting neural networks
The performance of a CNN is affected if the test set is not from the same data distribution as the
training set [2]. A typical approach to adapting a CNN to new data is to select a pre-trained CNN
model, e.g. AlexNet [11], VGG-net [12], or ResNet [13] trained on ImageNet, and then fine-tune
the model weights for the specific task. [2] adopts a similar strategy: train the model on the whole
dataset and then fine-tune using a subset of image patches that are similar to the test scene.
Another approach is to adapt the input data cube so that the extracted features and the subsequent
classifier/regressor are better matched. [14] proposes a trainable "Spatial Transformer" unit that
applies an image transformation to register the input image to a standard form before the convolutional
layer. The functional form of the image transformation must be known, and the transformation
parameters are estimated from the image. Because it operates directly on the image, [14] is limited to
2D image transformations, which work well for 2D planar surfaces in an image (e.g., text on a flat
surface), but cannot handle viewpoint changes of 3D objects (e.g. people). In contrast, our ACNN
changes the feature extraction layers based on the current 3D viewpoint, and does not require the
geometric transformation to be known.
Most related to our work are dynamic convolution [15] and dynamic filter networks [16], which use
the input image to dynamically generate the filter weights for convolution. However, their purpose
for dynamically generating filters is quite different from ours. [15, 16] focus on image prediction
tasks (e.g., predicting the next frame from the previous frames), and the dynamically-generated filters
are mainly used to transfer a pixel value in the input image to a new position in the output image
(e.g., predicting the movement of pixels between frames). These input-specific filters are suitable
for low-level tasks, i.e. the input and the output are both in the same space (e.g., images). But
for high-level tasks, dramatically changing features with respect to its input is not helpful for the
end-goal of classification or regression. In contrast, our purpose is to include side information into
supervised learning (regression and classification), by learning how the discriminative image features
and corresponding filters change with respect to the side information. Hence, in our ACNN, the filter
weights are generated from an auxiliary input corresponding to the side information.
HyperNetworks [17] use relaxed weight-sharing between layers/blocks, where layer weights are
generated from a low-dimensional linear manifold. This can improve the expressiveness of RNNs, by
changing the weights over time, or reduce the number of learnable parameters in CNNs, by sharing
weight bases across layers. Specifically, for CNNs, the weight manifold of the HyperNetwork is
shared across layers, and the inputs/embedding vectors of the HyperNetwork are independently
learned for every layer during training. The operation of ACNNs is orthogonal to HyperNetworks: in
ACNN, the weight manifold is trained independently for each layer, and the input/side information is
shared across layers. In addition, our goal is to incorporate the available side information to improve
the performance of the CNN models, which is not considered in [17].
Finally, one advantage of [14?17] is that no extra information or label is needed. However, this also
means they cannot effectively utilize the available side information, which is common in various
computer vision tasks and has been shown to be helpful for traditional hand-crafted features [5].
2.2 Crowd density maps
[1] proposes the concept of an object density map whose integral over any region equals the number
of objects in that region. The spatial distribution of the objects is preserved in the density map, which
also makes it useful for detection [18, 19] and tracking [20]. Most of the recent state-of-the-art object
counting algorithms adopt the density estimation approach [2–4, 8, 21]. CNN-based methods [2–4]
show strong cross-scene prediction capability, due to the learning capacity of CNNs. Specifically,
[3] uses a multi-column CNN with different receptive field sizes in order to encourage different
columns to capture features at different scales (without input scaling or explicit supervision), while
[4] uses a pyramid of input patches, each sent to separate sub-network, to consider multiple scales.
[2] introduces an extra fine-tuning stage so that the network can be better adapted to a new scene.
In contrast to [2, 3], we propose to use the existing side information (e.g. perspective weight) as an
input to adapt the convolutional layers to different scenes. With the adaptive convolutional layers,
only the discriminative features suitable for the current context are extracted. Our experiments show
that moving parameters from static layers to adaptive layers yields stronger learning capability.
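As a concrete illustration of the density-map idea from [1]: each annotated object location contributes a normalized Gaussian, so integrating the map over any region counts the objects in it. The Gaussian width and grid size below are illustrative assumptions, not values from any of the cited papers.

```python
import numpy as np

def density_map(points, shape, sigma=2.0):
    # Place a normalized Gaussian at each annotated location so that the
    # whole map integrates to the object count.
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    D = np.zeros(shape)
    for (x, y) in points:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        D += g / g.sum()
    return D

heads = [(10, 12), (30, 25), (50, 40)]  # hypothetical head annotations (x, y)
D = density_map(heads, (64, 64))
# Integrating (summing) over the whole map recovers the total count.
assert abs(D.sum() - len(heads)) < 1e-6
# Integrating over a sub-region counts only the objects inside it.
assert abs(D[:20, :20].sum() - 1.0) < 1e-2
```

A counting CNN is then trained to regress `D` from the image, and the predicted map is summed to obtain the count.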
2.3 Image deconvolution
Existing works [22–24] demonstrate that CNNs can be used for image deconvolution and restoration.
With non-blind deblurring, the blur kernel is known and the goal is to recover the original image.
[23] concatenate a deep deconvolution CNN and a denoising CNN to perform deblurring and artifact
removal. However, [23] requires a separate network to be trained for each blur kernel family and
kernel parameter. [24] trains a multi-layer perceptron to denoise images corrupted by additive white
Gaussian (AWG) noise. They incorporate the side information (AWG standard deviation) by simply
appending it to the vectorized image patch input. In this paper, we use the kernel parameter as an
auxiliary input, and train a single ACNN for a blur kernel family (for all its parameter values), rather
than for each parameter separately. During prediction, the "filter-manifold network" uses the auxiliary
input to generate the appropriate deblurring filters, without the need for additional training.
3 Adaptive CNN
In this section, we introduce the adaptive convolutional layer and the ACNN.
3.1 Adaptive convolutional layer
Consider a crowd image dataset containing different viewpoints of people, and we train a separate
CNN to predict the density map for each viewpoint. For two similar viewpoints, we expect that the
two trained CNNs have similar convolution filter weights, as a person's appearance varies gradually
with the viewpoint (see Fig. 1 top). Hence, as the viewpoint changes smoothly, the convolution
filter weights also change smoothly, and thus sweep a low-dimensional manifold within the high-dimensional space of filter weights (see Fig. 1 bottom).
Following this idea, we use an adaptive convolutional layer, where the convolution filter weights
are the outputs of a separate "filter-manifold network" (FMN, see Fig. 2). In the FMN, the side
information is an auxiliary input that feeds into fully-connected layers with increasing dimension
(similar to the decoder stage of an auto-encoder) with the final layer outputting the convolution filter
weights. The FMN output is reshaped into a 4D tensor of convolution filter weights (and bias), and
convolved with the input image. Note that in contrast to the traditional convolutional layer, whose
filter weights are fixed during the inference stage, the filter weights of an adaptive convolutional layer
change with respect to the auxiliary input. Formally, the adaptive convolutional layer is given by
h = f(x ∗ g(z; w)), where z is the auxiliary input, g(·; w) is the filter-manifold network with tunable
weights w, x is the input image, f(·) is the activation function, and ∗ denotes convolution.1
Training the adaptive convolutional layer involves updating the FMN weights w, thus learning the
filter manifold as a function of the auxiliary input. During inference, the FMN interpolates along the
filter manifold using the auxiliary input, thus adapting the filter weights of the convolutional layer to
the current context. Hence adaptation does not require fine-tuning or transfer learning.
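The mechanism above can be sketched in a few lines of numpy (a single filter, scalar side information, and one ReLU hidden layer in the FMN; all sizes here are illustrative, not the paper's):

```python
import numpy as np

def fmn(z, w1, b1, w2, b2):
    """Filter-manifold network g(z; w): maps the scalar side information z
    to a flat vector of convolution filter weights."""
    h = np.maximum(0.0, w1 * z + b1)   # hidden layer with ReLU
    return w2 @ h + b2                 # linear output layer = filter weights

def adaptive_conv(x, z, params, ksize=3):
    """Adaptive convolution h = f(x * g(z; w)): the filter is regenerated
    from z on every forward pass, then cross-correlated (valid mode) with x."""
    w1, b1, w2, b2 = params
    k = fmn(z, w1, b1, w2, b2).reshape(ksize, ksize)
    H, W = x.shape
    out = np.empty((H - ksize + 1, W - ksize + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + ksize, j:j + ksize] * k)
    return np.maximum(0.0, out)        # activation f(.)
```

Training updates only the FMN weights (w1, b1, w2, b2); at inference, changing z moves along the learned filter manifold without any retraining.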
3.2 Adaptive CNN for crowd counting
We next introduce the ACNN for crowd counting. Density map estimation is not as high-level a
task as recognition. Since the upper convolutional layers extract more abstract features, which are
not that helpful according to both traditional [1, 5] and deep methods [2, 3], we will not use many
convolutional layers. Fig. 3 shows our ACNN for density map estimation using two convolutional
stages. The input is an image patch, while the output is the crowd density at the center of the patch.
All the convolutional layers use the ReLU activation, and each convolutional layer is followed by a
local response normalization layer [11] and a max pooling layer. The auxiliary input for the FMN is
the perspective value for the image patch in the scene, or the camera tilt angle and camera height.
For the fully-connected stage, we use multi-task learning to improve the training of the feature
extractors [2, 25?27]. In particular, the main regression task predicts the crowd density value, while
an auxiliary classification task predicts the number of people in the image patch.
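A hedged sketch of such a joint objective (the weighting alpha and the exact loss forms are assumptions for illustration; the paper does not specify them here):

```python
import math

def multitask_loss(density_pred, density_gt, class_logits, count_class, alpha=0.1):
    """Squared error on the predicted density value plus a weighted softmax
    cross-entropy on the people-count class of the patch (alpha assumed)."""
    reg = (density_pred - density_gt) ** 2
    m = max(class_logits)  # stable log-sum-exp for the softmax
    logsumexp = m + math.log(sum(math.exp(l - m) for l in class_logits))
    ce = logsumexp - class_logits[count_class]
    return reg + alpha * ce
```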
1
To reduce clutter, here we do not show the bias term for the convolution.

Layer   CNN              ACNN
FMN1    -                34,572 (832)
conv1   1,664 (64)       0 (32)
FMN2    -                1,051,372 (25,632)
conv2   102,464 (64)     0 (32)
FC1     2,654,720 (512)  1,327,616 (512)
FC2     41,553 (81)      41,553 (81)
FC3     82 (1)           82 (1)
FC4     419,985 (81)     210,033 (81)
FC5     1,312 (15)       1,312 (15)
total   3,221,780        2,666,540

Table 1: Comparison of number of parameters in each layer of the ACNN in Fig. 3 and an equivalent
CNN. The number in parenthesis is the number of convolution filters, or the number of outputs of the
FMN/fully-connected (FC) layer.

Figure 3: The architecture of our ACNN with adaptive convolutional layers for crowd density
estimation. [Diagram: the input patch (1x33x33) passes through conv1 (32x17x17) and conv2
(32x9x9), then FC1 (512), FC2 (81), and FC3 (1) to predict the density, with an auxiliary
classification task through FC4 (81) and FC5 (15); the auxiliary input, the perspective value (1),
feeds FMN1 and FMN2 (hidden sizes 10 and 40), which output the filter weights (32x1x5x5)+32 and
(32x32x5x5)+32 for conv1 and conv2.]

The adaptive convolutional layer has more parameters than a standard convolutional layer with the
same number of filters and the same filter spatial size: the extra parameters are in the layers of the
FMN. However, since the filters themselves adapt to the scene context, an ACNN can be effective
with fewer feature channels (from 64 to 32), and the parameter savings can be moved to the FMN
(e.g. see Table 1). Hence, if side information is available, a standard CNN can be converted into
an ACNN with a similar number of parameters, but with better learning capability. We verify this
property in the experiments.
Since most of the parameters of the FMN are in its last layer, the FMN has O(LF ) parameters, where
F is the number of filter parameters in the convolution layer and L is the size of the last hidden
layer of the FMN. Hence, for a large number of channels (e.g., 128 in, 512 out), the FMN will be
extremely large. One way to handle more channels is to reduce the number of parameters in the FMN,
by assuming that sub-blocks in the final weight matrix of the FMN form a manifold, which can be
modeled by another FMN (i.e., an FMN-in-FMN). Here, the auxiliary inputs for the sub-block FMNs
are generated from another network whose input is the original auxiliary input.
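This O(LF) behaviour can be checked against Table 1 with a one-line parameter counter (layer sizes taken from Fig. 3):

```python
def mlp_params(layer_sizes):
    """Total parameters of a fully-connected network:
    fan_in * fan_out weights plus fan_out biases per layer."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# FMN2 of Fig. 3: perspective value (1) -> 10 -> 40 -> F filter parameters,
# where F = 32 filters of size 32x5x5 plus 32 biases = 25,632.
F = 32 * 32 * 5 * 5 + 32
print(mlp_params([1, 10, 40, F]))  # 1,051,372, matching Table 1
print(40 * F + F)                  # 1,050,912 of these sit in the last layer
```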
3.3 Adaptive CNN for image deconvolution
Our ACNN for image deconvolution is based on the deconvolution CNN proposed in [23]. The
ACNN uses the kernel blur parameter (e.g., radius of the disk kernel) as the side information, and
consists of three adaptive convolutional layers (see Fig. 4). The ACNN uses 12 filter channels in the
first 2 layers, which yields an architecture with similar number of parameters as the standard CNN
with 38 filters in [23]. The ACNN consists of two long 1D adaptive convolutional layers: twelve
121×1 vertical 1D filters, followed by twelve 1×121 horizontal 1D filters. The result is passed
through a 1×1 adaptive convolutional layer to fuse all the feature maps. The input is the blurred
image and the output target is the original image. We use leaky ReLU activations [28] for the first
two convolutional layers, and sigmoid activation for the last layer to produce a bounded output as
image. Batch normalization layers [29] are used after the convolutional layers.
During prediction, the FMN uses kernel parameter auxiliary input to generate the appropriate
deblurring filters, without the need for additional training. Hence, the two advantages of using ACNN
are: 1) only one network is needed for each blur kernel family, which is useful for kernels with too
many parameter combinations to enumerate; 2) by interpolating along the filter manifold, ACNN can
work on kernel parameters unseen in the training set.
4 Experiments
To show their potential, we evaluate ACNNs on three tasks: crowd counting, digit recognition with
salt-and-pepper noise, and image deconvolution (deblurring). In order to make fair comparisons,
we compare our ACNN with standard CNNs using traditional convolutional layers, but increase the
number of filter channels in the CNN so that they have similar total number of parameters as the
ACNN. We also test a CNN with side information included as an extra input channel(s) (denoted as
CNN-X), where the side information is replicated in each pixel of the extra channel, as in [10].
For ACNN, each adaptive convolution layer has its own FMN, which is a standard MLP with two
hidden layers and a linear output layer. The size of the FMN output layer is the same as the number
of filter parameters in its associated convolution layer, and the size of the last hidden layer (e.g., 40 in
Fig. 3) was selected so that the ACNN and baseline CNN have roughly equal number of parameters.
Method                  MAE
MESA [1]                1.70
Regression forest [21]  1.70
RR [8]                  1.24
CNN-patch+RR [2]        1.70
MCNN [3]                1.32
CNN                     1.26
CNN-X                   1.20
CNN (normalized patch)  1.26
ACNN-v1                 1.23
ACNN-v2                 1.14
ACNN-v3                 0.96

Table 2: Comparison of mean absolute error (MAE) for counting with crowd density estimation
methods on the UCSD "max" split.

Figure 4: ACNN for image deconvolution. The auxiliary input is the radius r of the disk blurring
kernel. [Diagram: the input image (3x184x184) passes through conv1 (12x184x184) and conv2
(12x184x184) to the output image (3x184x184); the kernel radius feeds FMN1, FMN2, and FMN3,
which output the filter weights (12x3x121x1)+12, (12x12x1x121)+12, and (3x12x1x1).]
Method    R1    R2 (unseen)  R3    Avg.
CNN       1.83  1.06         0.62  1.17
CNN-X     1.33  1.18         0.61  1.04
ACNN-v1   1.47  0.95         0.59  1.00
ACNN-v2   1.22  0.91         0.55  0.89
ACNN-v3   1.15  1.02         0.63  0.93

Table 3: Comparison of MAE on 3 bar regions on the UCSD "max" split.

Figure 5: UCSD dataset with 3 bar regions: R1 (6.7-13.2), R2 (13.2-17.7), R3 (17.6-22.1). The
range of perspective values is shown in parentheses.
4.1 Crowd counting experiments
For crowd counting, we use two crowd counting datasets: the popular UCSD crowd counting dataset,
and our newly collected dataset with camera tilt angle and camera height as side information.
4.1.1 UCSD dataset
Refer to Fig. 3 for the ACNN architecture used for the UCSD dataset. The image size is 238×158, and
33×33 patches are used. We test several variations of the ACNN: v1) only the first convolutional layer
is adaptive, with 64 filters for both of the convolutional layers; v2) only the last convolutional layer is
adaptive, with 64 filters for the first convolutional layer and 30 filters for its second convolutional
layer; v3) all the convolutional layers are adaptive, with 32 filters for all layers, which provides
maximum adaptability. The side information (auxiliary input) used for the FMN is the perspective
value. For comparison, we also test a plain CNN and CNN-X with a similar architecture but using
standard convolutional layers with 64 filters in each layer, and another plain CNN with input patch
size normalization introduced in [2] (i.e., resizing larger patches for near-camera regions). The
numbers of parameters are shown in Table 1. The count predictions in the region-of-interest (ROI)
are evaluated using the mean absolute error (MAE) between the predicted count and the ground-truth.
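This evaluation can be sketched as follows (the ROI mask and the maps here are toy data; predicted count = integral of the density map over the ROI):

```python
import numpy as np

def roi_count(density_map, roi_mask):
    """Predicted count = integral (sum) of the density map over the ROI."""
    return float(density_map[roi_mask].sum())

def mae(pred_counts, gt_counts):
    """Mean absolute error between predicted and ground-truth counts per frame."""
    pred, gt = np.asarray(pred_counts, float), np.asarray(gt_counts, float)
    return float(np.mean(np.abs(pred - gt)))
```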
We first use the widely adopted protocol of the "max" split, which uses 160 frames (frames 601:5:1400)
for training, and the remaining parts (frames 1:600, 1401:2000) for testing. The results are listed in
Table 2. Our ACNN-v3, using two adaptive convolutional layers, offers maximum adaptability and
has the lowest error (0.96 MAE), compared to the equivalent plain CNN and the reference methods.
While CNN-X reduces the error compared to CNN, CNN-X still has larger error than ACNN. This
demonstrates that the FMN of ACNN is better at incorporating the side information. In addition, using
simple input patch size normalization does not improve the performance as effectively as ACNN.
Examples of the learned filter manifolds are shown in Fig. 6. We also tested using 1 hidden layer in
the FMN, and obtained worse errors for each version of ACNN (1.74, 1.15, and 1.20, respectively).
Using only one hidden layer limits the ability to well model the filter manifold.
In the next experiment we test the effect of the side information within the same scene. The ROI of
UCSD is further divided into three bar regions of the same height (see Fig. 5). The models are trained
only on R1 and R3 from the training set, and tested on all three regions of the test set separately.
The results are listed in Table 3. After disentangling the variations due to perspective value, the
performance on R1 has been significantly improved because the ACNN uses the context information
to distinguish it from the other regions. Perspective values within R2 are completely unseen during
training, but our ACNN still gives a comparable or slightly better performance than CNN, which
demonstrates that the FMN can smoothly interpolate along the filter manifold.
Method         MAE
LBP+RR [2, 3]  23.97
MCNN [3]       8.80
CNN            8.72
CNN-X (AH)     9.05
CNN-X (AHP)    8.45
ACNN (AH)      8.35
ACNN (AHP)     8.00

Table 4: Counting results on CityUHK-X, the new counting dataset with side information.

Figure 6: Examples of learned filter manifolds for the 2nd convolutional layer. Each row shows one
filter as a function of the auxiliary input (perspective weight), shown at the top. Both the amplitude
and the patterns change, which shows the adaptability of the ACNN.
Figure 7: Examples of the predicted density map by our ACNN on the new CityUHK-X dataset. The
extrinsic parameters and predicted count (absolute error in parenthesis) are shown above the images:
-20.4°, 6.1m, 92.44 (1.57); -29.8°, 4.9m, 18.22 (2.47); -39.8°, 6.7m, 28.99 (0.66); -55.2°, 11.6m,
21.71 (1.24).
4.1.2 CityUHK-X: new crowd dataset with extrinsic camera parameters
The new crowd dataset "CityUHK-X" contains 55 scenes (3,191 images in total), covering a camera
tilt angle range of [-10°, -65°] and a height range of [2.2, 16.0] meters. The training set consists of
43 scenes (2,503 images; 78,592 people), and the test set comprises 12 scenes (688 images; 28,191
people). More information and demo images can be found in the supplemental. The resolution
of the new dataset is 512×384, and 65×65 patches are used. The ACNN for this dataset contains
three convolutional and max-pooling layers, resulting in the same output feature map size after the
convolutional stage as in the ACNN for UCSD. The three adaptive convolutional layers use 40, 40
and 32 filters of size 5×5 each. The side information (auxiliary inputs) are camera tilt angle and
camera height (denoted as "AH"), and the camera tilt angle, camera height, and perspective value
(denoted as "AHP"). The baseline plain CNN and CNN-X use 64 filters of size 5×5 for all three
convolutional layers.
Results for ACNN, the plain CNN and CNN-X, and multi-column CNN (MCNN) [3] are presented
in Table 4. The plain CNN and MCNN [3], which do not use side information, obtain similar results.
Using side information with ACNN decreases the MAE, compared to the plain CNN and CNN-X,
with more side information improving the results (AHP vs. AH). Fig. 7 presents example results.
4.2 Digit recognition with salt-and-pepper noise
In this experiment, the task is to recognize handwritten digits that are corrupted with different levels of
salt-and-pepper noise. The side information is the noise level. We use the MNIST handwritten digits
dataset, which contains 60,000 training and 10,000 test examples. We randomly add salt-and-pepper
noise (half salt and half pepper), on the MNIST images. Nine noise levels are used on the original
MNIST training set from 0% to 80% with an interval of 10%, with the same number of images for
each noise level, resulting in a training set of 540,000 samples. Separate validation and test sets, both
containing 90,000 samples, are generated from the original MNIST test set.
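The corruption protocol described above can be sketched as follows (a minimal version, assuming pixel values in [0, 1]):

```python
import numpy as np

def add_salt_and_pepper(img, level, rng):
    """Corrupt a fraction `level` of the pixels: half are set to salt (1.0)
    and half to pepper (0.0), at positions chosen uniformly without replacement."""
    out = img.copy()
    idx = rng.choice(img.size, size=int(round(level * img.size)), replace=False)
    flat = out.reshape(-1)           # view into `out`
    half = len(idx) // 2
    flat[idx[:half]] = 1.0           # salt
    flat[idx[half:]] = 0.0           # pepper
    return out
```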
We test our ACNN with the noise level as the side information, as well as the plain CNN and CNN-X.
We consider two architectures: two or four convolutional layers (2-conv or 4-conv) followed by
Architecture   No. Conv. Filters   Error Rate     No. Parameters
CNN 2-conv     32 + 32             8.66%          113,386
CNN-X 2-conv   32 + 32             8.49% (8.60%)  113,674
ACNN 2-conv    32 + 26             7.55% (7.64%)  105,712
CNN 4-conv     32 + 32 + 32 + 32   3.58%          131,882
CNN-X 4-conv   32 + 32 + 32 + 32   3.57% (3.64%)  132,170
ACNN 4-conv    32 + 32 + 32 + 26   2.92% (2.97%)  124,208

Table 5: Digit recognition with salt-and-pepper noise, where the noise level is the side information.
The number of filters for each convolutional layer and total number of parameters are listed. In the
Error Rate column, the parenthesis shows the error when using the estimated side information rather
than the ground-truth.
                training set r    r=3    r=5    r=7    r=9    r=11   all    seen r  unseen r
blurred image   -                 23.42  21.90  20.96  20.28  19.74  21.26  -       -
CNN [23]        {3, 7, 11}        +0.55  -0.25  +0.49  +0.69  +0.56  +0.41  +0.53   +0.22
CNN-X           {3, 7, 11}        +0.88  -0.70  +1.65  +0.47  +1.86  +0.83  +1.46   -0.12
ACNN            {3, 7, 11}        +0.77  +0.06  +1.17  +0.94  +1.28  +0.84  +1.07   +0.50
CNN-X (blind)   {3, 7, 11}        +0.77  -0.77  +1.23  +0.25  +0.98  +0.49  +0.99   -0.26
ACNN (blind)    {3, 7, 11}        +0.76  -0.04  +0.70  +0.80  +1.13  +0.67  +0.86   +0.38
CNN [23]        {3, 5, 7, 9, 11}  +0.28  +0.45  +0.62  +0.86  +0.59  +0.56  +0.56   -
CNN-X           {3, 5, 7, 9, 11}  +0.99  +1.38  +1.53  +1.60  +1.55  +1.41  +1.41   -
ACNN            {3, 5, 7, 9, 11}  +0.71  +0.92  +1.00  +1.28  +1.22  +1.03  +1.03   -
CNN-X (blind)   {3, 5, 7, 9, 11}  +0.91  +1.06  +0.81  +1.12  +1.24  +1.03  +1.03   -
ACNN (blind)    {3, 5, 7, 9, 11}  +0.66  +0.79  +0.64  +1.12  +1.04  +0.85  +0.85   -

Table 6: PSNRs for image deconvolution experiments. The PSNR for the blurred input image is in the
first row, while the other rows are the change in PSNR relative to that of the blurred input image.
Blind means the network takes the estimated auxiliary value (disk radius) as the side information.
two fully-connected (FC) layers.2 For ACNN, only the 1st convolutional layer is adaptive. All
convolutional layers use 3×3 filters. All networks use the same configuration for the FC layers, one
128-neuron layer and one 10-neuron layer. ReLU activation is used for all layers, except the final
output layer which uses soft-max. Max pooling is used after each convolutional layer for the 2-conv
network, or after the 2nd and 4th convolutional layers for the 4-conv network.
The classification error rates are listed in Table 5. Generally, adding side information as extra
input channel (CNN-X) decreases the error, but the benefit diminishes as the baseline performance
increases: CNN-X 4-conv only decreases the error rate by 0.01% compared with CNN. Using ACNN
to incorporate the side information can improve the performance more significantly. In particular, for
ACNN 2-conv, the error rate decreases 0.94% (11% relatively) from 8.49% to 7.55%, while the error
rate decreases 0.65% (18% relatively) from 3.57% to 2.92% for ACNN 4-conv.
We also tested the ACNN when the noise level is unknown: the noise level is estimated from the
image, and then passed to the ACNN. To this end, a 4-layer CNN (2 conv. layers, 1 max-pooling layer
and 2 FC layers) is trained to predict the noise level from the input image. The error rate increases
slightly when using the estimated noise level (e.g., by 0.05% for the ACNN 4-conv, see Table 5).
More detailed setting of the networks can be found in the supplemental.
4.3 Image deconvolution
In the final experiment, we use ACNN for image deconvolution (deblurring) where the kernel blur
parameter is the side information. We test on the Flickr8k [31] dataset, and randomly select 5000
images for training, 1400 images for validation, and another 1600 images for testing. The images
were blurred uniformly using a disk kernel, and then corrupted with additive Gaussian noise (AWG)
and JPEG compression as in [23], which is the current state-of-the-art for non-blind deconvolution
using deep learning. We train the models with images blurred with different sets of kernel radii
r ∈ {3, 5, 7, 9, 11}. The test set consists of images blurred with all r ∈ {3, 5, 7, 9, 11}. The
evaluation is based on the peak signal-to-noise ratio (PSNR) between the deconvolved image and the
original image, relative to the PSNR of the blurred image.
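This evaluation pipeline can be sketched as below; the disk kernel is a common definition and the paper's exact construction may differ slightly, and PSNR uses the standard formula for images in [0, 1]:

```python
import numpy as np

def disk_kernel(r):
    """Normalized disk blur kernel of integer radius r."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = ((x ** 2 + y ** 2) <= r ** 2).astype(float)
    return k / k.sum()

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between an image and the reference."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Table 6 reports the change relative to the blurred input:
# gain = psnr(deconvolved, original) - psnr(blurred, original)
```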
The results are shown in Table 6 using different sets of radii for the training set. First, when trained
on the full training set, ACNN almost doubles the increase in PSNR, compared to the CNN (+1.03dB
vs. +0.56dB). Next, we consider a reduced training set with radii r ∈ {3, 7, 11}, and ACNN again
doubles the increase in PSNR (+0.84dB vs. +0.41dB). The performance of ACNN on the unseen
radii r ∈ {5, 9} is better than CNN, which demonstrates the capability of ACNN to interpolate along
the filter manifold for unseen auxiliary inputs.

2
On the clean MNIST dataset, the 2-conv and 4-conv CNN architectures achieve 0.81% and 0.69% error,
while the current state-of-the-art is about 0.23% error [30].

Interestingly, CNN-X has higher PSNR than ACNN
on seen radii, but lower PSNR on unseen radii. CNN-X cannot well handle interpolation between
unseen aux inputs, which shows the advantage of explicitly modeling the filter manifold.
We also test CNN-X and ACNN for blind deconvolution, where we estimate the kernel radius using
manually-crafted features and random forest regression (see supplemental). For the blind task, the
PSNR drops for CNN-X (0.38 on r ∈ {3, 5, 7, 9, 11} and 0.34 on r ∈ {3, 7, 11}) are larger than
ACNN (0.18 and 0.17), which means CNN-X is more sensitive to the auxiliary input.
Example learned filters are presented in Fig. 8, and Fig. 9 presents examples of deblurred images.
Deconvolved images using CNN are overly-smoothed since it treats images blurred by all the kernels
uniformly. In contrast, the ACNN result has more details and higher PSNR.
On this task, CNN-X performs better than ACNN on the seen radii, most likely because the relationship between the side information (disk radius) and the main input (sharp image) is not complicated
and deblurring is a low-level task. Hence, incorporating the side information directly into the filtering
calculations (as an extra channel) is a viable solution.3 In contrast, for the crowd counting and
corrupted digit recognition tasks, the relationship between the side information (camera angle/height
or noise level) and the main input is less straightforward and not deterministic, and hence the more
complex FMN is required to properly adapt the filters. Thus, the adaptive convolutions are not universally applicable, and CNN-X could be used in some situations where there is a simple relationship
between the auxiliary input and the desired filter output.
Figure 8: Two examples of filter manifolds for image deconvolution (curves shown for aux = 3, 5, 7,
9, and 11). The y-axis is the filter weight, and the x-axis is the 1-D filter parameter location. The
auxiliary input is the disk kernel radius. Both the amplitude and the frequency can be adapted.
Figure 9: Image deconvolution example: (a) original image (target); (b) blurred image with disk
radius of 7 (input, PSNR=24.34); deconvolved images using (c) CNN [23] (PSNR=25.30) and (d) our
ACNN (PSNR=26.04).
5 Conclusion
In this paper, we propose an adaptive convolutional neural network (ACNN), which employs the
available side information as an auxiliary input to adapt the convolution filter weights. The ACNN
can disentangle variations related to the side information, and extract features related to the current
context. We apply ACNN to three computer vision applications: crowd counting using either the
camera angle/height and perspective weight as side information, corrupted digit recognition using
the noise level as side information, and image deconvolution using the kernel parameter as side
information. The experiments show that ACNN can better incorporate high-level side information to
improve performance, as compared to using simple methods such as including the side information
as an extra input channel.
The placement of the adaptive convolution layers is important, and should consider the relationship
between the image content and the aux input, i.e., how the image contents changes with respect to the
auxiliary input. For example, for counting, the auxiliary input indicates the amount of perspective
distortion, which geometrically transforms the people's appearances, and thus adapting the 2nd layer
is more helpful since changes in object configuration are reflected in mid-level features. In contrast,
salt-and-pepper-noise has a low-level (local) effect on the image, and thus adapting the first layer,
corresponding to low-level features, is sufficient. How to select the appropriate convolution layers for
adaptation is interesting future work.
3
The extra channel is equivalent to using an adaptive bias term for each filter in the 1st convolutional layer.
Acknowledgments
The work described in this paper was supported by a grant from the Research Grants Council of
the Hong Kong Special Administrative Region, China (Project No. [T32-101/15-R]), and by a
Strategic Research Grant from City University of Hong Kong (Project No. 7004682). We gratefully
acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for
this research.
References
[1] V. Lempitsky and A. Zisserman, "Learning To Count Objects in Images," in NIPS, 2010.
[2] C. Zhang, H. Li, X. Wang, and X. Yang, "Cross-scene Crowd Counting via Deep Convolutional Neural
Networks," in CVPR, 2015.
[3] Y. Zhang, D. Zhou, S. Chen, S. Gao, and Y. Ma, "Single-Image Crowd Counting via Multi-Column
Convolutional Neural Network," in CVPR, 2016.
[4] D. Onoro-Rubio and R. J. López-Sastre, "Towards perspective-free object counting with deep learning," in
ECCV, 2016.
[5] A. B. Chan, Z.-S. J. Liang, and N. Vasconcelos, "Privacy preserving crowd monitoring: Counting people
without people models or tracking," in CVPR, 2008, pp. 1-7.
[6] A. B. Chan and N. Vasconcelos, "Counting people with low-level features and bayesian regression," IEEE
Trans. Image Process., 2012.
[7] A. B. Chan and N. Vasconcelos, "Bayesian poisson regression for crowd counting," in ICCV, 2009.
[8] C. Arteta, V. Lempitsky, J. A. Noble, and A. Zisserman, "Interactive Object Counting," in ECCV, 2014.
[9] H. Idrees, I. Saleemi, C. Seibert, and M. Shah, "Multi-source multi-scale counting in extremely dense
crowd images," in CVPR, 2013.
[10] M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, "Deep joint demosaicking and denoising," ACM
Transactions on Graphics (TOG), 2016.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural
networks," in NIPS, 2012.
[12] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition,"
in ICLR, 2015.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in CVPR, 2016.
[14] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, "Spatial transformer networks," in NIPS,
2015, pp. 2017-2025.
[15] B. Klein, L. Wolf, and Y. Afek, "A Dynamic Convolutional Layer for short range weather prediction," in
CVPR, 2015.
[16] B. De Brabandere, X. Jia, T. Tuytelaars, and L. Van Gool, "Dynamic filter networks," in NIPS, 2016.
[17] D. Ha, A. Dai, and Q. V. Le, "HyperNetworks," in ICLR, 2017.
[18] Z. Ma, L. Yu, and A. B. Chan, "Small Instance Detection by Integer Programming on Object Density
Maps," in CVPR, 2015.
[19] D. Kang, Z. Ma, and A. B. Chan, "Beyond counting: Comparisons of density maps for crowd analysis
tasks - counting, detection, and tracking," arXiv preprint arXiv:1705.10118, 2017.
[20] M. Rodriguez, I. Laptev, J. Sivic, and J.-Y. Audibert, "Density-aware person detection and tracking in
crowds," in ICCV, 2011.
[21] L. Fiaschi, R. Nair, U. Koethe, and F. A. Hamprecht, "Learning to Count with Regression Forest and
Structured Labels," in ICPR, 2012.
[22] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or
rain," in ICCV, 2013.
[23] L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep Convolutional Neural Network for Image Deconvolution," in
NIPS, 2014.
[24] H. C. Burger, C. J. Schuler, and S. Harmeling, "Image denoising: Can plain neural networks compete with
BM3D?" in CVPR, 2012.
[25] S. Li, Z.-Q. Liu, and A. B. Chan, "Heterogeneous Multi-task Learning for Human Pose Estimation with
Deep Convolutional Neural Network," IJCV, 2015.
[26] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, "Facial Landmark Detection by Deep Multi-task Learning," in
ECCV, 2014.
[27] Y. Sun, X. Wang, and X. Tang, "Deep Learning Face Representation by Joint Identification-Verification,"
in NIPS, 2014.
[28] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier Nonlinearities Improve Neural Network Acoustic
Models," in ICML, 2013.
[29] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift," in ICML, 2015.
[30] D. Ciresan, U. Meier, and J. Schmidhuber, "Multi-column Deep Neural Networks for Image Classification,"
in CVPR, 2012, pp. 3642-3649.
[31] M. Hodosh, P. Young, and J. Hockenmaier, "Framing image description as a ranking task: Data, models
and evaluation metrics," in Journal of Artificial Intelligence Research, 2013.
Conic Scan-and-Cover algorithms for
nonparametric topic modeling
Mikhail Yurochkin
Department of Statistics
University of Michigan
[email protected]
Aritra Guha
Department of Statistics
University of Michigan
[email protected]
XuanLong Nguyen
Department of Statistics
University of Michigan
[email protected]
Abstract
We propose new algorithms for topic modeling when the number of topics is
unknown. Our approach relies on an analysis of the concentration of mass and
angular geometry of the topic simplex, a convex polytope constructed by taking
the convex hull of vertices representing the latent topics. Our algorithms are shown
in practice to have accuracy comparable to a Gibbs sampler in terms of topic
estimation, which requires the number of topics be given. Moreover, they are one
of the fastest among several state of the art parametric techniques.1 Statistical
consistency of our estimator is established under some conditions.
1 Introduction
A well-known challenge associated with topic modeling inference can be succinctly summed up
by the statement that sampling based approaches may be accurate but computationally very slow,
e.g., Pritchard et al. (2000); Griffiths & Steyvers (2004), while the variational inference approaches
are faster but their estimates may be inaccurate, e.g., Blei et al. (2003); Hoffman et al. (2013). For
nonparametric topic inference, i.e., when the number of topics is a priori unknown, the problem
becomes more acute. The Hierarchical Dirichlet Process model (Teh et al., 2006) is an elegant
Bayesian nonparametric approach which allows for the number of topics to grow with data size, but
its sampling based inference is much more inefficient compared to the parametric counterpart. As
pointed out by Yurochkin & Nguyen (2016), the root of the inefficiency can be traced to the need for
approximating the posterior distributions of the latent variables representing the topic labels; these are not geometrically intrinsic, as any permutation of the labels yields the same likelihood.
A promising approach in addressing the aforementioned challenges is to take a convex geometric
perspective, where topic learning and inference may be formulated as a convex geometric problem: the
observed documents correspond to points randomly drawn from a topic polytope, a convex set whose
vertices represent the topics to be inferred. This perspective has been adopted to establish posterior
contraction behavior of the topic polytope in both theory and practice (Nguyen, 2015; Tang et al.,
2014). A method for topic estimation that exploits convex geometry, the Geometric Dirichlet Means
(GDM) algorithm, was proposed by Yurochkin & Nguyen (2016), which demonstrates attractive
behaviors both in terms of running time and estimation accuracy. In this paper we shall continue to
amplify this viewpoint to address nonparametric topic modeling, a setting in which the number of
topics is unknown, as is the distribution inside the topic polytope (in some situations).
We will propose algorithms for topic estimation by explicitly accounting for the concentration of
mass and angular geometry of the topic polytope, typically a simplex in topic modeling applications.
The geometric intuition is fairly clear: each vertex of the topic simplex can be identified by a ray
emanating from its center (to be defined formally), while the concentration of mass can be quantified
1 Code is available at https://github.com/moonfolk/Geometric-Topic-Modeling.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
for the cones hinging on the apex positioned at the center. Such cones can be rotated around the center to scan for high density regions inside the topic simplex; under mild conditions such cones can be constructed efficiently to recover both the number of vertices and their estimates.
We also mention another fruitful approach, which casts topic estimation as a matrix factorization
problem (Deerwester et al., 1990; Xu et al., 2003; Anandkumar et al., 2012; Arora et al., 2012). A
notable recent algorithm coming from the matrix factorization perspective is RecoverKL (Arora et al.,
2012), which solves non-negative matrix factorization (NMF) efficiently under assumptions on the
existence of so-called anchor words. RecoverKL remains a parametric technique; we will extend it to a nonparametric setting and show that the anchor word assumption appears to limit the number of topics one can efficiently learn.
Our paper is organized as follows. In Section 2 we discuss recent developments in geometric topic
modeling and introduce our approach; Sections 3 and 4 deliver the contributions outlined above;
Section 5 demonstrates experimental performance; we conclude with a discussion in Section 6.
2 Geometric topic modeling
Background and related work In this section we present the convex geometry of the Latent
Dirichlet Allocation (LDA) model of Blei et al. (2003), along with related theoretical and algorithmic
results that motivate our work. Let $V$ be the vocabulary size and $\Delta^{V-1}$ the corresponding vocabulary probability simplex. Sample $K$ topics (i.e., distributions on words) $\beta_k \sim \mathrm{Dir}_V(\eta)$, $k = 1, \ldots, K$, where $\eta \in \mathbb{R}_+^V$. Next, sample $M$ document-word probabilities $p_m$ residing in the topic simplex $B := \mathrm{Conv}(\beta_1, \ldots, \beta_K)$ (cf. Nguyen (2015)), by first generating their barycentric coordinates (i.e., topic proportions) $\theta_m \sim \mathrm{Dir}_K(\alpha)$ and then setting $p_m := \sum_k \beta_k \theta_{mk}$ for $m = 1, \ldots, M$ and $\alpha \in \mathbb{R}_+^K$. Finally, word counts of the $m$-th document can be sampled $w_m \sim \mathrm{Mult}(p_m, N_m)$, where $N_m \in \mathbb{N}$ is the number of words in document $m$. The above model is equivalent to the LDA when individual word-to-topic label assignments are marginalized out.
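The generative process above can be sketched in a few lines of pure Python. This is a minimal illustration, not the authors' code; the hyperparameter values `eta = alpha = 0.1` are placeholders:

```python
import random

def sample_dirichlet(alpha):
    # Dirichlet draw via normalized Gamma variates.
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def generate_corpus(K, V, M, N, eta=0.1, alpha=0.1):
    """Sample topics beta_k, proportions theta_m, and word counts w_m ~ Mult(p_m, N)."""
    topics = [sample_dirichlet([eta] * V) for _ in range(K)]   # beta_k ~ Dir_V(eta)
    docs = []
    for _ in range(M):
        theta = sample_dirichlet([alpha] * K)                  # theta_m ~ Dir_K(alpha)
        p = [sum(theta[k] * topics[k][v] for k in range(K)) for v in range(V)]
        counts = [0] * V
        for _ in range(N):                                     # N multinomial draws from p
            r, acc, idx = random.random(), 0.0, V - 1
            for v, pv in enumerate(p):
                acc += pv
                if r <= acc:
                    idx = v
                    break
            counts[idx] += 1
        docs.append(counts)
    return topics, docs
```

Normalizing each count vector by N yields the plug-in estimates of the document-word probabilities used later in Section 4.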
Nguyen (2015) established posterior contraction rates of the topic simplex, provided that $\alpha_k \leq 1\ \forall k$ and either the number of topics $K$ is known or the topics are sufficiently separated in terms of the Euclidean distance. Yurochkin & Nguyen (2016) devised an estimate for $B$, taken to be a fixed unknown quantity, by formulating a geometric objective function, which is minimized when the topic simplex $B$ is close to the normalized documents $\bar{w}_m := w_m / N_m$. They showed that the estimation of topic proportions $\theta_m$ given $B$ simply reduces to taking barycentric coordinates of the projection of $\bar{w}_m$ onto $B$. To estimate $B$ given $K$, they proposed a Geometric Dirichlet Means (GDM) algorithm, which operated by performing a k-means clustering on the normalized documents, followed by a geometric correction for the cluster centroids. The resulting algorithm is remarkably fast and accurate, supporting the potential of the geometric approach. The GDM is not applicable when $K$ is unknown, but it provides a motivation which our approach is built on.
The Conic Scan-and-Cover approach To enable the inference of B when K is not known, we
need to investigate the concentration of mass inside the topic simplex. It suffices to focus on two
types of geometric objects: cones and spheres, which provide the basis for a complete coverage of the
simplex. To gain intuition of our procedure, which we call Conic Scan-and-Cover (CoSAC) approach,
imagine someone standing at a center point of a triangular dark room trying to figure out all corners
with a portable flashlight, which can produce a cone of light. A room corner can be identified with
the direction of the farthest visible data objects. Once a corner is found, one can turn the flashlight to
another direction to scan for the next ones. See Fig. 1a, where red denotes the scanned area. To make
sure that all corners are detected, the cones of light have to be open to an appropriate range of angles
so that enough data objects can be captured and removed from the room. To make sure no false
corners are declared, we also need a suitable stopping criterion, relying only on data points that lie beyond a certain spherical radius, see Fig. 1b. Hence, we need to be able to gauge the concentration of mass for suitable cones and spherical balls in $\Delta^{V-1}$. This is the subject of the next section.
3 Geometric estimation of the topic simplex
We start by representing $B$ in terms of its convex and angular geometry. First, $B$ is centered at a point denoted by $C_p$. The centered probability simplex is denoted by $\Delta_0^{V-1} := \{x \in \mathbb{R}^V \mid x + C_p \in \Delta^{V-1}\}$.
[Figure 1: Complete coverage of topic simplex by cones and a spherical ball for K = 3, V = 3. (a) An incomplete coverage using 3 cones (containing red points). (b) Complete coverage using 3 cones (red) and a ball (yellow). (c) Cap $\Lambda_c(v_1)$ and cone $S_\omega(v_1)$.]
Then, write $b_k := \beta_k - C_p \in \Delta_0^{V-1}$ for $k = 1, \ldots, K$ and $\bar{p}_m := p_m - C_p \in \Delta_0^{V-1}$ for $m = 1, \ldots, M$. Note that re-centering leaves the corresponding barycentric coordinates $\theta_m \in \Delta^{K-1}$ unchanged. Moreover, the extreme points of the centered topic simplex $\tilde{B} := \mathrm{Conv}\{b_1, \ldots, b_K\}$ can now be represented by their directions $v_k \in \mathbb{R}^V$ and corresponding radii $R_k \in \mathbb{R}_+$ such that $b_k = R_k v_k$ for any $k = 1, \ldots, K$.
3.1 Coverage of the topic simplex
The first step toward formulating a CoSAC approach is to show how $\tilde{B}$ can be covered with exactly $K$ cones and one spherical ball positioned at $C_p$. A cone is defined as the set $S_\omega(v) := \{p \in \Delta_0^{V-1} \mid d_{\cos}(v, p) < \omega\}$, where we employ the angular distance (a.k.a. cosine distance) $d_{\cos}(v, p) := 1 - \cos(v, p)$ and $\cos(v, p)$ is the cosine of the angle $\angle(v, p)$ formed by vectors $v$ and $p$.

The conical coverage It is possible to choose $\omega$ so that the topic simplex can be covered with exactly $K$ cones, that is, $\bigcup_{k=1}^K S_\omega(v_k) \supseteq \tilde{B}$. Moreover, each cone contains exactly one vertex. Suppose that $C_p$ is the incenter of the topic simplex $\tilde{B}$, with $r$ being the inradius. The incenter and inradius correspond to the maximum volume sphere contained in $\tilde{B}$. Let $a_{i,k}$ denote the distance between the $i$-th and $k$-th vertices of $\tilde{B}$, with $a_{\min} \leq a_{i,k} \leq a_{\max}$ for all $i, k$, and $R_{\max}, R_{\min}$ such that $R_{\min} \leq R_k := \|b_k\|_2 \leq R_{\max}$ for all $k = 1, \ldots, K$. Then we can establish the following.

Proposition 1. For simplex $\tilde{B}$ and $\omega \in (\omega_1, \omega_2)$, where $\omega_1 = 1 - r/R_{\max}$ and $\omega_2 = \max\{a_{\max}^2/(2R_{\max}^2),\ \max_{i,k=1,\ldots,K}(1 - \cos(b_i, b_k))\}$, the cone $S_\omega(v)$ around any vertex direction $v$ of $\tilde{B}$ contains exactly one vertex. Moreover, complete coverage holds: $\bigcup_{k=1}^K S_\omega(v_k) \supseteq \tilde{B}$.
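The angular distance and the cone membership test translate directly into code. A minimal sketch in pure Python (illustrative only, not the authors' implementation):

```python
import math

def cos_sim(u, v):
    # cos(u, v) = <u, v> / (||u||_2 ||v||_2)
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def d_cos(u, v):
    # Angular (cosine) distance: d_cos(u, v) = 1 - cos(u, v).
    return 1.0 - cos_sim(u, v)

def in_cone(p, v, omega):
    # Membership in the cone S_omega(v) = {p : d_cos(v, p) < omega}.
    return d_cos(v, p) < omega
```

Note that $d_{\cos}$ ranges over $[0, 2]$, with orthogonal vectors at distance 1, which is why admissible $\omega$ can exceed 1 under angular separation.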
We say there is angular separation if $\cos(b_i, b_k) \leq 0$ for any $i, k = 1, \ldots, K$ (i.e., the angles for all pairs are at least $\pi/2$); then $\omega \in (1 - r/R_{\max}, 1] \neq \emptyset$. Thus, under angular separation, the range of $\omega$ that allows for full coverage is nonempty independently of $K$. Our result is in agreement with that of Nguyen (2015), whose result suggested that the topic simplex $B$ can be consistently estimated without knowing $K$, provided there is a minimum edge length $a_{\min} > 0$. The notion of angular separation leads naturally to the Conic Scan-and-Cover algorithm. Before getting there, we show a series of results allowing us to further extend the range of admissible $\omega$.
The inclusion of a spherical ball centered at $C_p$ allows us to expand substantially the range of $\omega$ for which conical coverage continues to hold. In particular, we can reduce the lower bound on $\omega$ in Proposition 1, since we only need to cover the regions near the vertices of $\tilde{B}$ with cones, using the following proposition. Fig. 1b provides an illustration.
Proposition 2. Let $B(C_p, R) = \{\tilde{p} \in \mathbb{R}^V \mid \|\tilde{p} - C_p\|_2 \leq R\}$, $R > 0$; $\omega_1, \omega_2$ given in Prop. 1, and
$$\omega_3 := 1 - \min_{i,k}\ \min\left\{ \frac{R_k \sin^2(b_i, b_k)}{R} + \cos(b_i, b_k)\sqrt{1 - \frac{R_k^2 \sin^2(b_i, b_k)}{R^2}},\ 1 \right\}, \qquad (1)$$
then we have $\bigcup_{k=1}^K S_\omega(v_k) \cup B(C_p, R) \supseteq \tilde{B}$ whenever $\omega \in (\min\{\omega_1, \omega_3\}, \omega_2)$.
Notice that as $R \to R_{\max}$, the value of $\omega_3 \to 0$. Hence, if $R_{\min} \leq R \leq R_{\max}$, the admissible range for $\omega$ in Prop. 2 results in a substantial strengthening from Prop. 1. It is worth noting that the above two geometric propositions do not require any distributional properties inside the simplex.

Coverage leftovers In practice complete coverage may fail if $\omega$ and $R$ are chosen outside of the corresponding ranges suggested by the previous two propositions. In that case, it is useful to note that the leftover regions will have a very low mass. Next we quantify the mass inside a cone that does contain a vertex, which allows us to reject a cone that has low mass, and therefore does not contain a vertex.
Proposition 3. The cone $S_\omega(v_1)$ whose axis is a topic direction $v_1$ has mass
$$\mathbb{P}(S_\omega(v_1)) > \mathbb{P}(\Lambda_c(b_1)) = \frac{\int_{1-c}^{1} \theta_1^{\alpha_1-1}(1-\theta_1)^{\sum_{i\neq 1}\alpha_i-1}\, d\theta_1}{\int_{0}^{1} \theta_1^{\alpha_1-1}(1-\theta_1)^{\sum_{i\neq 1}\alpha_i-1}\, d\theta_1}$$
$$= \frac{c^{\sum_{i\neq 1}\alpha_i}(1-c)^{\alpha_1}\,\Gamma\!\left(\sum_{i=1}^K \alpha_i\right)}{\left(\sum_{i\neq 1}\alpha_i\right)\Gamma(\alpha_1)\,\Gamma\!\left(\sum_{i\neq 1}\alpha_i\right)}\left(1 + \frac{c\sum_{i=1}^K\alpha_i}{\sum_{i\neq 1}\alpha_i + 1} + \frac{c^2\left(\sum_{i=1}^K\alpha_i\right)\left(\sum_{i=1}^K\alpha_i+1\right)}{\left(\sum_{i\neq 1}\alpha_i+1\right)\left(\sum_{i\neq 1}\alpha_i+2\right)} + \cdots\right), \qquad (2)$$
where $\Lambda_c(b_1)$ is the simplicial cap of $S_\omega(v_1)$, which is composed of vertex $b_1$ and a base parallel to the corresponding base of $\tilde{B}$, cutting the adjacent edges of $\tilde{B}$ in the ratio $c : (1-c)$.
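The ratio of integrals on the right-hand side of Eq. (2) is the upper tail probability $\mathbb{P}(\theta_1 > 1-c)$ of a $\mathrm{Beta}(\alpha_1, \sum_{i\neq 1}\alpha_i)$ distribution, so the cap mass can be checked numerically. A sketch using simple midpoint integration (illustrative only; the function and parameter names are ours):

```python
def beta_tail(alpha1, alpha_rest, c, steps=200_000):
    """P(theta_1 > 1 - c) for theta_1 ~ Beta(alpha1, alpha_rest):
    the ratio of incomplete Beta integrals in Eq. (2)."""
    h = 1.0 / steps
    total = tail = 0.0
    for i in range(steps):
        t = (i + 0.5) * h  # midpoint rule tolerates the endpoint singularities when alphas < 1
        w = t ** (alpha1 - 1) * (1.0 - t) ** (alpha_rest - 1)
        total += w
        if t > 1.0 - c:
            tail += w
    return tail / total
```

For example, with $\alpha_1 = \sum_{i\neq 1}\alpha_i = 1$ the distribution is uniform and the cap mass is exactly $c$, which is a handy sanity check.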
See Fig. 1c for an illustration of the simplicial cap described in the proposition. Given the lower bound for the mass around a cone containing a vertex, we have arrived at the following guarantee.

Proposition 4. For $\lambda \in (0, 1)$, let $c_\lambda$ be such that $\lambda = \min_k \mathbb{P}(\Lambda_{c_\lambda}(b_k))$ and let $\omega_\lambda$ be such that
$$c_\lambda = \left( \sqrt{1 - \frac{r^2}{R_{\max}^2}}\,\bigl(\sin(d)\cot(\arccos(1 - \omega_\lambda)) + \cos(d)\bigr) \right)^{-1}, \qquad (3)$$
where the angle $d \leq \min_{i,k} \angle(b_k, b_k - b_i)$. Then, as long as
$$\omega \in \left( \omega_\lambda,\ \max\left\{ \frac{a_{\max}^2}{2R_{\max}^2},\ \max_{i,k=1,\ldots,K}(1 - \cos(b_i, b_k)) \right\} \right), \qquad (4)$$
the bound $\mathbb{P}(S_\omega(v_k)) \geq \lambda$ holds for all $k = 1, \ldots, K$.
3.2 CoSAC: Conic Scan-and-Cover algorithm
Having laid out the geometric foundations, we are ready to present the Conic Scan-and-Cover (CoSAC) algorithm, which is a scanning procedure for detecting the presence of simplicial vertices based on data drawn randomly from the simplex. The idea is simple: iteratively pick the farthest point from the center estimate $\hat{C}_p := \frac{1}{M}\sum_m p_m$, say $v$, then construct a cone $S_\omega(v)$ for some suitably chosen $\omega$, and remove all the data residing in this cone. Repeat until there is no data point left.

Specifically, let $A = \{1, \ldots, M\}$ be the index set of the initially unseen data, then set $v := \arg\max_{\bar{p}_m : m \in A} \|\bar{p}_m\|_2$ and update $A := A \setminus S_\omega(v)$. The parameter $\omega$ needs to be sufficiently large to ensure that the farthest point is a good estimate of a true vertex, and that the scan will be completed in exactly $K$ iterations; $\omega$ needs to be not too large, so that $S_\omega(v)$ does not contain more than one vertex. The existence of such an $\omega$ is guaranteed by Prop. 1. In particular, for an equilateral $\tilde{B}$, the condition of Prop. 1 is satisfied as long as $\omega \in (1 - 1/\sqrt{K-1},\ 1 + 1/(K-1))$.
In our setting, $K$ is unknown. A smaller $\omega$ would be a more robust choice, and accordingly the set $A$ will likely remain non-empty after $K$ iterations. See the illustration of Fig. 1a, where the blue regions correspond to $A$ after $K = 3$ iterations of the scan. As a result, we proceed by adopting a stopping criterion based on Prop. 2: the procedure is stopped as soon as $\forall m \in A:\ \|\bar{p}_m\|_2 < R$, which allows us to complete the scan in $K$ iterations (as in Fig. 1b for $K = 3$).

The CoSAC algorithm is formally presented as Algorithm 1. Its running is illustrated in Fig. 2, where we show iterations 1, 26, 29, 30 of the algorithm by plotting norms of the centered documents in the active set $A$ and cone $S_\omega(v)$ against cosine distance to the chosen direction of a topic. Iteration 30 (right) satisfies the stopping criterion and therefore CoSAC recovered the correct $K = 30$. Note that this type of visual representation can be useful in practice to verify the choices of $\omega$ and $R$. The following theorem establishes the consistency of the CoSAC procedure.
Theorem 1. Suppose $\{\beta_1, \ldots, \beta_K\}$ are the true topics, the incenter $C_p$ is given, $\theta_m \sim \mathrm{Dir}_K(\alpha)$ and $p_m := \sum_k \beta_k \theta_{mk}$ for $m = 1, \ldots, M$ and $\alpha \in \mathbb{R}_+^K$. Let $\hat{K}$ be the estimated number of topics and $\{\hat{\beta}_1, \ldots, \hat{\beta}_{\hat{K}}\}$ be the output of Algorithm 1 trained with $\omega$ and $R$ as in Prop. 2. Then for all $\epsilon > 0$,
$$\mathbb{P}\left( \left\{ \min_{j \in \{1, \ldots, \hat{K}\}} \|\beta_i - \hat{\beta}_j\| > \epsilon,\ \text{for any } i \in \{1, \ldots, K\} \right\} \cup \{\hat{K} \neq K\} \right) \to 0 \text{ as } M \to \infty.$$
Remark We found the choices $\omega = 0.6$ and $R$ equal to the median of $\{\|\bar{p}_1\|_2, \ldots, \|\bar{p}_M\|_2\}$ to be robust in practice and agreeing with our theoretical results. From Prop. 3 it follows that choosing $R$ as the median length is equivalent to choosing $\omega$ resulting in an edge cut ratio $c$ such that $1 - \frac{K}{K-1}\left(\frac{c}{1-c}\right)^{1-1/K} \approx 1/2$; then $c \approx \left(\frac{K-1}{2K}\right)^{K/(K-1)}$, which, for any equilateral topic simplex $B$, is satisfied by setting $\omega \in (0.3, 1)$, provided that $K \leq 2000$, based on Eq. (3).
4 Document Conic Scan-and-Cover algorithm
In the topic modeling problem, the $p_m$ for $m = 1, \ldots, M$ are not given. Instead, under the bag-of-words assumption, we are given the frequencies of words in documents $w_1, \ldots, w_M$, which provide a point estimate $\bar{w}_m := w_m / N_m$ for $p_m$. Clearly, if the number of documents $M \to \infty$ and the length of documents $N_m \to \infty\ \forall m$, we can use Algorithm 1 with the plug-in estimates $\bar{w}_m$ in place of $p_m$, since $\bar{w}_m \to p_m$. Moreover, $C_p$ will be estimated by $\hat{C}_p := \frac{1}{M}\sum_m \bar{w}_m$. In practice, $M$ and $N_m$ are finite, some of which may take relatively small values. Taking the topic direction to be the farthest point in the topic simplex, i.e., $v = \arg\max_{\tilde{w}_m : m \in A} \|\tilde{w}_m\|_2$, where $\tilde{w}_m := \bar{w}_m - \hat{C}_p \in \Delta_0^{V-1}$, may no longer yield a robust estimate, because the variance of this topic direction estimator can be quite high (in the Supplement we show that it is upper bounded by $(1 - 1/V)/N_m$).
To obtain improved estimates, we propose a technique that we call "mean-shifting". Instead of taking the farthest point in the simplex, this technique is designed to shift the estimate of a topic to a high density region, where true topics are likely to be found. Precisely, given a (current) cone $S_\omega(v)$, we re-position the cone by updating $v := \arg\min_v \sum_{m \in S_\omega(v)} \|\tilde{w}_m\|_2 (1 - \cos(\tilde{w}_m, v))$. In other words, we re-position the cone by centering it around the mean direction of the cone weighted by the norms of the data points inside, which is simply given by $v \leftarrow \sum_{m \in S_\omega(v)} \tilde{w}_m / \mathrm{card}(S_\omega(v))$. This results in reduced variance of the topic direction estimate, due to the averaging over data residing in the cone.
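The mean-shifting update can be sketched as follows: repeat the membership/mean steps until the cone stops changing. This is an illustration under our own naming, not the authors' implementation:

```python
import math

def _d_cos(u, w):
    # Cosine distance 1 - cos(u, w).
    nu = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in w))
    return 1.0 - sum(a * b for a, b in zip(u, w)) / nu

def mean_shift_direction(v, docs, omega, max_iter=50):
    """Repeat v <- (sum of centered docs in the cone) / card(cone)
    until the cone membership stabilizes."""
    members = None
    for _ in range(max_iter):
        cone = [m for m, w in enumerate(docs) if _d_cos(v, w) < omega]
        if not cone or cone == members:
            break
        members = cone
        dim = len(docs[0])
        v = [sum(docs[m][j] for m in cone) / len(cone) for j in range(dim)]
    return v, members or []
```

Starting from a noisy farthest-point direction, the update pulls the cone axis toward the cluster of documents around the nearby vertex.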
The mean-shifting technique may be slightly modified and taken as a local update for a subsequent optimization which cycles through the entire set of documents and iteratively updates the cones. The optimization is with respect to the following weighted spherical k-means objective:
$$\min_{\|v_k\|_2 = 1,\ k=1,\ldots,K}\ \sum_{k=1}^K \sum_{m \in S^k(v_k)} \|\tilde{w}_m\|_2 \bigl(1 - \cos(v_k, \tilde{w}_m)\bigr), \qquad (5)$$
where the cones $S^k(v_k) = \{m \mid d_{\cos}(v_k, \tilde{w}_m) < d_{\cos}(v_l, \tilde{w}_m)\ \forall l \neq k\}$ yield a disjoint data partition $\bigsqcup_{k=1}^K S^k(v_k) = \{1, \ldots, M\}$ (this is different from $S_\omega(v_k)$). The rationale of the spherical k-means optimization is to use the full data for estimation of topic directions, hence further reducing the variance due to short documents. The connection between objective function (5) and topic simplex estimation is given in the Supplement. Finally, obtain the topic norms $R_k$ along the directions $v_k$ using maximum projection: $R_k := \max_{m \in S^k(v_k)} \langle v_k, \tilde{w}_m \rangle$. Our entire procedure is summarized in Algorithm 2.
Remark In Step 9 of the algorithm, a cone $S_\omega(v)$ with very low cardinality, i.e., $\mathrm{card}(S_\omega(v)) < \lambda M$ for some small constant $\lambda$, is discarded because such a cone is likely an outlier region that does not actually contain a true vertex. The choice of $\lambda$ is governed by the results of Prop. 4. For small $\alpha_k = 1/K\ \forall k$, and for an equilateral $\tilde{B}$, we can choose $d$ such that $\cos(d) = \sqrt{(K+1)/(2K)}$. Plugging these values into Eq. (3) leads to
$$c = \left( \sqrt{2\left(1 - \tfrac{1}{K^2}\right)} \left( \sqrt{\tfrac{K-1}{2K}}\,\sqrt{\tfrac{1-\omega}{2}} + \sqrt{\tfrac{K+1}{2K}} \right) \right)^{-1}.$$
Now, plugging in $\omega = 0.6$, we obtain $\lambda \leq \mathbb{P}(\Lambda_c) \approx K^{-1}$ for large $K$. Our approximations were based on large $K$ to get a sense of $\lambda$; we now make a conservative choice $\lambda = 0.001$, so that $K^{-1} > \lambda\ \forall K < 1000$. As a result, a topic is rejected if the corresponding cone contains less than 0.1% of the data.
Finding anchor words using Conic Scan-and-Cover Another approach to reduce the noise is
to consider the problem from a different viewpoint, where Algorithm 1 will prove itself useful.
RecoverKL by Arora et al. (2012) can identify topics with diminishing errors (in number of documents
M ), provided that topics contain anchor words. The problem of finding anchor words geometrically
reduces to identifying rows of the word-to-word co-occurrence matrix that form a simplex containing
other rows of the same matrix (cf. Arora et al. (2012) for details). An advantage of this approach
is that noise in the word-to-word co-occurrence matrix goes to zero as M ? ? no matter the
document lengths, hence we can use Algorithm 1 with "documents" being rows of the word-to-word
co-occurrence matrix to learn anchor words nonparametrically and then run RecoverKL to obtain
topic estimates. We will call this procedure cscRecoverKL.
Algorithm 1 Conic Scan-and-Cover (CoSAC)
Input: document generating distributions $p_1, \ldots, p_M$, angle threshold $\omega$, norm threshold $R$
Output: topics $\beta_1, \ldots, \beta_k$
1: $\hat{C}_p = \frac{1}{M}\sum_m p_m$ {find center}; $\bar{p}_m := p_m - \hat{C}_p$ for $m = 1, \ldots, M$ {center the data}
2: $A_1 = \{1, \ldots, M\}$ {initialize active set}; $k = 1$ {initialize topic count}
3: while $\exists m \in A_k : \|\bar{p}_m\|_2 > R$ do
4:   $v_k = \arg\max_{\bar{p}_m : m \in A_k} \|\bar{p}_m\|_2$ {find topic}
5:   $S_\omega(v_k) = \{m : d_{\cos}(\bar{p}_m, v_k) < \omega\}$ {find cone of near documents}
6:   $A_{k+1} = A_k \setminus S_\omega(v_k)$ {update active set}
7:   $\beta_k = v_k + \hat{C}_p$, $k = k + 1$ {compute topic}
8: end while
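A compact, self-contained sketch of Algorithm 1 in pure Python. The defaults $\omega = 0.6$ and $R$ = median centered norm follow the remark above; the function name and structure are ours, not the authors' released code:

```python
import math

def cosac(points, omega=0.6, R=None):
    """Conic Scan-and-Cover on distributions p_1..p_M; returns topic estimates."""
    M, V = len(points), len(points[0])
    center = [sum(p[j] for p in points) / M for j in range(V)]         # C_p hat
    centered = [[p[j] - center[j] for j in range(V)] for p in points]  # centered p_m
    norms = [math.sqrt(sum(x * x for x in c)) for c in centered]
    if R is None:
        R = sorted(norms)[M // 2]                                      # median norm

    def d_cos(u, w):
        nu = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in w))
        return 1.0 if nu == 0 else 1.0 - sum(a * b for a, b in zip(u, w)) / nu

    active, topics = set(range(M)), []
    while any(norms[m] > R for m in active):
        far = max(active, key=lambda m: norms[m])                      # farthest point
        v = centered[far]
        cone = {m for m in active if d_cos(v, centered[m]) < omega}    # S_omega(v)
        active -= cone                                                 # update active set
        topics.append([v[j] + center[j] for j in range(V)])            # beta_k = v + C_p
    return topics
```

On well-separated data concentrated near the simplex vertices, each scan removes one cluster and the number of returned topics equals the number of vertices.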
[Figure 2: Iterations 1, 26, 29, 30 of Algorithm 1, plotting document norms $\|\bar{p}_m\|_2$ against cosine distance $d_{\cos}(v_k, \bar{p}_m)$ for topics $v_1, v_{26}, v_{29}, v_{30}$, with $\omega = 0.60$ and $R = 0.047$. Red are the documents in the cone $S_\omega(v_k)$; blue are the documents in the active set $A_{k+1}$ for the next iteration. Yellow are documents with $\|\bar{p}_m\|_2 < R$.]
5 Experimental results
5.1 Simulation experiments
In the simulation studies we shall compare CoSAC (Algorithm 2) and cscRecoverKL (based on Algorithm 1), neither of which has access to the true $K$, versus popular parametric topic modeling approaches (trained with the true $K$): Stochastic Variational Inference (SVI), the collapsed Gibbs sampler, RecoverKL, and GDM (more details in the Supplement). The comparisons are done on the basis of the minimum-matching Euclidean distance, which quantifies the distance between topic simplices (Tang et al., 2014), and running times (a perplexity score comparison is given in the Supplement). Lastly we will demonstrate the ability of CoSAC to recover the correct number of topics for varying $K$.
Algorithm 2 CoSAC for documents
Input: normalized documents $\bar{w}_1, \ldots, \bar{w}_M$, angle threshold $\omega$, norm threshold $R$, outlier threshold $\lambda$
Output: topics $\beta_1, \ldots, \beta_k$
1: $\hat{C}_p = \frac{1}{M}\sum_m \bar{w}_m$ {find center}; $\tilde{w}_m := \bar{w}_m - \hat{C}_p$ for $m = 1, \ldots, M$ {center the data}
2: $A_1 = \{1, \ldots, M\}$ {initialize active set}; $k = 1$ {initialize topic count}
3: while $\exists m \in A_k : \|\tilde{w}_m\|_2 > R$ do
4:   $v_k = \arg\max_{\tilde{w}_m : m \in A_k} \|\tilde{w}_m\|_2$ {initialize direction}
5:   while $v_k$ not converged do {mean-shifting}
6:     $S_\omega(v_k) = \{m : d_{\cos}(\tilde{w}_m, v_k) < \omega\}$ {find cone of near documents}
7:     $v_k = \sum_{m \in S_\omega(v_k)} \tilde{w}_m / \mathrm{card}(S_\omega(v_k))$ {update direction}
8:   end while
9:   $A_{k+1} = A_k \setminus S_\omega(v_k)$ {update active set}
10:  if $\mathrm{card}(S_\omega(v_k)) > \lambda M$ then $k = k + 1$ {record topic direction}
11: end while
12: $v_1, \ldots, v_k$ = weighted spherical k-means($v_1, \ldots, v_k$, $\tilde{w}_1, \ldots, \tilde{w}_M$)
13: for $l$ in $\{1, \ldots, k\}$ do
14:   $R_l := \max_{m \in S^l(v_l)} \langle v_l, \tilde{w}_m \rangle$ {find topic length along direction $v_l$}; $\beta_l = R_l v_l + \hat{C}_p$ {compute topic}
15: end for
[Figure 3: Minimum matching Euclidean distance for (a) varying corpora size, (b) varying length of documents; (c) Running times for varying corpora size; (d) Estimation of number of topics. Methods compared: cscRecoverKL, RecoverKL, CoSAC, GDM, Gibbs, SVI (and Bayes factor in (d)).]

[Figure 4: Gibbs sampler convergence analysis for (a) Minimum matching Euclidean distance for corpora sizes 1000 and 5000; (b) Perplexity for corpora sizes 1000 and 5000; (c) Perplexity for NYTimes data (LDA Gibbs, HDP Gibbs, CoSAC).]
Estimation of the LDA topics First we evaluate the ability of CoSAC and cscRecoverKL to estimate the topics $\beta_1, \ldots, \beta_K$, fixing $K = 15$. Fig. 3(a) shows performance for the case of fewer ($M \in [100, 10000]$) but longer ($N_m = 500$) documents (e.g. scientific articles, novels, legal documents). CoSAC demonstrates performance comparable in accuracy to the Gibbs sampler and GDM.
Next we consider larger corpora ($M = 30000$) of shorter ($N_m \in [25, 300]$) documents (e.g. news articles, social media posts). Fig. 3(b) shows that this scenario is harder, and CoSAC matches the performance of the Gibbs sampler for $N_m \geq 75$. Indeed, across both experiments CoSAC only made mistakes in terms of $K$ for the case of $N_m = 25$, when it was underestimating by 4 topics on average, and for $N_m = 50$, when it was off by around 1, which explains the earlier observation. Experiments with varying $V$ and $\alpha$ are given in the Supplement.
It is worth noting that cscRecoverKL appears to be strictly better than its predecessor. This suggests
that our procedure for selection of anchor words is more accurate in addition to being nonparametric.
Running time A notable advantage of the CoSAC algorithm is its speed. In Fig. 3(c) we see that Gibbs, SVI, GDM and CoSAC all have linear complexity growth in $M$, but the slopes are very different: approximately $I N_m$ for SVI and Gibbs (where $I$ is the number of iterations, which has to be large enough for convergence), the number of k-means iterations to converge for GDM, and of order $K$ for the CoSAC procedure, making it the fastest algorithm of all under consideration.
Next we compare CoSAC to the per-iteration quality of the Gibbs sampler trained with 500 iterations for $M = 1000$ and $M = 5000$. Fig. 4(b) shows that the Gibbs sampler, when the true $K$ is given, can achieve a good perplexity score as fast as CoSAC and outperforms it as training continues, although Fig. 4(a) suggests that much longer training time is needed for the Gibbs sampler to achieve good topic estimates and small estimation variance.
Estimating number of topics Model selection in the LDA context is quite a challenging task and,
to the best of our knowledge, there is no "go to" procedure. One of the possible approaches is based
on refitting LDA with multiple choices of K and using Bayes Factor for model selection (Griffiths &
Steyvers, 2004). Another option is to adopt the Hierarchical Dirichlet Process (HDP) model, but we
should understand that it is not a procedure to estimate K of the LDA model, but rather a particular
prior on the number of topics, that assumes K to grow with the data. A more recent suggestion is to
slightly modify LDA and use Bayes moment matching (Hsu & Poupart, 2016), but, as can be seen
from Figure 2 of their paper, estimation variance is high and the method is not very accurate (we
tried it with true K = 15 and it took above 1 hour to fit and found 35 topics). Next we compare
Bayes factor model selection versus CoSAC and cscRecoverKL for $K \in [5, 50]$. Fig. 3(d) shows that
CoSAC consistently recovers exact number of topics in a wide range.
We also observe that cscRecoverKL does not estimate $K$ well (it underestimates) in the higher range. This is expected because cscRecoverKL finds the number of anchor words, not topics; the former decreases as the latter increases. Attempting to fit RecoverKL with more topics than there are anchor words might lead to deteriorating performance, and our modification can address this limitation of the RecoverKL method.
5.2 Real data analysis
In this section we demonstrate the CoSAC algorithm for topic modeling on one of the standard bag of words datasets, the NYTimes news articles. After preprocessing we obtained M ≈ 130,000 documents over V = 5320 words. The Bayes factor for the LDA selected the smallest model among K ∈ [80, 195], while CoSAC selected 159 topics. We think that the disagreement between the two procedures is attributed to the misspecification of the LDA model when real data is in play, which affects the Bayes factor, while CoSAC is largely based on the geometry of the topic simplex.
The results are summarized in Table 1: CoSAC found 159 topics in less than 20 min; cscRecoverKL estimated the number of anchor words in the data to be 27, leading to fewer topics. Fig. 4(c) compares the CoSAC perplexity score to the per-iteration test perplexity of the LDA (1000 iterations) and HDP (100 iterations) Gibbs samplers. Text files with top 20 words of all topics are included in the Supplementary
material. We note that the CoSAC procedure recovered meaningful topics, contextually similar to LDA and HDP (e.g. elections, terrorist attacks, Enron scandal, etc.), and also recovered more specific topics about Mike Tyson, boxing, and the case of Timothy McVeigh, which were present among the HDP topics but not the LDA ones. We conclude that CoSAC is a practical procedure for topic modeling on large scale corpora, able to find meaningful topics in a short amount of time.
6 Discussion
We have analyzed the problem of estimating the topic simplex without assuming the number of vertices (i.e., topics) to be known. We showed that it is possible to cover the topic simplex using two types of geometric shapes, cones and a sphere, leading to a class of Conic Scan-and-Cover algorithms.

Table 1: Modeling topics of NYTimes articles

Method       | K       | Perplexity | Coherence  | Time
cscRecoverKL | 27      | 2603       | -238       | 37 min
HDP Gibbs    | 221 ± 5 | 1477 ± 1.6 | -442 ± 1.7 | 35 hours
LDA Gibbs    | 80      | 1520 ± 1.5 | -300 ± 0.7 | 5.3 hours
CoSAC        | 159     | 1568       | -322       | 19 min

We
then proposed several geometric correction techniques to account for the noisy data. Our procedure is
accurate in recovering the true number of topics, while remaining practical due to its computational
speed. We think that angular geometric approach might allow for fast and elegant solutions to other
clustering problems, although as of now it does not immediately offer a unifying problem solving
framework like MCMC or variational inference. An interesting direction in a geometric framework is
related to building models based on geometric quantities such as distances and angles.
Acknowledgments
This research is supported in part by grants NSF CAREER DMS-1351362, NSF CNS-1409303, a
research gift from Adobe Research and a Margaret and Herman Sokol Faculty Award.
References

Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S. M., and Liu, Y. A spectral algorithm for Latent Dirichlet Allocation. NIPS, 2012.

Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. arXiv preprint arXiv:1212.4777, 2012.

Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet Allocation. J. Mach. Learn. Res., 3:993-1022, March 2003.

Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, Sep 01 1990.

Griffiths, Thomas L and Steyvers, Mark. Finding scientific topics. PNAS, 101(suppl. 1):5228-5235, 2004.

Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. J. Mach. Learn. Res., 14(1):1303-1347, May 2013.

Hsu, Wei-Shou and Poupart, Pascal. Online Bayesian moment matching for topic modeling with unknown number of topics. In Advances in Neural Information Processing Systems, pp. 4529-4537, 2016.

Nguyen, XuanLong. Posterior contraction of the population polytope in finite admixture models. Bernoulli, 21(1):618-646, 02 2015.

Pritchard, Jonathan K, Stephens, Matthew, and Donnelly, Peter. Inference of population structure using multilocus genotype data. Genetics, 155(2):945-959, 2000.

Tang, Jian, Meng, Zhaoshi, Nguyen, Xuanlong, Mei, Qiaozhu, and Zhang, Ming. Understanding the limiting factors of topic modeling via posterior contraction analysis. In Proceedings of The 31st International Conference on Machine Learning, pp. 190-198. ACM, 2014.

Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476), 2006.

Xu, Wei, Liu, Xin, and Gong, Yihong. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '03, pp. 267-273. ACM, 2003.

Yurochkin, Mikhail and Nguyen, XuanLong. Geometric Dirichlet means algorithm for topic inference. In Advances in Neural Information Processing Systems, pp. 2505-2513, 2016.
FALKON: An Optimal Large Scale Kernel Method
Alessandro Rudi
INRIA - Sierra Project-team,
École Normale Supérieure, Paris

Luigi Carratino
University of Genoa
Genova, Italy

Lorenzo Rosasco
University of Genoa,
LCSL, IIT & MIT
Abstract
Kernel methods provide a principled way to perform non linear, nonparametric
learning. They rely on solid functional analytic foundations and enjoy optimal
statistical properties. However, at least in their basic form, they have limited
applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial
step in scaling up kernel methods, proposing FALKON, a novel algorithm that
allows to efficiently process millions of points. FALKON is derived combining
several algorithmic principles, namely stochastic subsampling, iterative solvers and
preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n√n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit
experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit
parallel/distributed architectures.
1 Introduction
The goal in supervised learning is to learn from examples a function that predicts well on new data. Nonparametric methods are often crucial since the functions to be learned can be non-linear and complex. Kernel methods are probably the most popular among nonparametric learning methods, but despite excellent theoretical properties, they have limited applications in large scale learning because of time and memory requirements, typically at least quadratic in the number of data points.
Overcoming these limitations has motivated a variety of practical approaches including gradient
methods, as well as accelerated, stochastic and preconditioned extensions, to improve time complexity [1, 2, 3, 4, 5, 6]. Random projections provide an approach to reduce memory requirements, popular methods including Nyström [7, 8], random features [9], and their numerous extensions. From a
theoretical perspective a key question has become to characterize statistical and computational tradeoffs, that is if, or under which conditions, computational gains come at the expense of statistical
accuracy. In particular, recent results considering least squares, show that there are large class of
problems for which, by combining Nyström or random features approaches [10, 11, 12, 13, 14, 15]
with ridge regression, it is possible to substantially reduce computations, while preserving the
same optimal statistical accuracy of exact kernel ridge regression (KRR). While statistical lower
bounds exist for this setting, there are no corresponding computational lower bounds. The state of
the art approximation of KRR, for which optimal statistical bounds are known, typically requires
complexities that are roughly O(n2 ) in time and memory (or possibly O(n) in memory, if kernel
computations are made on the fly).
In this paper, we propose and study FALKON, a new algorithm that, to the best of our knowledge,
has the best known theoretical guarantees. At the same time FALKON provides an efficient approach
to apply kernel methods on millions of points, and, tested on a variety of large scale problems, it
* E-mail: [email protected]. This work was done when A.R. was working at the Laboratory of Computational and Statistical Learning (Istituto Italiano di Tecnologia).
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
outperforms previously proposed methods while utilizing only a fraction of the computational resources.
More precisely, we take a substantial step in provably reducing the computational requirements, showing that, up to logarithmic factors, a time/memory complexity of O(n√n) and O(n) is sufficient
for optimal statistical accuracy. Our new algorithm exploits the idea of using Nyström methods
to approximate the KRR problem, but also to efficiently compute a preconditioning to be used in
conjugate gradient. To the best of our knowledge this is the first time all these ideas are combined
and put to fruition. Our theoretical analysis derives optimal statistical rates both in a basic setting and
under benign conditions for which fast rates are possible. The potential benefits of different sampling
strategies are also analyzed. Most importantly, the empirical performances are thoroughly tested
on available large scale data-sets. Our results show that, even on a single machine, FALKON can
outperforms state of the art methods on most problems both in terms of time efficiency and prediction
accuracy. In particular, our results suggest that FALKON could be a viable kernel alternative to deep
fully connected neural networks for large scale problems.
The rest of the paper is organized as follows. In Sect. 2 we give some background on kernel methods.
In Sect. 3 we introduce FALKON, while in Sect. 4 we present and discuss the main technical results.
Finally in Sect. 5 we present experimental results.
2 Statistical and Computational Trade-offs in Kernel Methods
We consider the supervised learning problem of estimating a function from random noisy samples. In
statistical learning theory, this can be formalized as the problem of solving

    inf_{f ∈ H} E(f),    E(f) = ∫ (f(x) − y)² dρ(x, y),     (1)

given samples (x_i, y_i)_{i=1}^n from ρ, which is fixed but unknown, and where H is a space of candidate
solutions. Ideally, a good empirical solution f̂ should have small excess risk

    R(f̂) = E(f̂) − inf_{f ∈ H} E(f),     (2)
since this implies it will generalize/predict well new data. In this paper, we are interested in
both computational and statistical aspects of the above problem. In particular, we investigate the
computational resources needed to achieve optimal statistical accuracy, i.e. minimal excess risk. Our
focus is on the most popular class of nonparametric methods, namely kernel methods.
Kernel methods and ridge regression. Kernel methods consider a space H of functions

    f(x) = Σ_{i=1}^n α_i K(x, x_i),     (3)
where K is a positive definite kernel². The coefficients α_1, . . . , α_n are typically derived from a
convex optimization problem, that for the square loss is

    f̂_{n,λ} = argmin_{f ∈ H} (1/n) Σ_{i=1}^n (f(x_i) − y_i)² + λ‖f‖²_H,     (4)
and defines the so called kernel ridge regression (KRR) estimator [16]. An advantage of least squares
approaches is that they reduce computations to a linear system

    (K_{nn} + λnI) α = ŷ,     (5)

where K_{nn} is the n × n matrix defined by (K_{nn})_{ij} = K(x_i, x_j) and ŷ = (y_1, . . . , y_n). We next
comment on computational and statistical properties of KRR.
Computations. Solving Eq. (5) for large datasets is challenging. A direct approach requires O(n²) in space, to allocate K_{nn}, O(n²) kernel evaluations, and O(n² c_K + n³) in time, to compute and invert K_{nn} (c_K is the kernel evaluation cost, assumed constant and omitted throughout).
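For concreteness, the direct solve of Eq. (5) takes only a few lines of NumPy; the Gaussian kernel and the toy data below are our own placeholder choices, not from the paper:

```python
import numpy as np

def krr_fit(X, y, lam):
    """Solve (K_nn + lam*n*I) alpha = y directly: O(n^2) memory, O(n^3) time."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq)                                        # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha, K

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0])
alpha, K = krr_fit(X, y, lam=1e-3)
train_pred = K @ alpha    # f(x_i) = sum_j alpha_j K(x_i, x_j) on the training points
```
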
Statistics. Under basic assumptions, KRR achieves an error R(f̂_{λ_n}) = O(n^{−1/2}), for λ_n = n^{−1/2}, which is optimal in a minimax sense and can be improved only under more stringent assumptions [17, 18].
² K is positive definite if the matrix with entries K(x_i, x_j) is positive semidefinite for all x_1, . . . , x_N, N ∈ ℕ [16].
The question is then whether it is possible to achieve the statistical properties of KRR with fewer computations.
Gradient methods and early stopping. A natural idea is to consider iterative solvers and in
particular gradient methods, because of their simplicity and low iteration cost. A basic example is
computing the coefficients in (3) by

    α_t = α_{t−1} − τ [(K_{nn} α_{t−1} − ŷ) + λn α_{t−1}],     (6)

for a suitable step-size choice τ.
Computations. In this case, if t is the number of iterations, gradient methods require O(n²t) in time, O(n²) in memory and O(n²) in kernel evaluations, if the kernel matrix is stored. Note that the kernel matrix can also be computed on the fly with only O(n) memory, but then O(n²t) kernel evaluations
are required. We note that, beyond the above simple iteration, several variants have been considered
including accelerated [1, 19] and stochastic extensions [20].
Statistics. The statistical properties of iterative approaches are well studied, also in the case where λ is set to zero and regularization is performed by choosing a suitable stopping time [21]. In this latter case, the number of iterations can roughly be thought of as 1/λ, and O(√n) iterations are needed for basic gradient descent, O(n^{1/4}) for accelerated methods, and possibly O(1) iterations/epochs for stochastic methods. Importantly, we note that unlike most optimization studies, here we are considering the number of iterations needed to solve (1), rather than (4).
While the time complexity of these methods dramatically improves over KRR, and computations can be done in blocks, the memory requirements (or the number of kernel evaluations) still make the application to large scale settings cumbersome. Randomization provides an approach to tackle this challenge.
Random projections. The rough idea is to use random projections to compute K_{nn} only approximately. The most popular examples in this class of approaches are Nyström [7, 8] and random features [9] methods. In the following we focus in particular on a basic Nyström approach based on
considering functions of the form

    f̃_{λ,M}(x) = Σ_{i=1}^M α̃_i K(x, x̃_i),   with {x̃_1, . . . , x̃_M} ⊆ {x_1, . . . , x_n},     (7)
defined considering only a subset of M training points sampled uniformly. In this case, there are only
M coefficients that, following the approach in (4), can be derived considering the linear system

    H α̃ = z,   where H = K_{nM}^⊤ K_{nM} + λn K_{MM},   z = K_{nM}^⊤ ŷ.     (8)

Here K_{nM} is the n × M matrix with (K_{nM})_{ij} = K(x_i, x̃_j) and K_{MM} is the M × M matrix with (K_{MM})_{ij} = K(x̃_i, x̃_j). This method consists in subsampling the columns of K_{nn} and can be seen
as a particular form of random projections.
Computations. Direct methods for solving (8) require O(nM²) in time to form K_{nM}^⊤ K_{nM} and O(M³) for solving the linear system, and only O(nM) kernel evaluations. The naive memory requirement is O(nM) to store K_{nM}; however, if K_{nM}^⊤ K_{nM} is computed in blocks of dimension at most M × M, only O(M²) memory is needed. Iterative approaches as in (6) can also be combined with random projections [22, 23, 24] to slightly reduce time requirements (see Table 1, or Sect. F in the appendix, for more details).
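The system (8) can be formed without ever materializing K_{nM}, accumulating K_{nM}^⊤ K_{nM} and K_{nM}^⊤ ŷ block by block; the following NumPy sketch (ours, with a placeholder Gaussian kernel and toy data) illustrates this:

```python
import numpy as np

def gauss(A, B):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def nystrom_krr(X, y, M, lam, block=100, seed=0):
    """Solve (KnM^T KnM + lam*n*KMM) a = KnM^T y, streaming over rows of KnM."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    C = X[rng.choice(n, size=M, replace=False)]   # uniformly sampled Nystrom centers
    H = lam * n * gauss(C, C)
    z = np.zeros(M)
    for i in range(0, n, block):                  # only O(M^2) memory at any time
        Kb = gauss(X[i:i + block], C)
        H += Kb.T @ Kb
        z += Kb.T @ y[i:i + block]
    return C, np.linalg.solve(H, z)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0])
C, a = nystrom_krr(X, y, M=40, lam=1e-4)
pred = gauss(X, C) @ a                            # predictions on the training points
```
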
Statistics. The key point, though, is that random projections allow to dramatically reduce memory requirements as soon as M ≪ n, and the question arises of whether this comes at the expense of statistical accuracy. Interestingly, recent results considering this question show that there are large classes of problems for which M = Õ(√n) suffices for the same optimal statistical accuracy of the exact KRR [11, 12, 13].
In summary, in this case the computations needed for optimal statistical accuracy are reduced from O(n²) to O(n√n) kernel evaluations, but the best time complexity is basically O(n²). In the rest of the paper we discuss how this requirement can indeed be dramatically reduced.
3 FALKON
Our approach is based on a novel combination of randomized projections with iterative solvers plus
preconditioning. The main novelty is that we use random projections to approximate both the problem
and the preconditioning.
Preliminaries: preconditioning and KRR. We begin recalling the basic idea behind preconditioning. The key quantity is the condition number, that for a linear system is the ratio between the largest
and smallest singular values of the matrix defining the problem [25]. For example, for problem (5)
the condition number is given by
    cond(K_{nn} + λnI) = (σ_max + λn)/(σ_min + λn),

with σ_max, σ_min the largest and smallest eigenvalues of K_{nn}, respectively. The importance of the condition
number is that it captures the time complexity of iteratively solving the corresponding linear system.
For example, if a simple gradient descent (6) is used, the number of iterations needed for an accurate
solution of problem (5) is
    t = O(cond(K_{nn} + λnI) log(1/ε)).

It is shown in [23] that in this case t = √n log n iterations are needed to achieve a solution with good statistical properties. Indeed, it can be shown that roughly t ≈ (1/λ) log(1/ε) iterations are needed, where λ = 1/√n and ε = 1/n. The idea behind preconditioning is to use a suitable matrix B to define an equivalent linear
system with better condition number. For (5), an ideal choice is B such that

    BB^⊤ = (K_{nn} + λnI)^{−1}     (9)

and B^⊤(K_{nn} + λnI)B β = B^⊤ ŷ. Clearly, if β⋆ solves the latter problem, α⋆ = Bβ⋆ is a solution of problem (5). Using a preconditioner B as in (9), one iteration is sufficient, but computing B is
typically as hard as the original problem. The problem is to derive preconditioning such that (9) might
hold only approximately, but that can be computed efficiently. Derivation of efficient preconditioners
for the exact KRR problem (5) has been the subject of recent studies, [3, 4, 26, 5, 6]. In particular,
[4, 26, 5, 6] consider random projections to approximately compute a preconditioner. Clearly,
while preconditioning (5) leads to computational speed ups in terms of the number of iterations,
requirements in terms of memory/kernel evaluation are the same as standard kernel ridge regression.
The key idea to tackle this problem is to consider an efficient preconditioning approach for problem (8)
rather than (5).
Basic FALKON algorithm. We begin illustrating a basic version of our approach. The key
ingredient is the following preconditioner for Eq. (8):

    BB^⊤ = ( (n/M) K_{MM}² + λn K_{MM} )^{−1},     (10)
which is itself based on a Nyström approximation³. The above preconditioning is a natural approximation of the ideal preconditioning of problem (8), that is BB^⊤ = (K_{nM}^⊤ K_{nM} + λn K_{MM})^{−1}, and reduces to it if M = n. Our theoretical analysis shows that M ≪ n suffices for deriving optimal statistical rates. In its basic form FALKON is derived combining the above preconditioning and
gradient descent:

    f̂_{λ,M,t}(x) = Σ_{i=1}^M α_{t,i} K(x, x̃_i),   with α_t = B β_t,     (11)

and

    β_k = β_{k−1} − (τ/n) B^⊤ [ K_{nM}^⊤ (K_{nM}(Bβ_{k−1}) − ŷ) + λn K_{MM}(Bβ_{k−1}) ],     (12)

for t ∈ ℕ, β_0 = 0, 1 ≤ k ≤ t, and a suitably chosen τ. In practice, a refined version of FALKON is preferable, where a faster gradient iteration is used and additional care is taken in organizing computations.
FALKON. The actual version of FALKON we propose is Alg. 1 (see Sect. A, Alg. 2 for the complete
algorithm). It consists in solving the system B^⊤HB β = B^⊤z via conjugate gradient [25], since it is a fast gradient method and does not require to specify the step-size. Moreover, to compute B quickly, with reduced numerical errors, we consider the following strategy:

    B = (1/√M) T^{−1} A^{−1},   T = chol(K_{MM}),   A = chol( (1/M) T T^⊤ + λI ),     (13)

where chol() is the Cholesky decomposition (in Sect. A the strategy for non-invertible K_{MM}).
3
For the sake of simplicity, here we assume K_{MM} to be invertible and the Nyström centers selected with uniform sampling from the training set; see Sect. A and Alg. 2 in the appendix for the general algorithm.
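As a quick numerical sanity check (ours, not from the paper), one can verify that the preconditioner of Eq. (13) drastically reduces the condition number of the matrix H of Eq. (8); the Gaussian kernel, data sizes, and the small Cholesky jitter are placeholder choices:

```python
import numpy as np

def gauss(A, B):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

rng = np.random.default_rng(0)
n, M, lam = 200, 60, 1e-3
X = rng.normal(size=(n, 2))
C = X[:M]                                    # Nystrom centers (here: first M points)
KnM, KMM = gauss(X, C), gauss(C, C)
H = KnM.T @ KnM + lam * n * KMM              # matrix of the linear system (8)

# Eq. (13): B = (1/sqrt(M)) T^{-1} A^{-1}; a tiny jitter keeps chol stable
T = np.linalg.cholesky(KMM + 1e-10 * M * np.eye(M)).T   # upper triangular, T'T = KMM
A = np.linalg.cholesky(T @ T.T / M + lam * np.eye(M)).T
B = np.linalg.solve(T, np.linalg.solve(A, np.eye(M))) / np.sqrt(M)

cond_H = np.linalg.cond(H)
cond_P = np.linalg.cond(B.T @ H @ B)         # far smaller than cond_H
```
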
Algorithm 1 MATLAB code for FALKON. It requires O(nMt + M³) in time and O(M²) in memory. See Sect. A and Alg. 2 in the appendix for the complete algorithm.

Input: dataset X = (x_i)_{i=1}^n ∈ R^{n×D}, ŷ = (y_i)_{i=1}^n ∈ R^n, centers C = (x̃_j)_{j=1}^M ∈ R^{M×D}, KernelMatrix computing the kernel matrix given two sets of points, regularization parameter λ, number of iterations t.
Output: Nyström coefficients α.

function alpha = FALKON(X, C, Y, KernelMatrix, lambda, t)
    n = size(X,1); M = size(C,1); KMM = KernelMatrix(C,C);
    T = chol(KMM + eps*M*eye(M));
    A = chol(T*T'/M + lambda*eye(M));

    function w = KnM_times_vector(u, v)
        w = zeros(M,1); ms = ceil(linspace(0, n, ceil(n/M)+1));
        for i=1:ceil(n/M)
            Kr = KernelMatrix( X(ms(i)+1:ms(i+1),:), C );
            w = w + Kr'*(Kr*u + v(ms(i)+1:ms(i+1),:));
        end
    end

    BHB = @(u) A'\(T'\(KnM_times_vector(T\(A\u), zeros(n,1))/n) + lambda*(A\u));
    r = A'\(T'\KnM_times_vector(zeros(M,1), Y/n));
    alpha = T\(A\conjgrad(BHB, r, t));
end
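For readers without MATLAB, the following is a rough NumPy transliteration of Alg. 1 (an unofficial sketch; the block size, the jitter added to K_MM, and the plain conjugate gradient loop are our own choices):

```python
import numpy as np

def falkon(X, C, y, kernel, lam, t):
    """Sketch of Alg. 1: Nystrom + preconditioned conjugate gradient."""
    n, M = X.shape[0], C.shape[0]
    KMM = kernel(C, C)
    T = np.linalg.cholesky(KMM + 1e-10 * M * np.eye(M)).T   # upper, T'T = KMM
    A = np.linalg.cholesky(T @ T.T / M + lam * np.eye(M)).T

    def knm_times_vector(u, v, block=1000):
        # computes KnM' * (KnM*u + v), streaming over blocks of rows of KnM
        w = np.zeros(M)
        for i in range(0, n, block):
            Kb = kernel(X[i:i + block], C)
            w += Kb.T @ (Kb @ u + v[i:i + block])
        return w

    def bhb(u):                       # matvec with the preconditioned matrix B'HB
        au = np.linalg.solve(A, u)
        w = knm_times_vector(np.linalg.solve(T, au), np.zeros(n)) / n
        return np.linalg.solve(A.T, np.linalg.solve(T.T, w) + lam * au)

    def conjgrad(matvec, b, iters):   # plain conjugate gradient, no step-size tuning
        x = np.zeros_like(b)
        r = b.copy(); p = r.copy(); rs = r @ r
        for _ in range(iters):
            Ap = matvec(p)
            denom = p @ Ap
            if denom <= 0 or rs == 0:
                break
            step = rs / denom
            x += step * p
            r -= step * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    r = np.linalg.solve(A.T, np.linalg.solve(T.T, knm_times_vector(np.zeros(M), y / n)))
    beta = conjgrad(bhb, r, t)
    return np.linalg.solve(T, np.linalg.solve(A, beta))     # alpha = T \ (A \ beta)
```

Predictions on new points are then kernel(X_new, C) @ alpha, costing O(M) kernel evaluations per point.
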
Computations. In Alg. 1, B is never built explicitly, and A, T are two upper-triangular matrices, so A^{−⊤}u, A^{−1}u for a vector u cost M², and the same for T. The cost of computing the preconditioner is only (4/3)M³ floating point operations (consisting in two Cholesky decompositions and one product of two triangular matrices). Then FALKON requires O(nMt + M³) in time and the same O(M²) memory requirement of the basic Nyström method, if matrix/vector multiplications at each iteration are performed in blocks. This implies O(nMt) kernel evaluations are needed.
The question remains to characterize M and the number of iterations needed for good statistical accuracy. Indeed, in the next section we show that roughly O(n√n) computations and O(n) memory are sufficient for optimal accuracy. This implies that FALKON is currently the most efficient kernel method with the same optimal statistical accuracy of KRR; see Table 1.
4 Theoretical Analysis
In this section, we characterize the generalization properties of FALKON, showing it achieves the optimal generalization error of KRR with dramatically reduced computations. This result is given in Thm. 3 and derived in two steps. First, we study the difference between the excess risk of FALKON and that of the basic Nyström (8), showing it depends on the condition number induced by the preconditioning, hence on M (see Thm. 1). Deriving these results requires some care since, differently from standard optimization results, our goal is to solve (1), i.e. achieve small excess risk. Second, we show that choosing M = Õ(1/λ) allows to make this difference as small as e^{−t/2} (see Thm. 2). Finally, recalling that the basic Nyström for λ = 1/√n has essentially the same statistical properties of KRR [13], we answer the question posed at the end of the last section and show that roughly log n iterations are sufficient for optimal statistical accuracy. Following the discussion in the previous section, this means that the computational requirements for optimal accuracy are Õ(n√n) in time/kernel evaluations and Õ(n) in space. Later in this section, faster rates under further regularity assumptions are also derived, and the effect of different selection methods for the Nyström centers considered.
4.1 Main Result
The first result is interesting in its own right since it corresponds to translating optimization guarantees
into statistical results. In particular, we derive a relation between the excess risk of the FALKON algorithm f̂_{λ,M,t} from Alg. 1 and the Nyström estimator f̃_{λ,M} from Eq. (8) with uniform sampling.
Table 1: Computational complexity required by different algorithms, for optimal generalization. Logarithmic terms are not shown.

Algorithm                              | train time | kernel evaluations | memory | test time
SVM / KRR + direct method              | n³         | n²                 | n²     | n
KRR + iterative [1, 2]                 | n² ⁴√n     | n²                 | n²     | n
Doubly stochastic [22]                 | n² √n      | n² √n              | n      | n
Pegasos / KRR + sgd [27]               | n²         | n²                 | n      | n
KRR + iter + precond [3, 28, 4, 5, 6]  | n²         | n²                 | n      | n
Divide & Conquer [29]                  | n²         | n²                 | n      | n
Nyström, random features [7, 8, 9]     | n²         | n √n               | n      | √n
Nyström + iterative [23, 24]           | n²         | n √n               | n      | √n
Nyström + sgd [20]                     | n²         | n √n               | n      | √n
FALKON (see Thm. 3)                    | n √n       | n √n               | n      | √n
Theorem 1. Let $n, M \ge 3$, $t \in \mathbb{N}$, $0 < \lambda \le \lambda_1$ and $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that
$K(x, x) \le \kappa^2$ for any $x \in X$. Then, the following inequality holds with probability $1 - \delta$:
$$ R(\hat f_{\lambda,M,t})^{1/2} \;\le\; R(\tilde f_{\lambda,M})^{1/2} \;+\; 4\,\hat v\, e^{-\nu t}\sqrt{1 + \frac{9\kappa^2}{\lambda n}\log\frac{n}{\delta}}, $$
where $\hat v^2 = \frac{1}{n}\sum_{i=1}^n y_i^2$ and $\nu = \log\!\big(1 + 2/(\mathrm{cond}(B^\top H B)^{1/2} - 1)\big)$, with $\mathrm{cond}(B^\top H B)$ the
condition number of $B^\top H B$. Note that $\lambda_1 > 0$ is a constant not depending on $\lambda, n, M, \delta, t$.
The additive term in the bound above decreases exponentially in the number of iterations. If the
condition number of $B^\top H B$ is smaller than a small universal constant (e.g. 17), then $\nu > 1/2$ and
the additive term decreases as $e^{-t/2}$. The next theorems derive a condition on $M$ that allows to control
$\mathrm{cond}(B^\top H B)$, and hence derive such an exponential decay.
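To make the role of the condition number concrete, the following small sketch (illustrative only, not from the paper's code) evaluates the contraction exponent $\nu$ of Thm. 1 for a well-preconditioned and a badly conditioned system; the function name and the sample condition numbers are our own choices.

```python
import math

def contraction_exponent(cond_BHB):
    """nu = log(1 + 2 / (sqrt(cond(B^T H B)) - 1)), as in Thm. 1.

    The better the preconditioning, the smaller the condition number and the
    larger nu, so the additive term 4 * v * exp(-nu * t) dies off faster.
    """
    return math.log(1.0 + 2.0 / (math.sqrt(cond_BHB) - 1.0))

# A well-preconditioned system: condition number 16 gives nu > 1/2,
# so the additive error decays at least as fast as e^{-t/2}.
nu_good = contraction_exponent(16.0)

# A poorly conditioned system (e.g. no preconditioning): nu is tiny,
# and many more iterations are needed for the same error level.
nu_bad = contraction_exponent(1e6)

print(nu_good, nu_bad)
```

With this $\nu$, the number of iterations needed to push the additive term below a target scales like $1/\nu$, which is what preconditioning buys.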
Theorem 2. Under the same conditions of Thm. 1, if
$$ M \;\ge\; 5\left[1 + \frac{14\kappa^2}{\lambda}\right]\log\frac{8\kappa^2}{\lambda\delta}, $$
then the exponent $\nu$ in Thm. 1 satisfies $\nu \ge 1/2$.
The above result gives the desired exponential bound, showing that after $\log n$ iterations the excess
risk of FALKON is controlled by that of the basic Nyström, more precisely
$$ R(\hat f_{\lambda,M,t}) \le 2\,R(\tilde f_{\lambda,M}) \quad\text{when}\quad t \;\ge\; \log\frac{1}{R(\tilde f_{\lambda,M})} + \log\left(1 + \frac{9\kappa^2}{\lambda n}\log\frac{n}{\delta}\right) + \log\big(16\,\hat v^2\big). $$
Finally, we derive an excess risk bound for FALKON. By the no-free-lunch theorem, this requires
some conditions on the learning problem. We first consider a standard basic setting where we only
assume that there exists $f_H \in H$ such that $E(f_H) = \inf_{f \in H} E(f)$.
Theorem 3. Let $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that $K(x, x) \le \kappa^2$ for any $x \in X$, and
$y \in [-\frac{a}{2}, \frac{a}{2}]$ almost surely, $a > 0$. There exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$, if
$$ \lambda = \frac{1}{\sqrt{n}},\qquad M \ge 75\,\sqrt{n}\,\log\frac{48\kappa^2 n}{\delta},\qquad t \ge \frac{1}{2}\log(n) + 5 + 2\log(a + 3\kappa), $$
then with probability $1 - \delta$,
$$ R(\hat f_{\lambda,M,t}) \;\le\; \frac{c_0 \log^2\frac{24}{\delta}}{\sqrt{n}}. $$
In particular $n_0, c_0$ do not depend on $\lambda, M, n, t$, and $c_0$ does not depend on $\delta$.
The above result provides the desired bound, and all the constants are given in the appendix. The
obtained learning rate is the same as the full KRR estimator and is known to be optimal in a minimax
sense [17], hence not improvable. As mentioned before, the same bound is also achieved by the
basic Nyström method but with much worse time complexity. Indeed, as discussed before, using
a simple iterative solver typically requires $O(\sqrt{n}\log n)$ iterations, while we need only $O(\log n)$.
Considering the choice for $M$, this leads to a computational time of $O(nMt) = O(n\sqrt{n})$ for optimal
generalization (omitting logarithmic terms). To the best of our knowledge, FALKON currently
provides the best time/space complexity to achieve the statistical accuracy of KRR. Beyond the
basic setting considered above, in the next section we show that FALKON can achieve much faster
rates under refined regularity assumptions, and we also consider the potential benefits of leverage score
sampling.
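The scalings of Thm. 3 can be made tangible with a back-of-the-envelope cost comparison. The sketch below (our own illustration; constants of the theorem are dropped, so the numbers are only orders of magnitude) plugs in $\lambda = 1/\sqrt{n}$, $M \approx \sqrt{n}\log n$ and $t \approx \frac{1}{2}\log n$, and compares the $O(nMt)$ iteration cost against the $n^3$ cost of a direct KRR solve.

```python
import math

def falkon_params(n):
    """Parameter scalings suggested by Thm. 3, with constants dropped for
    illustration (the theorem uses M >= 75*sqrt(n)*log(48*kappa^2*n/delta)
    and t >= 0.5*log(n) + const)."""
    lam = 1.0 / math.sqrt(n)
    M = math.sqrt(n) * math.log(n)   # Nystrom centers: ~sqrt(n) up to logs
    t = 0.5 * math.log(n) + 5        # number of iterations: ~log(n)
    return lam, M, t

def falkon_flops(n):
    lam, M, t = falkon_params(n)
    return n * M * t                 # O(nMt) cost of the iterations

def krr_flops(n):
    return float(n) ** 3             # direct KRR solve

for n in (10**4, 10**6):
    speedup = krr_flops(n) / falkon_flops(n)
    print(f"n={n:>8}: FALKON ~{falkon_flops(n):.2e} flops, "
          f"KRR ~{krr_flops(n):.2e} flops, speedup ~{speedup:.1e}x")
```

At $n = 10^6$ the gap between $n\sqrt{n}$ (up to logs) and $n^3$ is already several orders of magnitude, which is the point of the theorem.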
4.2 Fast learning rates and Nyström with approximate leverage scores
Considering fast rates and Nyström with more general sampling is considerably more technical, and
heavier notation is needed. Our analysis applies to any approximation scheme (e.g. [30, 12, 31])
satisfying the definition of $q$-approximate leverage scores [13], i.e. $q^{-1} l_i(\lambda) \le \hat l_i(\lambda) \le q\, l_i(\lambda)$
for all $i \in \{1, \dots, n\}$. Here $\lambda > 0$, $l_i(\lambda) = (K_{nn}(K_{nn} + \lambda n I)^{-1})_{ii}$ are the leverage scores,
and $q \ge 1$ controls the quality of the approximation. In particular, given $\lambda$, the Nyström points are
sampled independently from the dataset with probability $p_i \propto \hat l_i(\lambda)$. We need a few more definitions.
Let $K_x = K(x, \cdot)$ for any $x \in X$, and let $H$ be the reproducing kernel Hilbert space [32] of functions
given by $H = \overline{\mathrm{span}}\{K_x \mid x \in X\}$, closed with respect to the inner product $\langle \cdot, \cdot \rangle_H$
defined by $\langle K_x, K_{x'} \rangle_H = K(x, x')$ for all $x, x' \in X$. Define $C : H \to H$ to be the linear operator
$\langle f, Cg \rangle_H = \int_X f(x) g(x)\, d\rho_X(x)$ for all $f, g \in H$. Finally, define the following quantities:
$$ \mathcal{N}_\infty(\lambda) = \sup_{x \in X} \|(C + \lambda I)^{-1/2} K_x\|_H, \qquad \mathcal{N}(\lambda) = \mathrm{Tr}\big(C(C + \lambda I)^{-1}\big). $$
The latter quantity is known as the degrees of freedom or effective dimension, and can be seen as a measure
of the size of $H$. The quantity $\mathcal{N}_\infty(\lambda)$ can be seen to provide a uniform bound on the leverage scores.
In particular, note that $\mathcal{N}(\lambda) \le \mathcal{N}_\infty(\lambda) \le \kappa^2/\lambda$ [13]. We can now provide a refined version of Thm. 2.
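The empirical counterparts of these quantities are easy to compute on a toy kernel matrix. The following pure-Python sketch (our own illustration, with a naive Gauss-Jordan inverse that is only meant for tiny matrices) evaluates the leverage scores $l_i(\lambda) = (K_{nn}(K_{nn} + \lambda n I)^{-1})_{ii}$ and their sum, the empirical effective dimension $\mathrm{Tr}(K_{nn}(K_{nn} + \lambda n I)^{-1})$.

```python
def mat_inv(A):
    """Gauss-Jordan inverse for small dense matrices (illustration only)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def leverage_scores(K, lam):
    """l_i(lam) = (K (K + lam*n*I)^{-1})_{ii}, K the n x n kernel matrix."""
    n = len(K)
    reg = [[K[i][j] + (lam * n if i == j else 0.0) for j in range(n)]
           for i in range(n)]
    Rinv = mat_inv(reg)
    # Diagonal of K @ Rinv.
    return [sum(K[i][k] * Rinv[k][i] for k in range(n)) for i in range(n)]

# Toy 3x3 symmetric PSD kernel matrix and a regularization level.
K = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
lam = 0.1
l = leverage_scores(K, lam)
# Their sum is the empirical effective dimension, the sample analogue
# of N(lam) = Tr(C (C + lam I)^{-1}); each score lies in (0, 1).
print(l, sum(l))
```

Sampling Nyström centers with probabilities proportional to (approximations of) these scores is exactly the $q$-approximate leverage score scheme referenced above.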
Theorem 4. Under the same conditions of Thm. 1, the exponent $\nu$ in Thm. 1 satisfies $\nu \ge 1/2$ when
1. either Nyström uniform sampling is used with $M \ge 70\,[1 + \mathcal{N}_\infty(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$;
2. or Nyström $q$-approx. lev. scores [13] are used, with $\lambda \ge \frac{19\kappa^2}{n}\log\frac{n}{2\delta}$, $n \ge 405\kappa^2\log\frac{12\kappa^2}{\delta}$, and
$$ M \;\ge\; 215\,\big[2 + q^2 \mathcal{N}(\lambda)\big]\log\frac{8\kappa^2}{\lambda\delta}. $$
We then recall the standard, albeit technical, assumptions leading to fast rates [17, 18]. The capacity
condition requires the existence of $\gamma \in (0, 1]$ and $Q \ge 0$ such that $\mathcal{N}(\lambda) \le Q^2 \lambda^{-\gamma}$. Note that this
condition is always satisfied with $Q = \kappa$ and $\gamma = 1$. The source condition requires the existence
of $r \in [1/2, 1]$ and $g \in H$ such that $f_H = C^{r - 1/2} g$. Intuitively, the capacity condition measures
the size of $H$: if $\gamma$ is small then $H$ is small and rates are faster. The source condition measures the
regularity of $f_H$: if $r$ is big, $f_H$ is regular and rates are faster. The case $r = 1/2$ and $\gamma = D/(2s)$ (for
a kernel with smoothness $s$ and input space $\mathbb{R}^D$) recovers the classic Sobolev condition. For further
discussions on the interpretation of the conditions above see [17, 18, 11, 13]. We can then state our
main result on fast rates.
Theorem 5. Let $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that $K(x, x) \le \kappa^2$ for any $x \in X$,
and $y \in [-\frac{a}{2}, \frac{a}{2}]$ almost surely, with $a > 0$. There exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ the
following holds. When
$$ \lambda = n^{-\frac{1}{2r+\gamma}}, \qquad t \ge \log(n) + 5 + 2\log(a + 3\kappa^2), $$
1. and either Nyström uniform sampling is used with $M \ge 70\,[1 + \mathcal{N}_\infty(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$,
2. or Nyström $q$-approx. lev. scores [13] is used with $M \ge 220\,[2 + q^2\mathcal{N}(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$,
then with probability $1 - \delta$,
$$ R(\hat f_{\lambda,M,t}) \;\le\; c_0 \log^2\frac{24}{\delta}\; n^{-\frac{2r}{2r+\gamma}}, $$
where $\hat f_{\lambda,M,t}$ is the FALKON estimator (Sect. 3, Alg. 1 and Sect. A, Alg. 2 in the appendix for the
complete version). In particular $n_0, c_0$ do not depend on $\lambda, M, n, t$, and $c_0$ does not depend on $\delta$.
The above result shows that FALKON achieves the same fast rates as KRR, under the same conditions
[17]. For $r = 1/2$, $\gamma = 1$, the rate in Thm. 3 is recovered. If $\gamma < 1$, $r > 1/2$, FALKON achieves a
rate close to $O(1/n)$. By selecting the Nyström points with uniform sampling, a bigger $M$ could be
needed for fast rates (albeit always less than $n$). However, when approximate leverage scores are used,
an $M$ smaller than $\sqrt{n}$ is always enough for optimal generalization. This shows that FALKON
with approximate leverage scores is the first algorithm to achieve fast rates with a computational
complexity that is $O(n\mathcal{N}(\lambda)) = O(n^{1 + \frac{\gamma}{2r+\gamma}}) \le O(n^{1 + \frac{\gamma}{2}})$ in time.
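The interplay between the capacity parameter $\gamma$, the source parameter $r$, and the resulting rate can be checked numerically. The sketch below (our own illustration; constants are omitted and $Q = 1$ is assumed) evaluates the scalings of Thm. 5: $\lambda = n^{-1/(2r+\gamma)}$, the effective dimension bound $\lambda^{-\gamma}$, and the excess-risk rate $n^{-2r/(2r+\gamma)}$.

```python
import math

def fast_rate_setup(n, r=0.5, gamma=1.0):
    """Scalings from Thm. 5, constants dropped for illustration:
    lambda = n^{-1/(2r+gamma)}, N(lambda) <= Q^2 lambda^{-gamma} (Q = 1),
    and excess-risk rate n^{-2r/(2r+gamma)}."""
    assert 0.5 <= r <= 1.0 and 0.0 < gamma <= 1.0
    lam = n ** (-1.0 / (2 * r + gamma))
    eff_dim = lam ** (-gamma)               # proxy for N(lambda), hence for M
    rate = n ** (-2.0 * r / (2 * r + gamma))
    return lam, eff_dim, rate

n = 10**6
# Basic setting (r = 1/2, gamma = 1) recovers the n^{-1/2} rate of Thm. 3.
_, _, basic = fast_rate_setup(n, r=0.5, gamma=1.0)
# A regular problem on a small space gets close to the O(1/n) rate,
# with an effective dimension (hence M) far below sqrt(n).
_, eff, fast = fast_rate_setup(n, r=1.0, gamma=0.25)
print(basic, fast, eff)
```

Note how in the favorable regime the required number of centers stays well below $\sqrt{n}$, which is what makes leverage-score sampling attractive.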
Main steps and novelties in the proof. The proof is long and technical, and uses a variety of tools
developed to analyze KRR. Our starting point is the analysis of the basic Nyström estimator given
in [13]. The key novelty is the quantification of the approximations induced by the preconditioned
iterative solver, obtained by relating its excess risk to that of the basic Nyström estimator.
A computational oracle inequality. First, we prove that FALKON is equal to the exact Nyström
estimator as the number of iterations goes to infinity (Lemma 5). Then, in Lemma 8 (see also Lemmas 6, 7) we
show how optimization guarantees can be used to derive statistical results. More precisely, while
optimization results in machine learning typically provide guarantees on empirical minimization
problems, we show, using analytic and probabilistic tools, how these results can be turned into
guarantees on the expected risk. Finally, in the proof of Thm. 1 we concentrate the terms of the
inequality. The other key point is the study of the behavior of the condition number of $B^\top H B$ with
$B$ given in (10).
Controlling the condition number of $B^\top H B$. Let $C_n, C_M$ be the empirical correlation operators in $H$
associated respectively to the training set and to the Nyström points, $C_n = \frac{1}{n}\sum_{i=1}^n K_{x_i} \otimes K_{x_i}$,
$C_M = \frac{1}{M}\sum_{j=1}^M K_{\tilde x_j} \otimes K_{\tilde x_j}$. In Lemma 1 we prove that $B^\top H B$ is equivalent to
$A^{-\top} V^* (C_n + \lambda I) V A^{-1}$ for a suitable partial isometry $V$. Then in Lemma 2 we split it in two components
$$ B^\top H B \;=\; A^{-\top} V^* (C_M + \lambda I) V A^{-1} \;+\; A^{-\top} V^* (C_n - C_M) V A^{-1}, \qquad (14) $$
and prove that the first component is just the identity matrix. Denoting the second component by
$E$, Eq. (14) implies that the condition number of $B^\top H B$ is bounded by $(1 + \|E\|)/(1 - \|E\|)$ when
$\|E\| < 1$. In Lemma 3 we prove that $\|E\|$ is analytically bounded by a suitable distance between
$C_n$ and $C_M$, and in Lemmas 9, 10 we bound such distance in probability, when the Nyström centers are
selected uniformly at random and with approximate leverage scores. Finally, in Lemmas 11, 12 we
give a condition on $M$ for the two kinds of sampling such that the condition number is controlled and
the error term in the oracle inequality decays as $e^{-t/2}$, leading to Thms. 2 and 4.
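The bound $(1+\|E\|)/(1-\|E\|)$ on the condition number of $I + E$ can be verified on a small symmetric example. The sketch below (our own illustration, with a hypothetical perturbation $E$; eigenvalues of a symmetric $2\times 2$ matrix are computed in closed form) checks that the actual condition number never exceeds the bound.

```python
import math

def sym2x2_eigs(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean - d, mean + d

def cond_bound_demo(e11, e12, e22):
    """Toy check of the step behind Eq. (14): write B^T H B = I + E and
    bound its condition number by (1 + ||E||) / (1 - ||E||) for ||E|| < 1."""
    lo_E, hi_E = sym2x2_eigs(e11, e12, e22)
    norm_E = max(abs(lo_E), abs(hi_E))       # spectral norm of symmetric E
    assert norm_E < 1.0
    lo, hi = sym2x2_eigs(1.0 + e11, e12, 1.0 + e22)   # eigenvalues of I + E
    cond = hi / lo
    bound = (1.0 + norm_E) / (1.0 - norm_E)
    return cond, bound

cond, bound = cond_bound_demo(0.2, 0.1, -0.1)
print(cond, bound)
```

Since a small $\|E\|$ keeps the bound close to 1, controlling $\|C_n - C_M\|$ through the choice of $M$ is exactly what keeps the iteration count at $O(\log n)$.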
5 Experiments
We present FALKON's performance on a range of large scale datasets. As shown in Tables 2 and 3,
FALKON achieves state of the art accuracy and typically outperforms previous approaches on all the
considered large scale datasets, including IMAGENET. This is remarkable considering that FALKON
required only a fraction of the competitors' computational resources. Indeed, we used a single machine
equipped with two Intel Xeon E5-2630 v3, one NVIDIA Tesla K40c and 128 GB of RAM, and a
basic MATLAB FALKON implementation, while the results for competing algorithms have typically
been obtained on clusters of GPU workstations (accuracies, times and used architectures are cited
from the corresponding papers).
A minimal MATLAB implementation of FALKON is presented in Appendix G. The code necessary
to reproduce the following experiments, plus a FALKON version that is able to use the GPU, is
available on GitHub at https://github.com/LCSL/FALKON_paper . The error is measured with
MSE, RMSE or relative error for regression problems, and with classification error (c-err) or AUC
for the classification problems, to be consistent with the literature. For datasets which do not have a
fixed test set, we set apart 20% of the data for testing. For all datasets, but YELP and IMAGENET,
we normalize the features by their z-score. From now on we denote with n the cardinality of the
dataset, d the dimensionality.
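For readers who want a feel for the estimator being benchmarked, the following pure-Python toy (our own sketch, not the paper's MATLAB code) fits the basic Nyström estimator of Eq. (8) on a 1-D regression problem: the coefficients solve $(K_{nM}^\top K_{nM} + \lambda n K_{MM})\,\alpha = K_{nM}^\top y$, and the predictor is $f(x) = \sum_j \alpha_j K(\tilde x_j, x)$. FALKON solves this same linear system, but with the preconditioned conjugate gradient method of Sect. 3 instead of the direct solve used here for simplicity.

```python
import math, random

def gaussian_kernel(a, b, sigma=1.0):
    return math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (toy sizes only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def nystrom_krr(xs, ys, centers, lam, sigma=1.0):
    """alpha solves (Knm^T Knm + lam*n*Kmm) alpha = Knm^T y; the predictor
    is f(x) = sum_j alpha_j K(center_j, x)."""
    n, m = len(xs), len(centers)
    Knm = [[gaussian_kernel(x, c, sigma) for c in centers] for x in xs]
    Kmm = [[gaussian_kernel(ci, cj, sigma) for cj in centers] for ci in centers]
    A = [[sum(Knm[k][i] * Knm[k][j] for k in range(n)) + lam * n * Kmm[i][j]
          for j in range(m)] for i in range(m)]
    b = [sum(Knm[k][i] * ys[k] for k in range(n)) for i in range(m)]
    alpha = solve(A, b)
    return lambda x: sum(a * gaussian_kernel(c, x, sigma)
                         for a, c in zip(alpha, centers))

random.seed(0)
xs = [random.uniform(-3, 3) for _ in range(200)]
ys = [math.sin(x) for x in xs]
centers = xs[:15]                    # ~sqrt(n) uniformly sampled centers
f = nystrom_krr(xs, ys, centers, lam=1e-6)
print(f(0.5), math.sin(0.5))
```

Swapping the direct `solve` for the Cholesky-preconditioned conjugate gradient of Alg. 1 is what turns this toy into FALKON proper.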
MillionSongs [36] (Table 2, $n = 4.6 \times 10^5$, $d = 90$, regression). We used a Gaussian kernel with
$\sigma = 6$, $\lambda = 10^{-6}$ and $10^4$ Nyström centers. Moreover, with $5 \times 10^4$ centers, FALKON achieves a
79.20 MSE and $4.49 \times 10^{-3}$ rel. error in 630 sec.
TIMIT (Table 2, $n = 1.2 \times 10^6$, $d = 440$, multiclass classification). We used the same preprocessed
dataset as [6] and a Gaussian kernel with $\sigma = 15$, $\lambda = 10^{-9}$ and $10^5$ Nyström centers.
Table 2: Architectures: cluster of 128 EC2 r3.2xlarge machines; cluster of 8 EC2 r3.8xlarge machines; single
machine with two Intel Xeon E5-2620, one Nvidia GTX Titan X GPU, 128 GB RAM; cluster
with IBM POWER8 12-core processor, 512 GB RAM; unknown platform.
Algorithm | MillionSongs MSE | MillionSongs Relative error | MillionSongs Time(s) | YELP RMSE | YELP Time(m) | TIMIT c-err | TIMIT Time(h)
FALKON | 80.10 | 4.51 x 10^-3 | 55 | 0.833 | 20 | 32.3% | 1.5
Prec. KRR [4] | - | 4.58 x 10^-3 | 289 | - | - | - | -
Hierarchical [33] | - | 4.56 x 10^-3 | 293 | - | - | - | -
D&C [29] | 80.35 | - | 737 | - | - | - | -
Rand. Feat. [29] | 80.93 | - | 772 | - | - | - | -
Nyström [29] | 80.38 | - | 876 | - | - | - | -
ADMM R. F. [4] | - | 5.01 x 10^-3 | 958 | - | - | - | -
BCD R. F. [24] | - | - | - | 0.949 | 42 | 34.0% | 1.7
BCD Nyström [24] | - | - | - | 0.861 | 60 | 33.7% | 1.7
EigenPro [6] | - | - | - | - | - | 32.6% | 3.9
KRR [33] [24] | - | 4.55 x 10^-3 | - | 0.854 | 500 | 33.5% | 8.3
Deep NN [34] | - | - | - | - | - | 32.4% | -
Sparse Kernels [34] | - | - | - | - | - | 30.9% | -
Ensemble [35] | - | - | - | - | - | 33.5% | -
YELP (Table 2, $n = 1.5 \times 10^6$, $d = 6.52 \times 10^7$, regression). We used the same dataset as [24]. We
extracted the 3-grams from the plain text with the same pipeline as [24], then mapped them into a
sparse binary vector which records whether the 3-gram is present in the example. We used a linear
kernel with $5 \times 10^4$ Nyström centers. With $10^5$ centers, we get an RMSE of 0.828 in 50 minutes.
Table 3: Architectures: cluster with IBM POWER8 12-core cpu, 512 GB RAM; single machine
with two Intel Xeon E5-2620, one Nvidia GTX Titan X GPU, 128 GB RAM; single machine [37]
Algorithm | SUSY c-err | SUSY AUC | SUSY Time(m) | HIGGS AUC | HIGGS Time(h) | IMAGENET c-err | IMAGENET Time(h)
FALKON | 19.6% | 0.877 | 4 | 0.833 | 3 | 20.7% | 4
EigenPro [6] | 19.8% | - | 6 | - | - | - | -
Hierarchical [33] | 20.1% | - | 40 | - | - | - | -
Boosted Decision Tree [38] | - | 0.863 | - | 0.810 | - | - | -
Neural Network [38] | - | 0.875 | - | 0.816 | - | - | -
Deep Neural Network [38] | - | 0.879 | 4680 | 0.885 | 78 | - | -
Inception-V4 [39] | - | - | - | - | - | 20.0% | -
SUSY (Table 3, $n = 5 \times 10^6$, $d = 18$, binary classification). We used a Gaussian kernel with $\sigma = 4$,
$\lambda = 10^{-6}$ and $10^4$ Nyström centers.
HIGGS (Table 3, $n = 1.1 \times 10^7$, $d = 28$, binary classification). Each feature has been normalized
by subtracting its mean and dividing by its variance. We used a Gaussian kernel with diagonal-matrix
width learned with cross validation on a small validation set, $\lambda = 10^{-8}$ and $10^5$ Nyström centers. If
we use a single $\sigma = 5$ we reach an AUC of 0.825.
IMAGENET (Table 3, $n = 1.3 \times 10^6$, $d = 1536$, multiclass classification). We report the top-1
c-err over the validation set of ILSVRC 2012 with a single crop. The features are obtained from
the convolutional layers of pre-trained Inception-V4 [39]. We used a Gaussian kernel with $\sigma = 19$,
$\lambda = 10^{-9}$ and $5 \times 10^4$ Nyström centers. Note that with a linear kernel we achieve c-err = 22.2%.
Acknowledgments.
The authors would like to thank Mikhail Belkin, Benjamin Recht and Siyuan Ma, Eric Fosler-Lussier, Shivaram
Venkataraman, Stephen L. Tu, for providing their features of the TIMIT and YELP datasets, and NVIDIA
Corporation for the donation of the Tesla K40c GPU used for this research. This work is funded by the Air Force
project FA9550-17-1-0390 (European Office of Aerospace Research and Development) and by the FIRB project
RBFR12M3AC (Italian Ministry of Education, University and Research).
References
[1] A. Caponnetto and Yuan Yao. Adaptive rates for regularization operators in learning theory. Analysis and Applications, 08, 2010.
[2] L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20(7):1873–1897, 2008.
[3] Gregory E Fasshauer and Michael J McCourt. Stable evaluation of gaussian radial basis function interpolants. SIAM Journal on Scientific Computing, 34(2):A737–A762, 2012.
[4] Haim Avron, Kenneth L Clarkson, and David P Woodruff. Faster kernel ridge regression using sketching and preconditioning. arXiv preprint arXiv:1611.03220, 2016.
[5] Alon Gonen, Francesco Orabona, and Shai Shalev-Shwartz. Solving ridge regression using sketched preconditioned svrg. arXiv preprint arXiv:1602.02350, 2016.
[6] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. arXiv preprint arXiv:1703.10622, 2017.
[7] Christopher Williams and Matthias Seeger. Using the Nyström Method to Speed Up Kernel Machines. In NIPS, pages 682–688. MIT Press, 2000.
[8] Alex J. Smola and Bernhard Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In ICML, pages 911–918. Morgan Kaufmann, 2000.
[9] Ali Rahimi and Benjamin Recht. Random Features for Large-Scale Kernel Machines. In NIPS, pages 1177–1184. Curran Associates, Inc., 2007.
[10] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems, pages 1313–1320, 2009.
[11] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, volume 30 of JMLR Proceedings, pages 185–209. JMLR.org, 2013.
[12] Ahmed Alaoui and Michael W Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems 28, pages 775–783. 2015.
[13] Alessandro Rudi, Raffaello Camoriano, and Lorenzo Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1648–1656, 2015.
[14] Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. arXiv preprint arXiv:1602.04474, 2016.
[15] Francis Bach. On the equivalence between kernel quadrature rules and random feature expansions. Journal of Machine Learning Research, 18(21):1–38, 2017.
[16] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2002.
[17] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[18] Ingo Steinwart, Don R Hush, Clint Scovel, et al. Optimal rates for regularized least squares regression. In COLT, 2009.
[19] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. Journal of Complexity, 23(1):52–72, 2007.
[20] Aymeric Dieuleveut and Francis Bach. Non-parametric stochastic approximation with large step sizes. arXiv preprint arXiv:1408.0361, 2014.
[21] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
[22] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[23] Raffaello Camoriano, Tomás Angles, Alessandro Rudi, and Lorenzo Rosasco. Nytro: When subsampling meets early stopping. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1403–1411, 2016.
[24] Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, and Benjamin Recht. Large scale kernel learning using block coordinate descent. arXiv preprint arXiv:1602.05310, 2016.
[25] Yousef Saad. Iterative methods for sparse linear systems. SIAM, 2003.
[26] Kurt Cutajar, Michael Osborne, John Cunningham, and Maurizio Filippone. Preconditioning kernel matrices. In International Conference on Machine Learning, pages 2529–2538, 2016.
[27] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for svm. Mathematical Programming, 127(1):3–30, 2011.
[28] Yun Yang, Mert Pilanci, and Martin J Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. arXiv preprint arXiv:1501.06195, 2015.
[29] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Divide and Conquer Kernel Ridge Regression. In COLT, volume 30 of JMLR Proceedings, pages 592–617. JMLR.org, 2013.
[30] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. JMLR, 13:3475–3506, 2012.
[31] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform Sampling for Matrix Approximation. In ITCS, pages 181–190. ACM, 2015.
[32] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer New York, 2008.
[33] Jie Chen, Haim Avron, and Vikas Sindhwani. Hierarchically compositional kernels for scalable nonparametric learning. CoRR, abs/1608.00860, 2016.
[34] Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurelien Bellet, Linxi Fan, Michael Collins, Daniel J. Hsu, Brian Kingsbury, Michael Picheny, and Fei Sha. Kernel approximation methods for speech recognition. CoRR, abs/1701.03577, 2017.
[35] Po-Sen Huang, Haim Avron, Tara N. Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on timit. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 205–209, 2014.
[36] Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In ISMIR, 2011.
[37] Alexandre Alves. Stacking machine learning classifiers to identify higgs bosons at the lhc. CoRR, abs/1612.07725, 2016.
[38] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5, 2014.
[39] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. pages 4278–4284, 2017.
[40] Michael Reed and Barry Simon. Methods of Modern Mathematical Physics: Vol. 1: Functional Analysis. Academic Press, 1980.
[41] Ernesto D Vito, Lorenzo Rosasco, Andrea Caponnetto, Umberto D Giovannini, and Francesca Odone. Learning from examples as an inverse problem. In Journal of Machine Learning Research, pages 883–904, 2005.
[42] Alessandro Rudi, Guillermo D Canas, and Lorenzo Rosasco. On the Sample Complexity of Subspace Learning. In NIPS, pages 2067–2075, 2013.
[43] Stéphane Boucheron, Gábor Lugosi, and Olivier Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning. 2004.
Zhijie Deng? , 2,3 Hao Zhang? , 2 Xiaodan Liang, 2 Luona Yang,
1,2
Shizhen Xu, 1 Jun Zhu? , 3 Eric P. Xing
1
Tsinghua University, 2 Carnegie Mellon University, 3 Petuum Inc.
{dzj17,xsz12}@mails.tsinghua.edu.cn, {hao,xiaodan1,luonay1}@cs.cmu.edu,
[email protected], [email protected]
1
Abstract
We study the problem of conditional generative modeling based on designated
semantics or structures. Existing models that build conditional generators either
require massive labeled instances as supervision or are unable to accurately control
the semantics of generated samples. We propose structured generative adversarial
networks (SGANs) for semi-supervised conditional generative modeling. SGAN
assumes the data x is generated conditioned on two independent latent variables:
y that encodes the designated semantics, and z that contains other factors of
variation. To ensure disentangled semantics in y and z, SGAN builds two collaborative games in the hidden space to minimize the reconstruction error of y
and z, respectively. Training SGAN also involves solving two adversarial games
that have their equilibrium concentrating at the true joint data distributions p(x, z)
and p(x, y), avoiding distributing the probability mass diffusely over data space
that MLE-based methods may suffer. We assess SGAN by evaluating its trained
networks, and its performance on downstream tasks. We show that SGAN delivers
a highly controllable generator, and disentangled representations; it also establishes
start-of-the-art results across multiple datasets when applied for semi-supervised
image classification (1.27%, 5.73%, 17.26% error rates on MNIST, SVHN and
CIFAR-10 using 50, 1000 and 4000 labels, respectively). Benefiting from the
separate modeling of y and z, SGAN can generate images with high visual quality
and strictly following the designated semantic, and can be extended to a wide
spectrum of applications, such as style transfer.
1
Introduction
Deep generative models (DGMs) [12, 8, 26] have gained considerable research interest recently
because of their high capacity of modeling complex data distributions and ease of training or inference.
Among various DGMs, variational autoencoders (VAEs) and generative adversarial networks (GANs)
can be trained unsupervisedly to map a random noise z ∼ N(0, 1) to the data distribution p(x), and
have reported remarkable successes in many domains including image/text generation [17, 9, 3, 27],
representation learning [27, 4], and posterior inference [12, 5]. They have also been extended to
model the conditional distribution p(x|y), which involves training a neural network generator G that
takes as inputs both the random noise z and a condition y, and generates samples that have desired
properties specified by y. Obtaining such a conditional generator would be quite helpful for a wide
spectrum of downstream applications, such as classification, where synthetic data from G can be used
to augment the training set. However, training conditional generator is inherently difficult, because it
requires not only a holistic characterization of the data distribution, but also fine-grained alignments
between different modes of the distribution and different conditions. Previous works have tackled this
problem by using a large amount of labeled data to guide the generator?s learning [32, 23, 25], which
compromises the generator?s usefulness because obtaining the label information might be expensive.
* indicates equal contributions. † indicates the corresponding author. 31st Conference on Neural Information
Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we investigate the problem of building conditional generative models under semisupervised settings, where we have access to only a small set of labeled data. The existing works [11,
15] have explored this direction based on DGMs, but the resulted conditional generators exhibit
inadequate controllability, which we define as the generator?s ability to conditionally generate samples
that have structures strictly agreeing with those specified by the condition ? a more controllable
generator can better capture and respect the semantics of the condition.
When supervision from labeled data is scarce, the controllability of a generative model is usually
influenced by its ability to disentangle the designated semantics from other factors of variations
(which we will term as disentanglability in the following text). In other words, the model has to
first learn from a small set of labeled data what semantics or structures the condition y is essentially
representing by trying to recognize y in the latent space. As a second step, when performing
conditional generation, the semantics shall be exclusively captured and governed within y but not
interweaved with other factors. Following this intuition, we build the structured generative adversarial
network (SGAN) with enhanced controllability and disentanglability for semi-supervised generative
modeling. SGAN separates the hidden space to two parts y and z, and learns a more structured
generator distribution p(x|y, z) ? where the data are generated conditioned on two latent variables:
y, which encodes the designated semantics, and z that contains other factors of variation. To impose
the aforementioned exclusiveness constraint, SGAN first introduces two dedicated inference networks
C and I to map x back to the hidden space as C : x ? y, I : x ? z, respectively. Then, SGAN
enforces G to generate samples that when being mapped back to hidden space using C (or I), the
inferred latent code and the generator condition are always matched, regardless of the variations
of the other variable z (or y). To train SGAN, we draw inspirations from the recently proposed
adversarially learned inference framework (ALI) [5], and build two adversarial games to drive I, G to
match the true joint distributions p(x, z), and C, G to match the true joint distribution p(x, y). Thus,
SGAN can be seen as a combination of two adversarial games and two collaborative games, where
I, G combat each other to match joint distributions in the visible space, but I, C, G collaborate with
each other to minimize a reconstruction error in the hidden space. We theoretically show that SGAN
will converge to desired equilibrium if trained properly.
To empirically evaluate SGAN, we first define a mutual predictability (MP) measure to evaluate the
disentanglability of various DGMs, and show that in terms of MP, SGAN outperforms all existing
models that are able to infer the latent code z across multiple image datasets. When classifying
the generated images using a golden classifier, SGAN achieves the highest accuracy, confirming
its improved controllability for conditional generation under semi-supervised settings. In the semisupervised image classification task, SGAN outperforms strong baselines, and establishes new
state-of-the-art results on MNIST, SVHN and CIFAR-10 dataset. For controllable generation, SGAN
can generate images with high visual quality in terms of both visual comparison and inception score,
thanks to the disentangled latent space modeling. As SGAN is able to infer the unstructured code z,
we further apply SGAN for style transfer, and obtain impressive results.
2
Related Work
DGMs have drawn increasing interest from the community, and have been developed mainly toward
two directions: VAE-based models [12, 11, 32] that learn the data distribution via maximum likelihood
estimation (MLE), and GAN-based methods [19, 27, 21] that train a generator via adversarial learning.
SGAN combines the best of MLE-based methods and GAN-based methods which we will discuss
in detail in the next section. DGMs have also been applied for conditional generation, such as
CGAN [19], CVAE [11]. DisVAE [32] is a successful extension of CVAE that generates images
conditioned on text attributes. In parallel, CGAN has been developed to generate images conditioned
on text [24, 23], bounding boxes, key points [25], locations [24], other images [10, 6, 31], or generate
text conditioned on images [17]. All these models are trained using fully labeled data.
A variety of techniques have been developed toward learning disentangled representations for generative modeling [3, 29]. InfoGAN [3] disentangles hidden dimensions on unlabeled data by mutual
information regularization. However, the semantic of each disentangled dimension is uncontrollable
because it is discovered after training rather than designated by user modeling. We establish some
connections between SGAN and InfoGAN in the next section.
There is also interest in developing DGMs for semi-supervised conditional generation, such as semisupervised CVAE [11], its many variants [16, 9, 18], ALI [5] and TripleGAN [15], among which
the closest to us are [15, 9]. In [9], VAE is enhanced with a discriminator loss and an independency
constraint, and trained via joint MLE and discriminator loss minimization. By contrast, SGAN is an
adversarial framework that is trained to match two joint distributions in the visible space, thus avoids
MLE for visible variables. TripleGAN builds a three-player adversarial game to drive the generator to
match the conditional distribution p(x|y), while SGAN models the conditional distribution p(x|y, z)
instead. TripleGAN therefore lacks constraints to ensure the semantics of interest to be exclusively
captured by y, and lacks a mechanism to perform posterior inference for z.
3
Structured Generative Adversarial Networks (SGAN)
We build our model based on the generative adversarial networks (GANs) [8], a framework for
learning DGMs using a two-player adversarial game. Specifically, given observed data {x_i}_{i=1}^N,
GANs try to estimate a generator distribution pg (x) to match the true data distribution pdata (x),
where pg (x) is modeled as a neural network G that transforms a noise variable z ∼ N(0, 1)
into generated data x̂ = G(z). GANs assess the quality of x̂ by introducing a neural network
discriminator D to judge whether a sample is from pdata (x) or the generator distribution pg (x). D
is trained to distinguish generated samples from true samples while G is trained to fool D:
min_G max_D L(D, G) = E_{x∼pdata(x)} [log(D(x))] + E_{z∼p(z)} [log(1 − D(G(z)))],
Goodfellow et al. [8] show the global optimum of the above problem is attained at pg = pdata . It is
noted that the original GAN models the latent space using a single unstructured noise variable z. The
semantics and structures that may be of our interest are entangled in z, and the generator transforms
z into x̂ in a highly uncontrollable way: it lacks both disentanglability and controllability.
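To make the stated global optimum concrete, here is a small self-contained check (our illustration, not code from the paper): for a fixed generator over a discrete domain, plugging the optimal discriminator D*(x) = pdata(x)/(pdata(x)+pg(x)) into the value function gives −log 4 exactly when pg = pdata, and a strictly larger value otherwise.

```python
import math

def value_at_optimal_D(p_data, p_g):
    """GAN value function V(G) evaluated at the optimal discriminator
    D*(x) = p_data(x) / (p_data(x) + p_g(x)), for discrete distributions."""
    v = 0.0
    for x in p_data:
        d = p_data[x] / (p_data[x] + p_g[x])
        v += p_data[x] * math.log(d) + p_g[x] * math.log(1.0 - d)
    return v

p_data = {0: 0.5, 1: 0.5}
# At the global optimum p_g = p_data the value is exactly -log 4 ...
assert abs(value_at_optimal_D(p_data, p_data) + math.log(4.0)) < 1e-12
# ... and any mismatched generator gives a strictly larger value,
# since V(G) = -log 4 + 2 * JSD(p_data || p_g).
assert value_at_optimal_D(p_data, {0: 0.9, 1: 0.1}) > -math.log(4.0)
```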
We next describe SGAN, a generic extension to GANs that is enhanced with improved disentanglability and controllability for semi-supervised conditional generative modeling.
Overview. We consider a semi-supervised setting, where we observe a large set of unlabeled data
X = {x_i}_{i=1}^N. We are interested in both the observed sample x and some hidden structures y of
x, and want to build a conditional generator that can generate data x̂ that matches the true data
distribution of x, while obeying the structures specified in y (e.g. generate pictures of digits given 0-9).
Besides the unlabeled x, we also have access to a small chunk of data Xl = {x_j^l, y_j^l}_{j=1}^M where
the structure y is jointly observed. Therefore, our model needs to characterize the joint distribution
p(x, y) instead of the marginal p(x), for both fully and partially observed x.
As the data generation process is intrinsically complex and usually determined by many factors
beyond y, it is necessary to consider other factors that are irrelevant with y, and separate the hidden
space into two parts (y, z), of which y encodes the designated semantics, and z includes any
other factors of variation [3]. We make a mild assumption that y and z are independent from each
other so that y could be disentangled from z. Our model thus needs to take into consideration the
uncertainty of both (x, y) and z, i.e. characterizing the joint distribution p(x, y, z) while being able
to disentangle y from z. Directly estimating p(x, y, z) is difficult, as (1) we have never observed z
and only observed y for partial x; (2) y and z might be entangled at any time as the training proceeds.
As an alternative, SGAN builds two inference networks I and C. The two inference networks define
two distributions pi (z|x) and pc (y|x) that are trained to approximate the true posteriors p(z|x) and
p(y|x) using two different adversarial games. The two games are unified via a shared generator
x ∼ pg (x|y, z). Marginalizing out z or y obtains pg (x|z) and pg (x|y):

pg (x|z) = ∫_y p(y) pg (x|y, z) dy,    pg (x|y) = ∫_z p(z) pg (x|y, z) dz,    (1)
where p(y) and p(z) are appropriate known priors for y and z. As SGAN is able to perform posterior
inference for both z and y given x (even for unlabeled data), we can directly impose constraints [13]
that enforce the structures of interest being exclusively captured by y, while those irrelevant factors
being encoded in z (as we will show later). Fig.1 illustrates the key components of SGAN, which we
elaborate as follows.
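As an illustration of the two-latent-variable design (toy stand-ins of our own, not the paper's networks), the generative process y ∼ p(y), z ∼ p(z), x = G(y, z) and the Monte Carlo view of the marginals in Eq. 1 can be sketched as:

```python
import random

def sample_y():
    """Prior p(y): uniform categorical over 10 classes."""
    return random.randrange(10)

def sample_z():
    """Prior p(z): a standard normal scalar, for simplicity."""
    return random.gauss(0.0, 1.0)

def G(y, z):
    """Toy deterministic generator standing in for the neural network."""
    return float(y) + 0.1 * z

def sample_from_pg_x_given_z(z):
    """Draw x ~ p_g(x|z): the integral over y in Eq. 1 becomes sampling y."""
    return G(sample_y(), z)

def sample_from_pg_x_given_y(y):
    """Draw x ~ p_g(x|y): the integral over z in Eq. 1 becomes sampling z."""
    return G(y, sample_z())

random.seed(0)
xs = [sample_from_pg_x_given_y(3) for _ in range(1000)]
mean = sum(xs) / len(xs)
assert abs(mean - 3.0) < 0.05  # x | y = 3 concentrates around the condition
```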
Generator pg (x|y, z). We assume the following generative process from y, z to x: z ∼ p(z), y ∼
p(y), x ∼ p(x|y, z), where p(z) is chosen as a non-informative prior, and p(y) as an appropriate
prior that meets our modeling needs (e.g. a categorical distribution for digit class). We parametrize
p(x|y, z) using a neural network generator G, which takes y and z as inputs, and outputs generated
samples x ? pg (x|y, z) = G(y, z). G can be seen as a ?decoder? in VAE parlance, and its
architecture depends on specific applications, such as a deconvolutional neural network for generating
images [25, 21].
Figure 1: An overview of the SGAN model: (a) the generator pg (x|y, z); (b) the adversarial game Lxz ; (c) the
adversarial game Lxy ; (d) the collaborative game Rz ; (e) the collaborative game Ry .
Adversarial game Lxz . Following the adversarially learned inference (ALI) framework, we construct an adversarial game to match the distributions of joint pairs (x, z) drawn from the two different factorizations: pg (x, z) = p(z)pg (x|z), pi (x, z) = p(x)pi (z|x). Specifically, to draw
samples from pg (x, z), we note the fact that we can first draw the tuple (x, y, z) following
y ∼ p(y), z ∼ p(z), x ∼ pg (x|y, z), and then only taking (x, z) as needed. This implicitly
performs the marginalization as in Eq. 1. On the other hand, we introduce an inference network
I : x → z to approximate the true posterior p(z|x). Obtaining (x, z) ∼ p(x)pi (z|x) with I is
straightforward: x ∼ p(x), z ∼ pi (z|x) = I(x). Training G and I involves finding the Nash
equilibrium for the following minimax game Lxz (we slightly abuse Lxz for both the minimax
objective and a name for this adversarial game):
min_{I,G} max_{Dxz} Lxz = E_{x∼p(x)} [log(Dxz (x, I(x)))] + E_{z∼p(z),y∼p(y)} [log(1 − Dxz (G(y, z), z))],   (2)
where we introduce Dxz as a critic network that is trained to distinguish pairs (x, z) ? pg (x, z)
from those come from pi (x, z). This minimax objective reaches optimum if and only if the conditional distribution pg (x|z) characterized by G inverses the approximate posterior pi (z|x), implying
pg (x, z) = pi (x, z) [4, 5]. As we have never observed z for x, as long as z is assumed to be independent from y, it is reasonable to just set the true joint distribution p(x, z) = p*_g (x, z) = p*_i (x, z),
where we use p*_g and p*_i to denote the optimal distributions when Lxz reaches its equilibrium.
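The two factorizations matched by Lxz can be mimicked with toy stand-ins (our sketch; I, G and the data here are hypothetical) to show how the real and fake (x, z) pairs fed to Dxz are formed:

```python
import random

def I(x):
    """Toy inference network: recovers (an estimate of) z from x."""
    return x - round(x)

def G(y, z):
    """Toy generator, chosen so that it inverts I for small |z|."""
    return float(y) + z

# (x, z) ~ p(x) p_i(z|x): encode real data with I
data = [0.1, 1.9, 3.05]
real_pairs = [(x, I(x)) for x in data]

# (x, z) ~ p(z) p_g(x|z): sample the tuple (x, y, z) and keep only (x, z),
# which implicitly marginalizes y as described in the text
random.seed(0)
fake_pairs = []
for _ in range(3):
    y, z = random.randrange(10), random.gauss(0.0, 0.1)
    fake_pairs.append((G(y, z), z))

# D_xz would be trained to tell real_pairs from fake_pairs; here the two
# sets are already consistent because this toy G inverts this toy I.
for x, z in fake_pairs:
    assert abs(I(x) - z) < 0.5
```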
Adversarial game Lxy . The second adversarial game is built to match the true joint data distribution
p(x, y) that has been observed on Xl . We introduce the other critic network Dxy to discriminate
(x, y) ? p(x, y) from (x, y) ? pg (x, y) = p(y)pg (x|y), and build the game Lxy as:
min_G max_{Dxy} Lxy = E_{(x,y)∼p(x,y)} [log(Dxy (x, y))] + E_{y∼p(y),z∼p(z)} [log(1 − Dxy (G(y, z), y))].   (3)
Collaborative game Ry . Although training the adversarial game Lxy theoretically drives pg (x, y)
to concentrate on the true data distribution p(x, y), it turns out to be very difficult to train Lxy to
desired convergence, as (1) the joint distribution p(x, y) characterized by Xl might be biased due
to its small data size; (2) there is little supervision from Xl to tell G what y essentially represents,
and how to generate samples conditioned on y. As a result, G might lack controllability: it might
generate low-fidelity samples that are not aligned with their conditions, which will always be rejected
by Dxy . A natural solution to these issues is to allow (learned) posterior inference of y to reconstruct
y from generated x [5]. By minimizing the reconstruction error, we can backpropagate the gradient
to G to enhance its controllability. Once pg (x|y) can generate high-fidelity samples that respect the
structures y, we can reuse the generated samples (x, y) ? pg (x, y) as true samples in the first term
of Lxy , to prevent Dxz from collapsing into a biased p(x, y) characterized by Xl .
Intuitively, we introduce the second inference network C : x → y which approximates the posterior
p(y|x) as y ∼ pc (y|x) = C(x), e.g. C reduces to an N-way classifier if y is categorical. To train
pc (y|x), we define a collaboration (reconstruction) game Ry in the hidden space of y:
min_{C,G} Ry = −E_{(x,y)∼p(x,y)} [log pc (y|x)] − E_{(x,y)∼pg(x,y)} [log pc (y|x)],   (4)
which aims to minimize the reconstruction error of y in terms of C and G, on both labeled data Xl
and generated data (x, y) ∼ pg (x, y). On the one hand, minimizing the first term of Ry w.r.t. C
guides C toward the true posterior p(y|x). On the other hand, minimizing the second term w.r.t. G
enhances G with extra controllability: it minimizes the chance that G could generate samples that
would otherwise be falsely predicted by C. Note that we also minimize the second term w.r.t. C,
which proves effective in semi-supervised learning settings that uses synthetic samples to augment the
predictive power of C. In summary, minimizing Ry can be seen as a collaborative game between two
players C and G that drives pg (x|y) to match p(x|y) and pc (y|x) to match the posterior p(y|x).
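The reconstruction term of Ry on generated data is a cross-entropy between the condition y fed to G and the class probabilities C assigns to G(y, z). A minimal numeric sketch (hypothetical classifier outputs, not the trained C):

```python
import math

def cross_entropy(probs, y):
    """-log p_c(y|x) for a categorical classifier output `probs`."""
    return -math.log(probs[y])

# Suppose G generated a sample conditioned on y = 1, and C outputs:
probs_good = [0.05, 0.90, 0.05]   # C recovers the condition
probs_bad = [0.60, 0.20, 0.20]    # C is confused about the condition
assert cross_entropy(probs_good, 1) < cross_entropy(probs_bad, 1)
# Minimizing this term w.r.t. G pushes generated samples toward the
# region that C classifies as the conditioning class y.
```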
Collaborative games Rz . As SGAN allows posterior inference for both y and z, we can explicitly impose constraints Ry and Rz to separate y from z during training. To explain, we
first note that optimizing the second term of Ry w.r.t. G actually enforces the structure information to be fully preserved in y, because C is asked to recover the structure y from
G(y, z), which is generated conditioned on y, regardless of the uncertainty of z (as z is
marginalized out
during sampling). Therefore, minimizing Ry indicates the following constraint:
min_{C,G} E_{y∼p(y)} ∥pc (y|G(y, z1)), pc (y|G(y, z2))∥, ∀z1, z2 ∼ p(z),
where ∥a, b∥ is some distance function between a and b (e.g. cross entropy if C is an N-way classifier). On the counterpart, we also
want to enforce any other unstructured information that is not of our interest to be fully captured in z,
without being entangled with y. So we build the second collaborative game Rz as:
min_{I,G} Rz = −E_{(x,z)∼pg(x,z)} [log pi (z|x)]   (5)
where I is required to recover z from those samples generated by G conditioned on z, i.e. reconstructing z in the hidden space. Similar to Ry, minimizing Rz indicates:
min_{I,G} E_{z∼p(z)} ∥pi (z|G(y1, z)), pi (z|G(y2, z))∥, ∀y1, y2 ∼ p(y),
and when we model I as a deterministic mapping [4], the ∥·∥ distance between distributions is equal to the ℓ2 distance between the outputs of I.
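With a deterministic I, as noted above, the Rz constraint reduces to an ℓ2 distance between the z fed to G and the z recovered by I. A toy sketch with stand-in networks (our illustration, not the paper's architecture):

```python
def l2(a, b):
    """Euclidean distance between two vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def G(y, z):
    """Toy generator: embeds the class index alongside the noise vector."""
    return [float(y)] + list(z)

def I(x):
    """Toy deterministic inference network: strips the class part."""
    return x[1:]

z = [0.3, -1.2]
# Regardless of which y is used, I recovers the same z from G(y, z),
# which is exactly what minimizing R_z enforces.
assert l2(I(G(0, z)), I(G(7, z))) == 0.0
assert l2(I(G(3, z)), z) == 0.0
```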
Theoretical Guarantees. We provide some theoretical results about the SGAN framework under the
nonparametric assumption. The proofs of the theorems are deferred to the supplementary materials.
Theorem 3.1 The global minimum of max_{Dxz} Lxz is achieved if and only if p(x)pi (z|x) = p(z)pg (x|z). At that point D*_xz = 1/2. Similarly, the global minimum of max_{Dxy} Lxy is achieved if and only if p(x, y) = p(y)pg (x|y). At that point D*_xy = 1/2.
Theorem 3.2 There exists a generator G*(y, z) of which the conditional distributions pg (x|y) and
pg (x|z) can both achieve equilibrium in their own minimax games Lxy and Lxz .
Theorem 3.3 Minimizing Rz w.r.t. I will keep the equilibrium of the adversarial game Lxz . Similarly, minimizing Ry w.r.t. C will keep the equilibrium of the adversarial game Lxy unchanged.
Algorithm 1 Training Structured Generative Adversarial Networks (SGAN).
1: Pretrain C by minimizing the first term of Eq. 4 w.r.t. C using Xl.
2: repeat
3:   Sample a batch of x: xu ∼ p(x).
4:   Sample batches of pairs (x, y): (xl, yl) ∼ p(x, y), (xg, yg) ∼ pg (x, y), (xc, yc) ∼ pc (x, y).
5:   Obtain a batch (xm, ym) by mixing data from (xl, yl), (xg, yg), (xc, yc) with proper mixing portion.
6:   for k = 1 → K do
7:     Train Dxz by maximizing the first term of Lxz using xu and the second using xg.
8:     Train Dxy by maximizing the first term of Lxy using (xm, ym) and the second using (xg, yg).
9:   end for
10:  Train I by minimizing Lxz using xu and Rz using xg.
11:  Train C by minimizing Ry using (xm, ym) (see text).
12:  Train G by minimizing Lxy + Lxz + Ry + Rz using (xg, yg).
13: until convergence.
Training. SGAN is fully differentiable and can be trained end-to-end using stochastic gradient
descent, following the strategy in [8] that alternatively trains the two critic networks Dxy , Dxz and
the other networks G, I and C. Though minimizing Ry and Rz w.r.t. G will introduce slight bias,
we find empirically it works well and contributes to disentangling y and z. The training procedures
are summarized in Algorithm 1. Moreover, to guarantee that C could be properly trained without
bias, we pretrain C by minimizing the first term of Ry until convergence, and do not minimize Ry
w.r.t. C until G has started generating meaningful samples (usually after several epochs of training).
As the training proceeds, we gradually increase the proportion of synthetic samples (x, y) ∼ pg (x, y)
and (x, y) ∼ pc (x, y) in the stochastic batch, to help the training of Dxy and C (see Algorithm 1);
you can refer to our code on GitHub for more details of this proportion. We empirically found this
mutual bootstrapping trick yields improved C and G.
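The alternating updates in Algorithm 1 can be summarized as a runnable skeleton with no-op stand-ins for each gradient step (the function names here are our own, not an API from the paper):

```python
calls = []

def step(name):
    """No-op stand-in for one gradient update; records the call order."""
    calls.append(name)

def train_one_iteration(K=2):
    step("sample_batches")        # lines 3-5: draw x_u and the (x, y) batches
    for _ in range(K):            # lines 6-9: K critic updates per iteration
        step("update_Dxz")        # maximize L_xz w.r.t. D_xz
        step("update_Dxy")        # maximize L_xy w.r.t. D_xy
    step("update_I")              # line 10: minimize L_xz + R_z w.r.t. I
    step("update_C")              # line 11: minimize R_y w.r.t. C
    step("update_G")              # line 12: minimize L_xy+L_xz+R_y+R_z w.r.t. G

train_one_iteration()
assert calls.count("update_Dxz") == 2   # critics are updated K times
assert calls[-1] == "update_G"          # the generator update closes the loop
```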
Discussion and connections. SGAN is essentially a combination of two adversarial games Lxy and
Lxz , and two collaborative games Ry , Rz , where Lxy and Lxz are optimized to match the data
distributions in the visible space, while Ry and Rz are trained to match the posteriors in the hidden
space. It combines the best of GAN-based methods and MLE-based methods: on one hand, estimating
density in the visible space using GAN-based formulation avoids distributing the probability mass
diffusely over data space [5], which MLE-based frameworks (e.g. VAE) suffer. One the other hand,
incorporating reconstruction-based constraints in latent space helps enforce the disentanglement
between structured information in y and unstructured ones in z, as we argued above.
We also establish some connections between SGAN and some existing works [15, 27, 3]. We note
the Lxy game in SGAN is connected to the TripleGAN framework [15] when its trade-off parameter
? = 0. We will empirically show that SGAN yields better controllability on G, and also improved
performance on downstream tasks, due to the separate modeling of y and z. SGAN also connects to
InfoGAN in the sense that the second term of Ry (Eq. 4) reduces to the mutual information penalty in
InfoGAN under unsupervised settings. However, SGAN and InfoGAN have totally different aims and
modeling techniques. SGAN builds a conditional generator that has the semantic of interest y as a
fully controllable input (known before training); InfoGAN in contrast aims to disentangle some latent
variables whose semantics are interpreted after training (by observation). Though extending InfoGAN
to semi-supervised settings seems straightforward, successfully learning the joint distribution p(x, y)
with very few labels is non-trivial: InfoGAN only maximizes the mutual information between y
and G(y, z), bypassing p(y|x) or p(x, y), thus its direct extension to semi-supervised settings may
fail due to lack of p(x, y). Moreover, SGAN has dedicated inference networks I and C, while
the network Q(x) in InfoGAN shares parameters with the discriminator, which has been argued
as problematic [15, 9] as it may compete with the discriminator and prevents its success in semisupervised settings. See our ablation study in section 4.2 and Fig.3. Finally, the first term in Ry
is similar to the way Improved-GAN models the conditional p(y|x) for labeled data, but SGAN
treats the generated data very differently ? Improved-GAN labels xg = G(z, y) as a new class
y = K + 1, instead SGAN reuses xg and xc to mutually boost I, C and G, which is key to the
success of semi-supervised learning (see section 4.2).
4
Evaluation
We empirically evaluate SGAN through experiments on different datasets. We show that separately
modeling z and y in the hidden space helps better disentangle the semantics of our interest from other
irrelevant attributes, thus yields improved performance for both generative modeling (G) and posterior
inference (C, I) (section 4.1 4.3). Under SGAN framework, the learned inference networks and
generators can further benefit a lot of downstream applications, such as semi-supervised classification,
controllable image generation and style transfer (section 4.2 4.3).
Dataset and configurations. We evaluate SGAN on three image datasets: (1) MNIST [14]: we use
the 60K training images as unlabeled data, and sample n ? {20, 50, 100} labels for semi-supervised
learning following [12, 27], and evaluate on the 10K test images. (2) SVHN [20]: a standard train/test
split is provided, where we sample n = 1000 labels from the training set for semi-supervised
learning [27, 15, 5]. (3) CIFAR-10: a challenging dataset for conditional image generation that
consists of 50K training and 10K test images from 10 object classes. We randomly sample n = 4000
labels [27, 28, 15] for semi-supervised learning. For all datasets, our semantic of interest is the
digit/object class, so y is a 10-dim categorical variable. We use a 64-dim gaussian noise as z in
MNIST and a 100-dim uniform noise as z in SVHN and CIFAR-10.
Implementation. We implement SGAN using TensorFlow [1] and Theano [2] with distributed
acceleration provided by Poseidon [33], which parallelizes lines 7-8 and 10-12 of Algorithm 1. The
neural network architectures of C, G and Dxy mostly follow those used in TripleGAN [15] and
we design I and Dxz according to [5] but with shallower structures to alleviate the training costs.
Empirically SGAN needs 1.3-1.5x more training time than TripleGAN [15] without parallelization.
It is noted that properly weighting the losses of the four games in SGAN during training may lead to
performance improvement. However, we simply set them equal without heavy tuning1.
4.1
Controllability and Disentanglability
We evaluate the controllability and disentanglability of SGAN by assessing its generator network G
and inference network I, respectively. Specifically, as SGAN is able to perform posterior inference for
z, we define a novel quantitative measure based on z to compare its disentanglability to other DGMs:
we first use the trained I (or the ?recognition network? in VAE-based models) to infer z for unseen x
from test sets. Ideally, as z and y are modeled as independent, when I is trained to approach the true
posterior of z, its output, when used as features, shall have weak predictability for y. Accordingly, we
1 The code is publicly available at https://github.com/thudzj/StructuredGAN.
use z as features to train a linear SVM classifier to predict the true y, and define the converged accuracy of this classifier as the mutual predictability (MP) measure, and expect lower MP for models that
can better disentangle y from z. We conduct this experiment on all three sets, and report the averaged
MP measure of five runs in Fig. 2, comparing the following DGMs (that are able to infer z): (1) ALI [5]
and (2) VAE [12], trained without label information; (3) CVAE-full2 : the M2 model in [11] trained under the fully supervised setting; (4) SGAN trained under semi-supervised settings. We use 50, 1000 and
4000 labels for MNIST, SVHN and CIFAR-10 dataset under semi-supervised settings, respectively.
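The MP measure above can be sketched on toy data (our illustration): infer z for held-out points, fit a linear classifier from z to y, and report its accuracy, where lower accuracy means y is better disentangled from z. A nearest-centroid linear rule stands in here for the linear SVM used in the text.

```python
import random

def mp_measure(zs, ys):
    """Accuracy of a linear (nearest-centroid) rule predicting y from z;
    lower accuracy means y is better disentangled from z."""
    classes = sorted(set(ys))
    centroid = {c: sum(z for z, y in zip(zs, ys) if y == c) /
                   sum(1 for y in ys if y == c) for c in classes}
    hits = sum(min(classes, key=lambda c: abs(z - centroid[c])) == y
               for z, y in zip(zs, ys))
    return hits / len(ys)

random.seed(0)
ys = [random.randrange(2) for _ in range(200)]
z_entangled = [y + random.gauss(0.0, 0.1) for y in ys]   # z leaks y
z_disentangled = [random.gauss(0.0, 1.0) for _ in ys]    # z independent of y
assert mp_measure(z_entangled, ys) > mp_measure(z_disentangled, ys)
```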
Clearly, SGAN demonstrates low MP when predicting y using z on three datasets. Using only 50 labels, SGAN exhibits reasonable MP. In fact, on MNIST with only 20 labels as supervision, SGAN achieves 0.65 MP, outperforming other baselines by a large margin. The results clearly demonstrate SGAN's ability to disentangle y and z, even when the supervision is very scarce.

Figure 2: Comparisons of the MP measure for different DGMs (lower is better).

On the other hand, better disentanglability also implies
improved controllability of G, because less entangled y and z would be easier for G to recognize the
designated semantics ? so G should be able to generate samples that are less deviated from y during
conditional generation. To verify this, following [9], we use a pretrained gold-standard classifier
(0.56% error on MNIST test set) to classify generated images, and use the condition y as ground truth
to calculate the accuracy. We compare SGAN in Table 1 to CVAE-semi and TripleGAN [15], another
strong baseline that is also designed for conditional generation under semi-supervised settings. We use
n = 20, 50, 100 labels on MNIST, and observe a significantly higher accuracy for both TripleGAN
and SGAN. For comparison, a generator trained by CVAE-full achieves 0.6% error. When there are
fewer labels available, SGAN outperforms TripleGAN. The generator in SGAN can generate samples
that consistently obey the conditions specified in y, even when there are only two images per class
(n = 20) as supervision. These results verify our statements that disentangled semantics further
enhance the controllability of the conditioned generator G.
4.2
Semi-supervised Classification
Table 1: Errors (%) of generated samples classified by a classifier with 0.56% test error.

             # labeled samples
Model        n = 20   n = 50   n = 100
CVAE-semi    33.05    10.72    5.66
TripleGAN    3.06     1.80     1.29
SGAN         1.68     1.23     0.93

It is natural to use SGAN for semi-supervised prediction. With a little supervision, SGAN can deliver a conditional generator with reasonably good controllability, with which one can synthesize samples from pg (x, y) to augment the training of C when minimizing Ry. Once C becomes more accurate, it tends to make less
mistakes when inferring y from x. Moreover, as we are sampling (x, y) ∼ pc (x, y) to train Dxy
during the maximization of Lxy , a more accurate C means more available labeled samples (by
predicting y from unlabeled x using C) to lower the bias brought by the small set Xl , which in return
can enhance G in the minimization phase of Lxy . Consequently, a mutual boosting cycle between G
and C is formed.
To empirically validate this, we deploy SGAN for semi-supervised classification on MNIST, SVHN
and CIFAR-10, and compare the test errors of C to strong baselines in Table 2. To keep the
comparisons fair, we adopt the same neural network architectures and hyper-parameter settings
from [15], and report the averaged results of 10 runs with randomly sampled labels (every class has
equal number of labels). We note that SGAN outperforms the current state-of-the-art methods across
all datasets and settings. Especially, on MNIST when labeled instances are very scarce (n = 20),
SGAN attains the highest accuracy (4.0% test error) with significantly lower variance, benefiting
from the mutual boosting effects explained above. This is very critical for applications under low-shot
or even one-shot settings where the small set Xl might not be a good representative for the data
distribution p(x, y).
2 For CVAE-full, we use test images and ground truth labels together to infer z when calculating MP. We
are unable to compare to semi-supervised CVAE as in CVAE inferring z for test images requires image labels as
input, which is unfair to other methods.
Method             MNIST n = 20   MNIST n = 50   MNIST n = 100   SVHN n = 1000   CIFAR-10 n = 4000
Ladder [22]        -              -              0.89(±0.50)     -               20.40(±0.47)
VAE [12]           -              -              3.33(±0.14)     36.02(±0.10)    -
CatGAN [28]        -              -              1.39(±0.28)     -               19.58(±0.58)
ALI [5]            -              -              -               7.3             18.3
ImprovedGAN [27]   16.77(±4.52)   2.21(±1.36)    0.93(±0.07)     8.11(±1.3)      18.63(±2.32)
TripleGAN [15]     5.40(±6.53)    1.59(±0.69)    0.92(±0.58)     5.83(±0.20)     18.82(±0.32)
SGAN               4.0(±4.14)     1.29(±0.47)    0.89(±0.11)     5.73(±0.12)     17.26(±0.69)

Table 2: Comparisons of semi-supervised classification errors (%) on MNIST, SVHN and CIFAR-10 test sets.
4.3
Qualitative Results
In this section we present qualitative results produced by SGAN's generator under semi-supervised settings. Unless otherwise specified, we use 50, 1000 and 4000 labels on MNIST, SVHN, CIFAR-10 for the results. These results are randomly selected without cherry-picking, and more results can be found in the supplementary materials.
Controllable generation. To figure out how each module in SGAN contributes to the final results, we conduct an ablation study in Fig. 3, where we plot images generated by SGAN with or without the terms Ry and Rz during training. As we have observed, our full model accurately disentangles y and z.

[Figure 3: Ablation study: conditional generation results by SGAN (a) without Ry, Rz, (b) without Rz, (c) full model. Each row has the same y while each column shares the same z.]

When there is no collaborative game involved, the generator easily collapses to a biased conditional distribution defined by the classifier C that is trained only on a very
small set of labeled data with insufficient supervision. For example, the generator cannot clearly
distinguish the following digits: 0, 2, 3, 5, 8. Incorporating Ry into training significantly alleviates this issue: an augmented C would resolve G's confusion. However, it still makes mistakes in some confusing classes, such as 3 and 5. Ry and Rz connect the two adversarial games to form a mutual boosting cycle. The absence of either of them would break this cycle; consequently, SGAN would be under-constrained and may collapse to some local minima, resulting in both a less accurate classifier C and a less controlled G.
Visual quality. Next, we investigate whether a more disentangled y, z will result in higher visual quality of generated samples, as it makes sense that the conditioned generator G would be much easier to learn when its inputs y and z carry more orthogonal information. We conduct this experiment on CIFAR-10, which consists of natural images with more uncertainty besides the object categories. We compare several state-of-the-art generators in Fig. 4 to SGAN without any advanced GAN training strategies (e.g., WGAN, gradient penalties) that are reported to possibly improve the visual quality. We find SGAN's conditional generator does generate less blurred images with the main objects more salient, compared to TripleGAN and ImprovedGAN w/o minibatch discrimination (see
supplementary).

[Figure 4: Visual comparison of generated images on CIFAR-10: (a) CIFAR-10 data, (b) TripleGAN, (c) SGAN. For (b) and (c), each row shares the same y.]

[Figure 5: (a)-(c): image progression, (d)-(f): style transfer using SGAN.]

For a quantitative measure, we generate 50K images and compute the inception score [27] as 6.91(±0.07), compared to TripleGAN 5.08(±0.09) and Improved-GAN 3.87(±0.03)
w/o minibatch discrimination, confirming the advantage of structured modeling for y and z.
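The inception score reported above is conventionally computed as exp(E_x KL(p(y|x) ‖ p(y))) over class-probability predictions from a pretrained classifier. The following is a minimal sketch of that formula applied to a hypothetical matrix of predicted probabilities; it is not the paper's actual evaluation pipeline, which runs the Inception network over 50K generated images.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, K) array; row i is p(y|x_i) from a pretrained classifier.
    Returns exp(mean_x KL(p(y|x) || p(y))); higher is better."""
    probs = np.asarray(probs, dtype=np.float64)
    p_y = probs.mean(axis=0)  # marginal label distribution p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Perfectly confident and perfectly diverse predictions -> score = #classes.
print(inception_score(np.eye(4)))            # ~4.0
# Uniform (uninformative) predictions -> score = 1.
print(inception_score(np.full((8, 4), 0.25)))  # 1.0
```

The two extremes illustrate why a higher score indicates both confident per-sample predictions and diverse samples overall.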
Image progression. To demonstrate that SGAN generalizes well instead of just memorizing the
data, we generate images with interpolated z in Fig.5(a)-(c) [32]. Clearly, the images generated with
progression are semantically consistent with y, and change smoothly from left to right. This verifies
that SGAN correctly disentangles semantics, and learns accurate class-conditional distributions.
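Image progression of this kind is typically produced by linearly interpolating between two latent codes and decoding each intermediate point with the generator; a minimal sketch of the interpolation step (the generator itself is omitted):

```python
import numpy as np

def interpolate_codes(z_start, z_end, steps):
    """Return `steps` latent codes linearly interpolated from z_start to z_end."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.array([(1.0 - t) * z_start + t * z_end for t in ts])

z0, z1 = np.zeros(8), np.ones(8)
path = interpolate_codes(z0, z1, steps=5)
print(path.shape)   # (5, 8)
print(path[2][0])   # midpoint coordinate: 0.5
# Each code in `path` would then be fed to the generator G(y, z) with y held fixed.
```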
Style transfer. We apply SGAN for style transfer [7, 30]. Specifically, as y is modeled as digit/object
category on all three dataset, we suppose z shall encode any other information that are orthogonal
to y (probably style information). To see whether I behaves properly, we use SGAN to transfer the
unstructured information from z in Fig.5(d)-(f): given an image x (the leftmost image), we infer its
unstructured code z. We generate images conditioned on z, but with different y. It is interesting to
see that z encodes various aspects of the images, such as the shape, texture, orientation, background
information, etc, as expected. Moreover, G can correctly transfer these information to other classes.
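The transfer procedure just described follows a simple pattern: infer z = I(x) from a source image, then decode G(y, z) for each target class y. A schematic sketch of that pattern follows, with made-up linear maps standing in for the trained inference network I and generator G (the real networks are deep models; all dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_X, DIM_Z, N_CLASSES = 16, 4, 3

# Stand-ins for the trained inference network I and generator G.
W_enc = rng.normal(size=(DIM_Z, DIM_X))
W_dec = rng.normal(size=(DIM_X, DIM_Z + N_CLASSES))

def infer_z(x):
    return W_enc @ x                       # I: x -> unstructured code z

def generate(y_idx, z):
    y = np.zeros(N_CLASSES)
    y[y_idx] = 1.0                         # one-hot class code
    return W_dec @ np.concatenate([y, z])  # G: (y, z) -> image

x = rng.normal(size=DIM_X)
z = infer_z(x)                             # style code of the source image
transfers = [generate(k, z) for k in range(N_CLASSES)]
print(len(transfers), transfers[0].shape)  # 3 (16,)
```

The point is structural: the same z is reused across all y, so whatever z encodes (style, orientation, background) is carried into every target class.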
5 Conclusion
We have presented SGAN for semi-supervised conditional generative modeling, which learns from a
small set of labeled instances to disentangle the semantics of our interest from other elements in the
latent space. We show that SGAN has improved disentanglability and controllability compared to
baseline frameworks. SGAN's design is beneficial to a lot of downstream applications: it establishes new state-of-the-art results on semi-supervised classification, and outperforms strong baselines in
terms of the visual quality and inception score on controllable image generation.
Acknowledgements
Zhijie Deng and Jun Zhu are supported by NSF China (Nos. 61620106010, 61621136008, 61332007),
the MIIT Grant of Int. Man. Comp. Stan (No. 2016ZXFB00001), Tsinghua Tiangong Institute
for Intelligent Computing and the NVIDIA NVAIL Program. Hao Zhang is supported by the
AFRL/DARPA project FA872105C0003. Xiaodan Liang is supported by award FA870215D0002.
References
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In USENIX Symposium on Operating Systems Design and Implementation, 2016.
[2] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math compiler in Python. Pages 3–10, 2010.
[3] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[6] Tzu-Chien Fu, Yen-Cheng Liu, Wei-Chen Chiu, Sheng-De Wang, and Yu-Chiang Frank Wang. Learning cross-domain disentangled deep representation with supervision from a single domain. arXiv preprint arXiv:1705.01314, 2017.
[7] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[9] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Controllable text generation. arXiv preprint arXiv:1703.00955, 2017.
[10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
[11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[12] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Chongxuan Li, Kun Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. In Advances in Neural Information Processing Systems, 2017.
[16] Chongxuan Li, Jun Zhu, Tianlin Shi, and Bo Zhang. Max-margin deep generative models. In Advances in Neural Information Processing Systems, pages 1837–1845, 2015.
[17] Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P Xing. Recurrent topic-transition gan for visual paragraph generation. arXiv preprint arXiv:1703.07022, 2017.
[18] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[19] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[20] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning.
[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[22] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
[23] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning, pages 1060–1069, 2016.
[24] Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Victor Bapst, Matt Botvinick, and Nando de Freitas. Generating interpretable images with controllable structure. In International Conference on Learning Representations, 2017.
[25] Scott E Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In Advances in Neural Information Processing Systems, pages 217–225, 2016.
[26] Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In Artificial Intelligence and Statistics, pages 448–455, 2009.
[27] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
[28] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[29] Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning gan for pose-invariant face recognition. In Conference on Computer Vision and Pattern Recognition, 2017.
[30] Hao Wang, Xiaodan Liang, Hao Zhang, Dit-Yan Yeung, and Eric P Xing. Zm-net: Real-time zero-shot image manipulation network. arXiv preprint arXiv:1703.07255, 2017.
[31] Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In European Conference on Computer Vision, pages 318–335. Springer, 2016.
[32] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016.
[33] Hao Zhang, Zhiting Hu, Jinliang Wei, Pengtao Xie, Gunhee Kim, Qirong Ho, and Eric Xing. Poseidon: A system architecture for efficient gpu-based deep learning on multiple machines. arXiv preprint arXiv:1512.06216, 2015.
Conservative Contextual Linear Bandits
Abbas Kazerouni
Stanford University
[email protected]
Mohammad Ghavamzadeh
DeepMind
[email protected]
Yasin Abbasi-Yadkori
Adobe Research
[email protected]
Benjamin Van Roy
Stanford University
[email protected]
Abstract
Safety is a desirable property that can immensely increase the applicability of
learning algorithms in real-world decision-making problems. It is much easier
for a company to deploy an algorithm that is safe, i.e., guaranteed to perform at
least as well as a baseline. In this paper, we study the issue of safety in contextual
linear bandits that have application in many different fields including personalized
recommendation. We formulate a notion of safety for this class of algorithms. We
develop a safe contextual linear bandit algorithm, called conservative linear UCB
(CLUCB), that simultaneously minimizes its regret and satisfies the safety constraint, i.e., maintains its performance above a fixed percentage of the performance
of a baseline strategy, uniformly over time. We prove an upper-bound on the regret
of CLUCB and show that it can be decomposed into two terms: 1) an upper-bound
for the regret of the standard linear UCB algorithm that grows with the time horizon
and 2) a constant term that accounts for the loss of being conservative in order to
satisfy the safety constraint. We empirically show that our algorithm is safe and
validate our theoretical analysis.
1 Introduction
Many problems in science and engineering can be formulated as decision-making problems under
uncertainty. Although many learning algorithms have been developed to find a good policy/strategy
for these problems, most of them do not provide any guarantee for the performance of their resulting
policy during the initial exploratory phase. This is a major obstacle in using learning algorithms in
many different fields, such as online marketing, health sciences, finance, and robotics. Therefore,
developing learning algorithms with safety guarantees can immensely increase the applicability of
learning in solving decision problems. A policy generated by a learning algorithm is considered to be
safe, if it is guaranteed to perform at least as well as a baseline. The baseline can be either a baseline
value or the performance of a baseline strategy. It is important to note that since the policy is learned
from data, it is a random variable, and thus, the safety guarantees are in high probability.
Safety can be studied in both offline and online scenarios. In the offline case, the algorithm learns
the policy from a batch of data, usually generated by the current strategy or recent strategies of the
company, and the question is whether the learned policy will perform as well as the current strategy or
no worse than a baseline value, when it is deployed. This scenario has been recently studied heavily
in both model-based (e.g., Petrik et al. [2016]) and model-free (e.g., Bottou et al. 2013; Thomas et
al. 2015a,b; Swaminathan and Joachims 2015a,b) settings. In the model-based approach, we first
use the batch of data and build a simulator that mimics the behavior of the dynamical system under
study (hospital's ER, financial market, robot), and then use this simulator to generate data and learn
the policy. The main challenge here is to have guarantees on the performance of the learned policy,
given the error in the simulator. This line of research is closely related to the area of robust learning
and control. In the model-free approach, we learn the policy directly from the batch of data, without
building a simulator. This line of research is related to off-policy evaluation and control. While the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
model-free approach is more suitable for problems in which we have access to a large batch of data,
such as in online marketing, the model-based approach works better in problems in which data is
harder to collect, but instead, we have good knowledge about the underlying dynamical system that
allows us to build an accurate simulator.
In the online scenario, the algorithm learns a policy while interacting with the real system. Although
(reasonable) online algorithms will eventually learn a good or an optimal policy, there is no guarantee
for their performance along the way (the performance of their intermediate policies), especially at
the very beginning, when they perform a large amount of exploration. Thus, in order to guarantee
safety in online algorithms, it is important to control their exploration and make it more conservative.
Consider a manager that allows our learning algorithm to run together with her company's current
strategy (baseline policy), as long as it is safe, i.e., the loss incurred by letting a portion of the traffic
handled by our algorithm (instead of by the baseline policy) does not exceed a certain threshold.
Although we are confident that our algorithm will eventually perform at least as well as the baseline
strategy, it should be able to remain alive (not terminated by the manager) long enough for this to
happen. Therefore, we should make it more conservative (less exploratory) in a way not to violate the
manager?s safety constraint. This setting has been studied in the multi-armed bandit (MAB) [Wu et
al., 2016]. Wu et al. [2016] considered the baseline policy as a fixed arm in MAB, formulated safety
using a constraint defined based on the performance of the baseline policy (mean of the baseline arm),
and modified the UCB algorithm [Auer et al., 2002] to satisfy this constraint.
In this paper, we study the notion of safety in contextual linear bandits, a setting that has application
in many different fields including personalized recommendation. We first formulate safety in this
setting, as a constraint that must hold uniformly in time, in Section 2. Our goal is to design learning
algorithms that minimize regret under the constraint that at any given time, their expected sum of
rewards should be above a fixed percentage of the expected sum of rewards of the baseline policy.
This fixed percentage depends on the amount of risk that the manager is willing to take. In Section 3,
we propose an algorithm, called conservative linear UCB (CLUCB), that satisfies the safety constraint.
At each round, CLUCB plays the action suggested by the standard linear UCB (LUCB) algorithm
(e.g., Dani et al. 2008; Rusmevichientong and Tsitsiklis 2010; Abbasi-Yadkori et al. 2011; Chu et
al. 2011; Russo and Van Roy 2014), only if it satisfies the safety constraint for the worst choice of
the parameter in the confidence set, and plays the action suggested by the baseline policy, otherwise.
We prove an upper-bound for the regret of CLUCB, which can be decomposed into two terms. The first term is an upper-bound on the regret of LUCB that grows at the rate $\sqrt{T \log(T)}$. The second
term is constant (does not grow with the horizon T ) and accounts for the loss of being conservative in
order to satisfy the safety constraint. This improves over the regret bound derived in Wu et al. [2016]
for the MAB setting, where the regret of being conservative grows with time. In Section 4, we show
how CLUCB can be extended to the case that the reward of the baseline policy is unknown without a
change in its rate of regret. Finally in Section 5, we report experimental results that show CLUCB
behaves as expected in practice and validate our theoretical analysis.
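The CLUCB decision rule described above (play the LUCB action only when the safety constraint is guaranteed for every parameter in the confidence set, and otherwise fall back to the baseline) can be illustrated schematically. Everything below is a simplified toy: a finite set of candidate parameters stands in for the confidence ellipsoid, and the constraint is checked one round ahead.

```python
import numpy as np

def clucb_choice(phi_ucb, phi_base, confidence_set, past_agent, past_base, alpha):
    """Return the feature vector of the action to play this round.
    Plays the LUCB action iff the safety constraint holds for the *worst*
    theta in the confidence set; otherwise plays the baseline action."""
    for theta in confidence_set:
        lhs = past_agent(theta) + float(theta @ phi_ucb)              # agent's sum if LUCB action is played
        rhs = (1.0 - alpha) * (past_base(theta) + float(theta @ phi_base))
        if lhs < rhs:          # constraint violated for this candidate theta
            return phi_base
    return phi_ucb

# Toy example: two candidate parameters, no reward history yet.
conf_set = [np.array([1.0, 0.0]), np.array([0.6, 0.4])]
phi_ucb, phi_base = np.array([0.0, 1.0]), np.array([0.5, 0.5])
choice = clucb_choice(phi_ucb, phi_base, conf_set,
                      past_agent=lambda th: 0.0, past_base=lambda th: 0.0,
                      alpha=0.1)
print(np.array_equal(choice, phi_base))  # True: theta=[1,0] makes the UCB action unsafe
```

Here `past_agent` and `past_base` are placeholders for the cumulative-reward bookkeeping that the actual algorithm maintains; the real CLUCB uses a confidence ellipsoid rather than a finite set.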
2 Problem Formulation
In this section, we first review the standard linear bandit setting and then introduce the conservative
linear bandit formulation considered in this paper.
2.1 Linear Bandit
In the linear bandit setting, at any time $t$, the agent is given a set of (possibly) infinitely many actions/options $\mathcal{A}_t$, where each action $a \in \mathcal{A}_t$ is associated with a feature vector $\phi_a^t \in \mathbb{R}^d$. At each round $t$, the agent selects an action $a_t \in \mathcal{A}_t$ and observes a random reward $y_t$ generated as
$$y_t = \langle \theta^*, \phi_{a_t}^t \rangle + \eta_t, \qquad (1)$$
where $\theta^* \in \mathbb{R}^d$ is the unknown reward parameter, $\langle \theta^*, \phi_{a_t}^t \rangle = r_{a_t}^t$ is the expected reward of action $a_t$ at time $t$, i.e., $r_{a_t}^t = \mathbb{E}[y_t]$, and $\eta_t$ is a random noise such that

Assumption 1 Each element $\eta_t$ of the noise sequence $\{\eta_t\}_{t=1}^{\infty}$ is conditionally $\sigma$-sub-Gaussian, i.e., $\mathbb{E}[e^{\zeta \eta_t} \mid a_{1:t}, \eta_{1:t-1}] \leq \exp(\zeta^2 \sigma^2 / 2)$, $\forall \zeta \in \mathbb{R}$.

The sub-Gaussian assumption implies that $\mathbb{E}[\eta_t \mid a_{1:t}, \eta_{1:t-1}] = 0$ and $\mathrm{Var}[\eta_t \mid a_{1:t}, \eta_{1:t-1}] \leq \sigma^2$.
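A minimal numerical illustration of the reward model (1), using Gaussian noise (which satisfies the sub-Gaussian condition of Assumption 1); the particular $\theta^*$ and feature vector below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
theta_star = np.array([0.5, 0.2, 0.1])   # unknown reward parameter (toy values)
phi = np.array([0.8, 0.5, 0.3])          # feature vector of the chosen action
sigma = 0.1

expected_reward = float(theta_star @ phi)  # r_a^t = <theta*, phi_a^t>
samples = expected_reward + sigma * rng.standard_normal(200_000)
print(round(expected_reward, 3))                     # 0.53
print(abs(samples.mean() - expected_reward) < 0.01)  # zero-mean noise: True
```

The empirical mean of the noisy rewards concentrates around the inner product, exactly as the model prescribes.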
Note that the above formulation contains time-varying action sets and time-dependent feature vectors
for each action, and thus, includes the linear contextual bandit setting. In linear contextual bandit, if
we denote by $x_t$ the state of the system at time $t$, the time-dependent feature vector $\phi_a^t$ for action $a$ will be equal to $\phi(x_t, a)$, the feature vector of state-action pair $(x_t, a)$.
We also make the following standard assumption on the unknown parameter $\theta^*$ and feature vectors:

Assumption 2 There exist constants $B, D \geq 0$ such that $\|\theta^*\|_2 \leq B$, $\|\phi_a^t\|_2 \leq D$, and $\langle \theta^*, \phi_a^t \rangle \in [0, 1]$, for all $t$ and all $a \in \mathcal{A}_t$.

We define $\mathcal{B} = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq B\}$ and $\mathcal{F} = \{\phi \in \mathbb{R}^d : \|\phi\|_2 \leq D, \langle \theta^*, \phi \rangle \in [0, 1]\}$ to be the parameter space and feature space, respectively.
Obviously, if the agent knows $\theta^*$, she will choose the optimal action $a_t^* = \arg\max_{a \in \mathcal{A}_t} \langle \theta^*, \phi_a^t \rangle$ at each round $t$. Since $\theta^*$ is unknown, the agent's goal is to maximize her cumulative expected rewards after $T$ rounds, i.e., $\sum_{t=1}^{T} \langle \theta^*, \phi_{a_t}^t \rangle$, or equivalently, to minimize its (pseudo)-regret, i.e.,
$$R_T = \sum_{t=1}^{T} \langle \theta^*, \phi_{a_t^*}^t \rangle - \sum_{t=1}^{T} \langle \theta^*, \phi_{a_t}^t \rangle, \qquad (2)$$
which is the difference between the cumulative expected rewards of the optimal and agent's strategies.
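Pseudo-regret (2) sums, over rounds, the gap between the best action's expected reward and the chosen action's expected reward; a small sketch with made-up features:

```python
import numpy as np

def pseudo_regret(theta_star, action_sets, chosen):
    """action_sets[t]: (num_actions, d) feature matrix at round t;
    chosen[t]: index of the action played at round t."""
    regret = 0.0
    for phis, a_t in zip(action_sets, chosen):
        rewards = phis @ theta_star
        regret += rewards.max() - rewards[a_t]  # <theta*, phi_{a*}> - <theta*, phi_{a_t}>
    return regret

theta_star = np.array([1.0, 0.0])
rounds = [np.array([[1.0, 0.0], [0.0, 1.0]])] * 3   # same two actions each round
print(pseudo_regret(theta_star, rounds, chosen=[0, 1, 0]))  # 1.0 (one suboptimal pull)
```

Playing the optimal action every round gives zero regret; each suboptimal pull adds its reward gap.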
2.2 Conservative Linear Bandit
The conservative linear bandit setting is exactly the same as the linear bandit, except that there exists
a baseline policy $\pi_b$ (e.g., the company's current strategy) that at each round $t$ selects action $b_t \in \mathcal{A}_t$ and incurs the expected reward $r_{b_t}^t = \langle \theta^*, \phi_{b_t}^t \rangle$. We assume that the expected rewards of the actions taken by the baseline policy, $r_{b_t}^t$, are known (see Remark 1). We relax this assumption in Section 4
and extend our proposed algorithm to the case that the reward function of the baseline policy is not
known in advance. Another difference between the conservative and standard linear bandit settings is
the performance constraint, which is defined as follows:
Definition 1 (Performance Constraint) At each round $t$, the difference between the performances of the baseline and the agent's policies should remain below a pre-defined fraction $\alpha \in (0, 1)$ of the baseline performance. This constraint may be written formally as
$$\forall t \in \{1, \ldots, T\}, \qquad \sum_{i=1}^{t} r_{b_i}^i - \sum_{i=1}^{t} r_{a_i}^i \leq \alpha \sum_{i=1}^{t} r_{b_i}^i, \quad \text{or equivalently,} \quad \sum_{i=1}^{t} r_{a_i}^i \geq (1-\alpha) \sum_{i=1}^{t} r_{b_i}^i. \qquad (3)$$
The parameter \alpha controls the level of conservatism of the agent. Small values show that only small
losses are tolerated and the agent should be overly conservative, whereas large values indicate that
the manager is willing to take risk and the agent can be more explorative. Here, given the value of
\alpha, the agent should select her actions so as to both minimize her regret (2) and satisfy the
performance constraint (3). In the next section, we propose a linear bandit algorithm that achieves this
goal with high probability.
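To make the constraint concrete, the following sketch (ours) checks whether a sequence of per-round mean rewards satisfies (3) for a given \alpha; note that the check must hold for every prefix of the run, not just at the horizon:

```python
def satisfies_constraint(agent_rewards, baseline_rewards, alpha):
    """Check performance constraint (3): for every prefix t,
    sum_{i<=t} r_{a_i}^i >= (1 - alpha) * sum_{i<=t} r_{b_i}^i."""
    agent_sum, base_sum = 0.0, 0.0
    for r_a, r_b in zip(agent_rewards, baseline_rewards):
        agent_sum += r_a
        base_sum += r_b
        if agent_sum < (1.0 - alpha) * base_sum:
            return False
    return True

# An exploratory first round with zero reward violates a tight constraint ...
print(satisfies_constraint([0.0, 1.0], [0.5, 0.5], alpha=0.1))  # False
# ... but a milder first round passes when the manager tolerates larger losses.
print(satisfies_constraint([0.4, 1.0], [0.5, 0.5], alpha=0.2))  # True
```

The first call fails at t = 1 because 0 < 0.9 * 0.5; the second passes every prefix.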
Remark 1. Since the baseline policy is often our company?s strategy, it is reasonable to assume that
a large amount of data generated by this policy is available, and thus, we have an accurate estimate of
its reward function. If in addition to this accurate estimate, we have access to the actual data, we can
use them in our algorithms. The reason we do not use the data generated by the actions suggested by
the baseline policy in constructing the confidence sets of our algorithm in Section 3 is mainly to keep
the analysis simple. However, when dealing with the more general case of unknown baseline reward
in Section 4, we construct the confidence sets using all available data, including those generated by
the baseline policy. It is important to note that having a good estimate of the baseline reward function
does not necessarily mean that we know the unknown parameter \theta_*. This is because the data used for
this estimate has been generated by the baseline policy, and thus, may only provide a good estimate
of \theta_* in a limited subspace.
3 A Conservative Linear Bandit Algorithm
In this section, we propose a linear bandit algorithm, called conservative linear upper confidence
bound (CLUCB), whose pseudocode is shown in Algorithm 1. CLUCB is based on the optimism
in the face of uncertainty principle, and given the value of \alpha, minimizes the regret (2) and satisfies
the performance constraint (3) with high probability. At each round t, CLUCB uses the previous
Algorithm 1 CLUCB
Input: \alpha, \mathcal{B}, \mathcal{F}
Initialize: S_0 = \emptyset, z_0 = 0 \in \mathbb{R}^d, and C_1 = \mathcal{B}
for t = 1, 2, 3, \ldots do
    Find (a'_t, \tilde\theta_t) \in \arg\max_{(a,\theta) \in A_t \times C_t} \langle \theta, \phi_t^a \rangle
    Compute L_t = \min_{\theta \in C_t} \langle \theta, z_{t-1} + \phi_t^{a'_t} \rangle
    if L_t + \sum_{i \in S_{t-1}^c} r_{b_i}^i \ge (1-\alpha) \sum_{i=1}^t r_{b_i}^i then
        Play a_t = a'_t and observe reward y_t defined by (1)
        Set z_t = z_{t-1} + \phi_t^{a_t}, S_t = S_{t-1} \cup \{t\}, S_t^c = S_{t-1}^c
        Given a_t and y_t, construct the confidence set C_{t+1} according to (5)
    else
        Play a_t = b_t and observe reward y_t defined by (1)
        Set z_t = z_{t-1}, S_t = S_{t-1}, S_t^c = S_{t-1}^c \cup \{t\}, C_{t+1} = C_t
    end if
end for
observations and builds a confidence set C_t that with high probability contains the unknown parameter
\theta_*. It then selects the optimistic action a'_t \in \arg\max_{a \in A_t} \max_{\theta \in C_t} \langle \theta, \phi_t^a \rangle, which has the best
performance among all the actions available in A_t, within the confidence set C_t. In order to make
sure that the constraint (3) is satisfied, the algorithm plays the optimistic action a'_t only if it satisfies
the constraint for the worst choice of the parameter \theta \in C_t. To make this more precise, let S_{t-1} be
the set of rounds i < t at which CLUCB has played the optimistic action, i.e., a_i = a'_i. Similarly,
S_{t-1}^c = \{1, 2, \ldots, t-1\} \setminus S_{t-1} is the set of rounds j < t at which CLUCB has followed the
baseline policy, i.e., a_j = b_j.
In order to guarantee that it does not violate constraint (3), at each round t, CLUCB plays the
optimistic action, i.e., a_t = a'_t, only if
    \min_{\theta \in C_t} \Big[ \sum_{i \in S_{t-1}^c} r_{b_i}^i + \Big\langle \theta, \overbrace{\sum_{i \in S_{t-1}} \phi_i^{a_i}}^{z_{t-1}} + \phi_t^{a'_t} \Big\rangle \Big] \ge (1-\alpha) \sum_{i=1}^t r_{b_i}^i,
and plays the conservative action, i.e., a_t = b_t, otherwise. In the following, we describe how CLUCB
constructs and updates its confidence sets C_t.
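The decision rule above can be made concrete with a minimal runnable sketch of the CLUCB loop (our illustration, not the authors' code). It evaluates the inner minimization in closed form, \min_{\theta \in C_t} \langle \theta, x \rangle = \langle \hat\theta_t, x \rangle - \beta_t \|x\|_{V_t^{-1}}, and for determinism uses noiseless rewards with R = 0, in which case the confidence sets provably contain \theta_*:

```python
import numpy as np

def pessimistic_value(theta_hat, Vinv, beta, x):
    # min over {theta : ||theta - theta_hat||_V <= beta} of <theta, x>
    return x @ theta_hat - beta * np.sqrt(x @ Vinv @ x)

def clucb(action_sets, baseline_idx, theta_star, alpha,
          lam=1.0, R=0.0, B=1.0, D=1.0, delta=0.05, noise_std=0.0, seed=0):
    """Minimal CLUCB sketch (our illustration, not the authors' code).
    action_sets[t]: rows are the feature vectors phi_t^a available at round t;
    baseline_idx[t]: index of the baseline action b_t, whose mean reward
    <theta*, phi_t^{b_t}> is assumed known to the agent (Remark 1)."""
    rng = np.random.default_rng(seed)
    d = action_sets[0].shape[1]
    V, bvec = lam * np.eye(d), np.zeros(d)  # statistics of optimistic rounds only
    z = np.zeros(d)                         # z_t: sum of optimistic features played
    cons_base = all_base = 0.0              # baseline-reward sums (conservative / all)
    m, choices = 0, []
    for t, phis in enumerate(action_sets):
        r_b = phis[baseline_idx[t]] @ theta_star        # known baseline reward
        all_base += r_b
        Vinv = np.linalg.inv(V)
        theta_hat = Vinv @ bvec
        beta = R * np.sqrt(d * np.log((1 + (m + 1) * D**2 / lam) / delta)) + np.sqrt(lam) * B
        # optimistic action: largest upper confidence bound
        ucb = [phi @ theta_hat + beta * np.sqrt(phi @ Vinv @ phi) for phi in phis]
        a_opt = int(np.argmax(ucb))
        # L_t: pessimistic value of the optimistic history if a_opt is added now
        L = pessimistic_value(theta_hat, Vinv, beta, z + phis[a_opt])
        if L + cons_base >= (1 - alpha) * all_base:     # safe to explore
            a = a_opt
            y = phis[a] @ theta_star + noise_std * rng.standard_normal()
            V += np.outer(phis[a], phis[a])
            bvec += y * phis[a]
            z += phis[a]
            m += 1
        else:                                           # fall back on the baseline
            a = baseline_idx[t]
            cons_base += r_b
        choices.append(a)
    return choices

# Two arms plus a safe baseline arm; with zero noise the run is deterministic.
phis = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.0]])
acts = clucb([phis] * 60, [2] * 60, np.array([1.0, 0.0]), alpha=0.1)
```

In this toy run the agent follows the safe baseline arm for an initial conservative phase (26 rounds here) and then switches permanently to the optimal arm, mirroring the behavior reported in Section 5.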
3.1 Construction of Confidence Sets
CLUCB starts by the most general confidence set C1 = B and updates its confidence set only when it
plays an optimistic action. This is mainly to simplify the analysis and is based on the idea that since
the reward function of the baseline policy is known ahead of time, playing a baseline action does not
provide any new information about the unknown parameter \theta_*. However, this can be easily changed
to update the confidence set after each action. In fact, this is what we do in the algorithm proposed in
Section 4. We follow the approach of Abbasi-Yadkori et al. [2011] to build confidence sets for \theta_*.
Let S_t = \{i_1, \ldots, i_{m_t}\} be the set of rounds up to and including round t at which CLUCB has played
the optimistic action. Note that we have defined m_t = |S_t|. For a fixed value of \lambda > 0, let
    \hat\theta_t = (\Phi_t \Phi_t^\top + \lambda I)^{-1} \Phi_t Y_t    (4)
be the regularized least-squares estimate of \theta_* at round t, where \Phi_t = [\phi_{i_1}^{a_{i_1}}, \ldots, \phi_{i_{m_t}}^{a_{i_{m_t}}}] and Y_t =
[y_{i_1}, \ldots, y_{i_{m_t}}]^\top. For a fixed confidence parameter \delta \in (0, 1), we construct the confidence set for the
next round t + 1 as
    C_{t+1} = \{\theta \in \mathbb{R}^d : \|\theta - \hat\theta_t\|_{V_t} \le \beta_{t+1}\},    (5)
where \beta_{t+1} = R \sqrt{d \log\big(\frac{1 + (m_t+1) D^2 / \lambda}{\delta}\big)} + \sqrt{\lambda} B, V_t = \lambda I + \Phi_t \Phi_t^\top, and the weighted norm is defined
as \|x\|_V = \sqrt{x^\top V x} for any x \in \mathbb{R}^d and any positive definite V \in \mathbb{R}^{d \times d}. Note that similar to the linear
UCB algorithm (LUCB) in Abbasi-Yadkori et al. [2011], the sub-Gaussian parameter R and the
regularization parameter \lambda that appear in the definitions of \beta_{t+1} and V_t should also be given to the
CLUCB algorithm as input. The following proposition (Theorem 2 in Abbasi-Yadkori et al. 2011)
shows that the confidence sets constructed by (5) contain the true parameter \theta_* with high probability.
Proposition 1 For the confidence set C_t defined by (5), we have P(\theta_* \in C_t, \forall t \in \mathbb{N}) \ge 1 - \delta.
As mentioned before, CLUCB ensures that performance constraint (3) holds for all \theta \in C_t at all
rounds t. As a result, if all the confidence sets hold (i.e., contain the true parameter \theta_*), CLUCB
is guaranteed to satisfy performance constraint (3). Proposition 1 indicates that this happens with
probability at least 1 - \delta. It is worth noting that satisfying constraint (3) implies that CLUCB is at
least as good as the baseline policy at all rounds. In this vein, Proposition 1 guarantees that, with
probability at least 1 - \delta, CLUCB performs no worse than the baseline policy at all rounds.
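As a sketch of the computation in (4) and (5) (ours, using the notation above), the center and radius of the confidence set, together with a membership test, can be written as:

```python
import numpy as np

def confidence_set(Phi, Y, lam, R, B, D, delta):
    """Centre theta_hat (Eq. 4) and radius beta (Eq. 5) of the confidence set.
    Phi: d x m matrix whose columns are the played feature vectors; Y: m rewards."""
    d, m = Phi.shape
    V = lam * np.eye(d) + Phi @ Phi.T
    theta_hat = np.linalg.solve(V, Phi @ Y)
    beta = R * np.sqrt(d * np.log((1 + (m + 1) * D**2 / lam) / delta)) + np.sqrt(lam) * B
    return theta_hat, V, beta

def contains(theta, theta_hat, V, beta):
    diff = theta - theta_hat
    return float(np.sqrt(diff @ V @ diff)) <= beta   # ||theta - theta_hat||_V <= beta

rng = np.random.default_rng(0)
theta_star = np.array([0.6, -0.3])                   # satisfies ||theta*||_2 <= B = 1
Phi = rng.normal(size=(2, 50))
Y = Phi.T @ theta_star                               # noiseless rewards, for a deterministic check
D = float(np.max(np.linalg.norm(Phi, axis=0)))
th, V, beta = confidence_set(Phi, Y, lam=1.0, R=0.1, B=1.0, D=D, delta=0.05)
print(contains(theta_star, th, V, beta))             # True: theta* lies in the set
```

With noiseless rewards the estimation error satisfies \|\hat\theta - \theta_*\|_V = \lambda \sqrt{\theta_*^\top V^{-1} \theta_*} \le \sqrt{\lambda} \|\theta_*\|_2 \le \sqrt{\lambda} B \le \beta, so the membership test is guaranteed to succeed here.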
3.2 Regret Analysis of CLUCB
In this section, we prove a regret bound for the proposed CLUCB algorithm. Let \Delta_{b_t}^t = r_{a_t^*}^t - r_{b_t}^t
be the baseline gap at round t, i.e., the difference between the expected rewards of the optimal and
baseline actions at round t. This quantity shows how sub-optimal the action suggested by the baseline
policy is at round t. We make the following assumption on the performance of the baseline policy \pi_b.
Assumption 3 There exist 0 \le \Delta_l \le \Delta_h and 0 < r_l such that, at each round t,
    \Delta_l \le \Delta_{b_t}^t \le \Delta_h \quad \text{and} \quad r_l \le r_{b_t}^t.    (6)
An obvious candidate for both \Delta_h and r_h is 1, as all the mean rewards are confined in [0, 1]. The
reward lower-bound r_l ensures that the baseline policy maintains a minimum level of performance at
each round. Finally, \Delta_l = 0 is a reasonable candidate for the lower-bound of the baseline gap.
The following proposition shows that the regret of CLUCB can be decomposed into the regret of
a linear UCB (LUCB) algorithm (e.g., Abbasi-Yadkori et al. 2011) and a regret caused by being
conservative in order to satisfy the performance constraint (3).
Proposition 2 The regret of CLUCB can be decomposed into two terms as follows:
    R_T(\text{CLUCB}) \le R_{S_T}(\text{LUCB}) + n_T \Delta_h,    (7)
where R_{S_T}(\text{LUCB}) is the cumulative (pseudo)-regret of LUCB at rounds t \in S_T and n_T = |S_T^c| =
T - m_T is the number of rounds (in T rounds) at which CLUCB has played a conservative action.
Proof: From the definition of regret (2), we have
    R_T(\text{CLUCB}) = \sum_{t=1}^T r_{a_t^*}^t - \sum_{t=1}^T r_{a_t}^t = \sum_{t \in S_T} (r_{a_t^*}^t - r_{a_t}^t) + \sum_{t \in S_T^c} \underbrace{(r_{a_t^*}^t - r_{b_t}^t)}_{\Delta_{b_t}^t} \le \sum_{t \in S_T} (r_{a_t^*}^t - r_{a_t}^t) + n_T \Delta_h.    (8)
The result follows from the fact that for t \in S_T, CLUCB plays the exact same actions as LUCB, and
thus, the first term in (8) represents LUCB's regret for these rounds.
The regret bound of LUCB for the confidence set (5) can be derived from the results of Abbasi-Yadkori et al. [2011]. Let E be the event that \theta_* \in C_t, \forall t \in \mathbb{N}, which according to Proposition 1
holds w.p. at least 1 - \delta. The following proposition provides a bound on R_{S_T}(\text{LUCB}). Since this
proposition is a direct application of Thm. 3 in Abbasi-Yadkori et al. [2011], we omit its proof here.
Proposition 3 On event E = \{\theta_* \in C_t, \forall t \in \mathbb{N}\}, for any T \in \mathbb{N}, we have
    R_{S_T}(\text{LUCB}) \le 4 \sqrt{m_T d \log\Big(\lambda + \frac{m_T D}{d}\Big)} \Big( \sqrt{\lambda} B + R \sqrt{2 \log\big(\tfrac{1}{\delta}\big) + d \log\Big(1 + \frac{m_T D}{\lambda d}\Big)} \Big) = O\Big( d \log\Big(\frac{D T}{\delta \sqrt{\lambda}}\Big) \sqrt{T} \Big).    (9)
Now in order to bound the regret of CLUCB, we only need to find an upper-bound on nT , i.e., the
number of times that CLUCB deviates from LUCB and selects the action suggested by the baseline
policy. We prove an upper-bound on nT in Theorem 4, which is the main technical result of this
section. Due to space constraints, we only provide a proof sketch for Theorem 4 in the paper and
report its detailed proof in Appendix A. The proof requires several technical lemmas that have been
proved in Appendix C.
Theorem 4 Let \lambda \ge \max(1, D^2). Then, on event E, for any horizon T \in \mathbb{N}, we have
    n_T \le 1 + \frac{114 d (B\sqrt{\lambda} + R)^2}{\alpha r_l (\Delta_l + \alpha r_l)} \Big[ \log\Big( \frac{62 d (B\sqrt{\lambda} + R)}{\delta (\Delta_l + \alpha r_l)} \Big) \Big]^2.
Proof Sketch: Let \tau = \max\{1 \le t \le T \mid a_t \ne a'_t\} be the last round at which CLUCB takes the action
suggested by the baseline policy. We first show that at round \tau, the following holds:
    \alpha \sum_{t=1}^{\tau} r_{b_t}^t \le (m_{\tau-1} + 1) \Delta_l + 2 \beta_\tau \|\phi_\tau^{a'_\tau}\|_{V_\tau^{-1}} + 2 \sum_{t \in S_{\tau-1}} \beta_t \|\phi_t^{a_t}\|_{V_t^{-1}} + 2 \beta_\tau \Big\| \phi_\tau^{a'_\tau} + \sum_{t \in S_{\tau-1}} \phi_t^{a_t} \Big\|_{V_\tau^{-1}}.
Next, using Lemmas 7 and 8 (reported in Appendix C), and the Cauchy-Schwarz inequality, we
deduce that
    \alpha \sum_{t=1}^{\tau} r_{b_t}^t \le (m_{\tau-1} + 1) \Delta_l + 8 d (B\sqrt{\lambda} + R) \log\Big( \frac{2 (m_{\tau-1} + 1)}{\delta} \Big) \sqrt{m_{\tau-1} + 1}.
Since r_{b_t}^t \ge r_l for all t, and \tau = n_{\tau-1} + m_{\tau-1} + 1, it follows that
    \alpha r_l n_{\tau-1} \le (m_{\tau-1} + 1) (\Delta_l + \alpha r_l) + 8 d (B\sqrt{\lambda} + R) \log\Big( \frac{2 (m_{\tau-1} + 1)}{\delta} \Big) \sqrt{m_{\tau-1} + 1}.    (10)
Note that n_{\tau-1} and m_{\tau-1} appear on the LHS and RHS of (10), respectively. The key point is that
the RHS is positive only for a finite number of integers m_{\tau-1}, and thus, it has a finite upper bound.
Using Lemma 9 (reported and proved in Appendix C), we prove that
    \alpha r_l n_{\tau-1} \le \frac{114 d (B\sqrt{\lambda} + R)^2}{\Delta_l + \alpha r_l} \Big[ \log\Big( \frac{62 d (B\sqrt{\lambda} + R)}{\delta (\Delta_l + \alpha r_l)} \Big) \Big]^2.
Finally, the fact that n_T = n_\tau = n_{\tau-1} + 1 completes the proof.
We now have all the necessary ingredients to derive a regret bound on the performance of the CLUCB
algorithm. We report the regret bound of CLUCB in Theorem 5, whose proof is a direct consequence
of the results of Propositions 2 and 3, and Theorem 4.
Theorem 5 Let \lambda \ge \max(1, D^2). With probability at least 1 - \delta, the CLUCB algorithm satisfies the
performance constraint (3) for all t \in \mathbb{N}, and has the regret bound
    R_T(\text{CLUCB}) = O\Big( d \log\Big(\frac{D T}{\delta \sqrt{\lambda}}\Big) \sqrt{T} + \frac{K \Delta_h}{\alpha r_l} \Big),    (11)
where K is a constant that only depends on the parameters of the problem as
    K = 1 + \frac{114 d (B\sqrt{\lambda} + R)^2}{\Delta_l + \alpha r_l} \Big[ \log\Big( \frac{62 d (B\sqrt{\lambda} + R)}{\delta (\Delta_l + \alpha r_l)} \Big) \Big]^2.
Remark 2. The first term in the regret bound (11) is the regret of LUCB, which grows at the rate
\sqrt{T} \log(T). The second term accounts for the loss incurred by being conservative in order to satisfy
the performance constraint (3). Our results indicate that this loss does not grow with time (since
CLUCB acts conservatively only in a finite number of rounds). This is a clear improvement over
the regret bound reported in Wu et al. [2016] for the MAB setting, in which the regret of being
conservative grows with time. Furthermore, the regret bound of Theorem 5 clearly indicates that
CLUCB's regret is larger for smaller values of \alpha. This perfectly matches the intuition that the agent
must be more conservative, and thus, suffers higher regret for smaller values of \alpha. Theorem 5 also
indicates that CLUCB's regret is smaller for smaller values of \Delta_h, because when the baseline policy
\pi_b is close to optimal, the algorithm does not lose much by being conservative.
Algorithm 2 CLUCB2
Input: \alpha, r_l, \mathcal{B}, \mathcal{F}
Initialize: n \leftarrow 0, z \leftarrow 0, w \leftarrow 0, v \leftarrow 0 and C_1 \leftarrow \mathcal{B}
for t = 1, 2, 3, \ldots do
    Let b_t be the action suggested by \pi_b at round t
    Find (a'_t, \tilde\theta) = \arg\max_{(a,\theta) \in A_t \times C_t} \langle \theta, \phi_t^a \rangle
    Find R_t = \max_{\theta \in C_t} \langle \theta, v + \phi_t^{b_t} \rangle and L_t = \min_{\theta \in C_t} \langle \theta, z + \phi_t^{a'_t} \rangle + \alpha \max\big( \min_{\theta \in C_t} \langle \theta, w \rangle, n r_l \big)
    if L_t \ge (1 - \alpha) R_t then
        Play a_t = a'_t and observe y_t defined by (1)
        Set z \leftarrow z + \phi_t^{a'_t} and v \leftarrow v + \phi_t^{b_t}
    else
        Play a_t = b_t and observe y_t defined by (1)
        Set w \leftarrow w + \phi_t^{b_t} and n \leftarrow n + 1
    end if
    Given a_t and y_t, construct the confidence set C_{t+1} according to (15)
end for
4 Unknown Baseline Reward
In this section, we consider the case where the expected rewards of the actions taken by the baseline
policy, r_{b_t}^t, are unknown at the beginning. We show how the CLUCB algorithm presented in Section 3
should be changed to handle this case, and present a new algorithm, called CLUCB2. We prove a
regret bound for CLUCB2, which is at the same rate as that for CLUCB. This shows that the lack of
knowledge about the reward function of the baseline policy does not hurt our algorithm in terms of
the rate of the regret. The pseudocode of CLUCB2 is shown in Algorithm 2. The main difference
with CLUCB is in the condition that should be checked at each round t to see whether we should
play the optimistic action a0t or the conservative action bt . This condition should be selected in a way
that CLUCB2 satisfies constraint (3). We may rewrite (3) as
    \sum_{i \in S_{t-1}} r_{a_i}^i + r_{a'_t}^t + \sum_{i \in S_{t-1}^c} r_{b_i}^i \ge (1 - \alpha) \Big( r_{b_t}^t + \sum_{i=1}^{t-1} r_{b_i}^i \Big).    (12)
If we lower-bound the LHS and upper-bound the RHS of (12), we obtain
    \min_{\theta \in C_t} \Big\langle \theta, \sum_{i \in S_{t-1}} \phi_i^{a_i} + \phi_t^{a'_t} \Big\rangle + \alpha \min_{\theta \in C_t} \Big\langle \theta, \sum_{i \in S_{t-1}^c} \phi_i^{b_i} \Big\rangle \ge (1 - \alpha) \max_{\theta \in C_t} \Big\langle \theta, \sum_{i \in S_{t-1}} \phi_i^{b_i} + \phi_t^{b_t} \Big\rangle.    (13)
Since each confidence set C_t is built in a way to contain the true parameter \theta_* with high probability,
it is easy to see that (12) is satisfied whenever (13) is true.
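Both sides of (13), as well as R_t and L_t in Algorithm 2, only require maximizing or minimizing a linear function \langle \theta, x \rangle over an ellipsoidal confidence set, which has the closed form \langle \hat\theta, x \rangle \pm \beta \|x\|_{V^{-1}} by the Cauchy-Schwarz inequality in the V-norm. A small sketch (ours):

```python
import numpy as np

def linear_extremes(theta_hat, V, beta, x):
    """Closed-form max and min of <theta, x> over the ellipsoid
    {theta : ||theta - theta_hat||_V <= beta}: <theta_hat, x> +/- beta * ||x||_{V^{-1}}.
    The extremizers are theta_hat +/- beta * V^{-1} x / ||x||_{V^{-1}}."""
    w = np.linalg.solve(V, x)            # V^{-1} x
    spread = beta * np.sqrt(x @ w)       # beta * ||x||_{V^{-1}}
    return x @ theta_hat + spread, x @ theta_hat - spread

theta_hat = np.array([0.5, 0.0])
V = np.diag([4.0, 1.0])
hi, lo = linear_extremes(theta_hat, V, beta=1.0, x=np.array([2.0, 0.0]))
print(hi, lo)   # 2.0 0.0, since ||x||_{V^{-1}} = 2 / sqrt(4) = 1
```

Because these extremes are available in closed form, each round of Algorithm 2 costs one linear solve per evaluated direction rather than an explicit optimization over C_t.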
CLUCB2 uses both optimistic and conservative actions, and their corresponding rewards, in building
its confidence sets. Specifically, for any t, we let \Phi_t = [\phi_1^{a_1}, \phi_2^{a_2}, \ldots, \phi_t^{a_t}], Y_t = [y_1, y_2, \ldots, y_t]^\top,
V_t = \lambda I + \Phi_t \Phi_t^\top, and define the least-squares estimate after round t as
    \hat\theta_t = (\Phi_t \Phi_t^\top + \lambda I)^{-1} \Phi_t Y_t.    (14)
Given V_t and \hat\theta_t, the confidence set for round t + 1 is constructed as
    C_{t+1} = \{\theta \in C_t : \|\theta - \hat\theta_t\|_{V_t} \le \beta_{t+1}\},    (15)
where C_1 = \mathcal{B} and \beta_t = R \sqrt{d \log\big(\frac{1 + t D^2 / \lambda}{\delta}\big)} + B \sqrt{\lambda}. Similar to Proposition 1, we can easily
prove that the confidence sets built by (15) contain the true parameter \theta_* with high probability,
i.e., P(\theta_* \in C_t, \forall t \in \mathbb{N}) \ge 1 - \delta.
Remark 3. Note that unlike the CLUCB algorithm, here we build nested confidence sets, i.e.,
\cdots \subseteq C_{t+1} \subseteq C_t \subseteq C_{t-1} \subseteq \cdots, which is necessary for the proof of the algorithm. This can potentially
increase the computational complexity of CLUCB2, but from a practical point of view, the confidence
sets become nested automatically after sufficient data has been observed. Therefore, the nested
constraint in building the confidence sets can be relaxed after a sufficiently large number of rounds.
Figure 1: Average per-step regret (over 1,000 runs) of LUCB and CLUCB for different values of \alpha.
The following theorem guarantees that CLUCB2 satisfies the safety constraint (3) with high probability, while its regret has the same rate as that of CLUCB and is worse than that of LUCB only up to an
additive constant.
Theorem 6 Let \lambda \ge \max(1, D^2) and \delta \le 2/e. Then, with probability at least 1 - \delta, the CLUCB2
algorithm satisfies the performance constraint (3) for all t \in \mathbb{N}, and has the regret bound
    R_T(\text{CLUCB2}) = O\Big( d \log\Big(\frac{D T}{\delta \sqrt{\lambda}}\Big) \sqrt{T} + \frac{K \Delta_h}{\alpha^2 r_l^2} \Big),    (16)
where K is a constant that depends only on the parameters of the problem as
    K = 256 d^2 (B\sqrt{\lambda} + R)^2 \Big[ \log\Big( \frac{10 d (B\sqrt{\lambda} + R)}{\alpha r_l \delta^{1/4}} \Big) \Big]^2 + 1.
We report the proof of Theorem 6 in Appendix B. The proof follows the same steps as that of
Theorem 5, with additional non-trivial technicalities that have been highlighted there.
5 Simulation Results
In this section, we provide simulation results to illustrate the performance of the proposed CLUCB
algorithm. We considered a time-independent action set of 100 arms, each having a time-independent
feature vector living in \mathbb{R}^4. These feature vectors and the parameter \theta_* are randomly drawn
from N(0, I_4) such that the mean reward associated with each arm is positive. The observation noise at
each time step is also generated independently from N(0, 1), and the mean reward of the baseline
policy at any time is taken to be the reward associated with the 10th best action. We have taken
\lambda = 1, \delta = 0.001, and the results are averaged over 1,000 realizations.
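The instance generation can be reconstructed as follows (our sketch; the paper does not specify how positivity of the mean rewards is enforced, so here each arm's feature vector is simply redrawn until its mean reward is positive):

```python
import numpy as np

def make_instance(n_arms=100, d=4, seed=0):
    """Build a problem instance like the one described above (our reconstruction).
    theta* and the features are drawn from N(0, I_d); a per-arm rejection step
    keeps every mean reward <theta*, phi> strictly positive (our choice)."""
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d)
    phis = np.empty((n_arms, d))
    for a in range(n_arms):
        phi = rng.standard_normal(d)
        while phi @ theta_star <= 0:       # redraw until the mean reward is positive
            phi = rng.standard_normal(d)
        phis[a] = phi
    rewards = phis @ theta_star
    baseline = int(np.argsort(rewards)[-10])   # arm with the 10th-best mean reward
    return theta_star, phis, baseline

theta_star, phis, b = make_instance()
```

The returned `baseline` index plays the role of b_t in the experiments: its mean reward sits just below the top arms, so the conservative phase is neither trivial nor hopeless.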
In Figure 1, we plot the per-step regret (i.e., R_t / t) of LUCB and CLUCB for different values of \alpha over
a horizon T = 40,000. Figure 1 shows that the per-step regret of CLUCB remains constant at the
beginning (the conservative phase). This is because during this phase, CLUCB follows the baseline
policy to make sure that the performance constraint (3) is satisfied. As expected, the length of the
conservative phase decreases as \alpha is increased, since the performance constraint is relaxed for larger
values of \alpha, and hence, CLUCB starts playing optimistic actions more quickly. After this initial
conservative phase, CLUCB has learned enough about the optimal action and its performance starts
converging to that of LUCB. On the other hand, Figure 1 shows that the per-step regret of CLUCB in the
first few periods remains much lower than that of LUCB. This is because LUCB plays agnostic to the
safety constraint, and thus, may select very poor actions in its initial exploration phase. In this regard,
Figure 2(a) plots the percentage of the rounds, in the first 1,000 rounds, at which the safety
constraint (3) is violated by LUCB and CLUCB for different values of \alpha. According to this figure,
Figure 2: (a) Percentage of the rounds, in the first 1,000 rounds, at which the safety constraint is
violated by LUCB and CLUCB for different values of \alpha. (b) Per-step regret of LUCB and CLUCB
for different values of \alpha, at round t = 40,000.
CLUCB satisfies the performance constraint for all values of \alpha, while LUCB fails in a significant
number of rounds, especially for small values of \alpha (i.e., a tight constraint).
To better illustrate the effect of the performance constraint (3) on the regret of the algorithms,
Figure 2(b) plots the per-step regret achieved by CLUCB at round t = 40,000 for different values of
\alpha, as well as that of LUCB. As expected from our analysis and as shown in Figure 1, the performance
of CLUCB converges to that of LUCB after an initial conservative phase. Figure 2(b) confirms that
the convergence happens more quickly for larger values of \alpha, where the constraint is more relaxed.
6 Conclusions
In this paper, we studied the concept of safety in contextual linear bandits to address the challenges that
arise in implementing such algorithms in practical situations such as personalized recommendation
systems. Most of the existing linear bandit algorithms, such as LUCB [Abbasi-Yadkori et al., 2011],
suffer from a large regret at their initial exploratory rounds. This unsafe behavior is not acceptable
in many practical situations, where having a reasonable performance at any time is necessary for a
learning algorithm to be considered reliable and to remain in production.
To guarantee safe learning, we formulated a conservative linear bandit problem, where the performance of the learning algorithm (measured in terms of its cumulative rewards) at any time is
constrained to be at least as good as a fraction of the performance of a baseline policy. We proposed
a conservative version of LUCB algorithm, called CLUCB, to solve this constrained problem, and
showed that it satisfies the safety constraint with high probability, while achieving a regret bound
equivalent to that of LUCB up to an additive time-independent constant. We designed two versions of
CLUCB that can be used depending on whether the reward function of the baseline policy is known or
unknown, and showed that in each case, CLUCB acts conservatively (i.e., plays the action suggested
by the baseline policy) only at a finite number of rounds, which depends on how suboptimal the
baseline policy is. We reported simulation results that support our analysis and show the performance
of the proposed CLUCB algorithm.
References
Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. In
Advances in Neural Information Processing Systems, pages 2312-2320, 2011.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem.
Machine Learning Journal, 47:235-256, 2002.
L. Bottou, J. Peters, J. Quinonero-Candela, D. Charles, D. Chickering, E. Portugaly, D. Ray, P. Simard,
and E. Snelson. Counterfactual reasoning and learning systems: The example of computational
advertising. Journal of Machine Learning Research, 14:3207-3260, 2013.
W. Chu, L. Li, L. Reyzin, and R. Schapire. Contextual bandits with linear payoff functions. In
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,
pages 208-214, 2011.
V. Dani, T. Hayes, and S. Kakade. Stochastic linear optimization under bandit feedback. In COLT,
pages 355-366, 2008.
M. Petrik, M. Ghavamzadeh, and Y. Chow. Safe policy improvement by minimizing robust baseline
regret. In Advances in Neural Information Processing Systems, pages 2298-2306, 2016.
P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations
Research, 35(2):395-411, 2010.
D. Russo and B. Van Roy. Learning to optimize via posterior sampling. Mathematics of Operations
Research, 39(4):1221-1243, 2014.
A. Swaminathan and T. Joachims. Batch learning from logged bandit feedback through counterfactual
risk minimization. Journal of Machine Learning Research, 16:1731-1755, 2015.
A. Swaminathan and T. Joachims. Counterfactual risk minimization: Learning from logged bandit
feedback. In Proceedings of The 32nd International Conference on Machine Learning, 2015.
P. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence off-policy evaluation. In
Proceedings of the Twenty-Ninth Conference on Artificial Intelligence, 2015.
P. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence policy improvement. In
Proceedings of the Thirty-Second International Conference on Machine Learning, pages 2380-2388, 2015.
Y. Wu, R. Shariff, T. Lattimore, and C. Szepesvári. Conservative bandits. In Proceedings of The 33rd
International Conference on Machine Learning, pages 1254-1262, 2016.
Variational Memory Addressing
in Generative Models
Jörg Bornschein Andriy Mnih Daniel Zoran Danilo J. Rezende
{bornschein, amnih, danielzoran, danilor}@google.com
DeepMind, London, UK
Abstract
Aiming to augment generative models with external memory, we interpret the
output of a memory module with stochastic addressing as a conditional mixture
distribution, where a read operation corresponds to sampling a discrete memory
address and retrieving the corresponding content from memory. This perspective
allows us to apply variational inference to memory addressing, which enables
effective training of the memory module by using the target information to guide
memory lookups. Stochastic addressing is particularly well-suited for generative
models as it naturally encourages multimodality which is a prominent aspect of
most high-dimensional datasets. Treating the chosen address as a latent variable
also allows us to quantify the amount of information gained with a memory lookup
and measure the contribution of the memory module to the generative process.
To illustrate the advantages of this approach we incorporate it into a variational
autoencoder and apply the resulting model to the task of generative few-shot
learning. The intuition behind this architecture is that the memory module can
pick a relevant template from memory and the continuous part of the model can
concentrate on modeling remaining variations. We demonstrate empirically that
our model is able to identify and access the relevant memory contents even with
hundreds of unseen Omniglot characters in memory.
1 Introduction
Recent years have seen rapid developments in generative modelling. Much of the progress was driven
by the use of powerful neural networks to parameterize conditional distributions composed to define
the generative process (e.g., VAEs [1, 2], GANs [3]). In the Variational Autoencoder (VAE) framework
for example, we typically define a generative model p(z), p? (x|z) and an approximate inference
model q (z|x). All conditional distributions are parameterized by multilayered perceptrons (MLPs)
which, in the simplest case, output the mean and the diagonal variance of a Normal distribution
given the conditioning variables. We then optimize a variational lower bound to learn the generative
model for x. Considering recent progress, we now have the theory and the tools to train powerful,
potentially non-factorial parametric conditional distributions p(x|y) that generalize well with respect
to x (normalizing flows [4], inverse autoregressive flows [5], etc.).
Another line of work which has been gaining popularity recently is memory augmented neural
networks [6, 7, 8]. In this family of models the network is augmented with a memory buffer which
allows read and write operations and is persistent in time. Such models usually handle input and output
to the memory buffer using differentiable ?soft? write/read operations to allow back-propagating
gradients during training.
Here we propose a memory-augmented generative model that uses a discrete latent variable a acting
as an address into the memory buffer M. This stochastic perspective allows us to introduce a
variational approximation over the addressing variable which takes advantage of target information
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Left: Sketch of typical SOTA generative latent variable model with memory. Red edges
indicate approximate inference distributions q(\cdot|\cdot). The KL(q\|p) cost to identify a specific memory
entry might be substantial, even though the cost of accessing a memory entry should be in the order
of log |M|. Middle & Right: We combine a top-level categorical distribution p(a) and a conditional
variational autoencoder with a Gaussian p(z|m).
when retrieving contents from memory during training. We compute the sampling distribution over
the addresses based on a learned similarity measure between the memory contents at each address
and the target. The memory contents ma at the selected address a serve as a context for a continuous
latent variable z, which together with ma is used to generate the target observation. We therefore
interpret memory as a non-parametric conditional mixture distribution. It is non-parametric in the
sense that we can change the content and the size of the memory from one evaluation of the model
to another without having to relearn the model parameters. And since the retrieved content ma
is dependent on the stochastic variable a, which is part of the generative model, we can directly
use it downstream to generate the observation x. These two properties set our model apart from
other work on VAEs with mixture priors [9, 10] aimed at unconditional density modelling. Another
distinguishing feature of our approach is that we perform sampling-based variational inference on the
mixing variable instead of integrating it out as is done in prior work, which is essential for scaling to
a large number of memory addresses.
Most existing memory-augmented generative models use soft attention with the weights dependent on
the continuous latent variable to access the memory. This does not provide clean separation between
inferring the address to access in memory and the latent factors of variation that account for the
variability of the observation relative to the memory contents (see Figure 1). Or, alternatively, when
the attention weights depend deterministically on the encoder, the retrieved memory content can not
be directly used in the decoder.
Our contributions in this paper are threefold: a) We interpret memory-read operations as conditional
mixture distribution and use amortized variational inference for training; b) demonstrate that we can
combine discrete memory addressing variables with continuous latent variables to build powerful
models for generative few-shot learning that scale gracefully with the number of items in memory;
and c) demonstrate that the KL divergence over the discrete variable a serves as a useful measure to
monitor memory usage during inference and training.
2 Model and Training
We will now describe the proposed model along with the variational inference procedure we use to
train it. The generative model has the form
p(x|M) = Σ_a p(a|M) ∫ p(z|m_a) p(x|z, m_a) dz        (1)
where x is the observation we wish to model, a is the addressing categorical latent variable, z the
continuous latent vector, M the memory buffer, and m_a the memory contents at the a-th address.
The generative process proceeds by first sampling an address a from the categorical distribution
p(a|M), retrieving the contents ma from the memory buffer M, and then sampling the observation
x from a conditional variational auto-encoder with ma as the context conditioned on (Figure 1, B).
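The generative process just described can be sketched in a few lines. Everything concrete here (the shapes, the decoder, a fixed uniform prior p(a), and using m_a itself as the mean for z) is our illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x(M, p_a, decode_mean, z_dim=2):
    """One draw from the generative process in Eq. (1):
    a ~ p(a|M), then retrieve m_a deterministically, then z ~ p(z|m_a)
    (simplified: a Normal centered on the first z_dim entries of m_a),
    then emit the decoder mean as a stand-in for sampling x."""
    a = rng.choice(len(M), p=p_a)              # sample a discrete address
    m_a = M[a]                                 # deterministic memory lookup
    z = m_a[:z_dim] + rng.normal(size=z_dim)   # z ~ N(mu(m_a), I), simplified
    return decode_mean(z, m_a), a

# toy memory with 4 entries of dimension 4 and a uniform prior p(a)
M = rng.normal(size=(4, 4))
p_a = np.full(4, 0.25)
x, a = sample_x(M, p_a, decode_mean=lambda z, m: np.concatenate([z, m]))
```

The point is only the control flow: a single categorical draw selects the template, and the continuous VAE machinery models the remaining variability around it.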
The intuition here is that if the memory buffer contains a set of templates, a trained model of this
type should be able to produce observations by distorting a template retrieved from a randomly
sampled memory location using the conditional variational autoencoder to account for the remaining
variability.
We can write the variational lower bound for the model in (1):

log p(x|M) ≥ E_{a,z∼q(·|M,x)} [log p(x, z, a|M) − log q(a, z|M, x)]        (2)

where q(a, z|M, x) = q(a|M, x) q(z|m_a, x).        (3)
In the rest of the paper, we omit the dependence on M for brevity. We will now describe the
components of the model and the variational posterior (3) in detail.
The first component of the model is the memory buffer M. We here do not implement an explicit write
operation but consider two possible sources for the memory content: Learned memory: In generative
experiments aimed at better understanding the model's behaviour we treat M as model parameters.
That is we initialize M randomly and update its values using the gradient of the objective. Few-shot
learning: In the generative few-shot learning experiments, before processing each minibatch, we
sample |M| entries from the training data and store them in their raw (pixel) form in M. We ensure
that the training minibatch {x1 , ..., x|B| } contains disjoint samples from the same character classes,
so that the model can use M to find suitable templates for each target x.
The second component is the addressing variable a ∈ {1, ..., |M|} which selects a memory entry
m_a from the memory buffer M. The variational posterior distribution q(a|x) is parameterized as a
softmax over a similarity measure between x and each of the memory entries m_a:

q_φ(a|x) ∝ exp S^q_φ(m_a, x),        (4)

where S^q_φ(x, y) is a learned similarity function described in more detail below.
Given a sample a from the posterior q_φ(a|x), retrieving m_a from M is a purely deterministic
operation. Sampling from q(a|x) is easy as it amounts to computing its value for each slot in memory
and sampling from the resulting categorical distribution. Given a, we can compute the probability
of drawing that address under the prior p(a). We here use a learned prior p(a) that shares some
parameters with q(a|x).
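Sampling from q(a|x) as described above amounts to a softmax over per-slot scores followed by one categorical draw. A minimal sketch, where the scores stand in for the learned similarities S(m_a, x):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_address(scores):
    """Sample a ~ q(a|x) ∝ exp S(m_a, x): softmax the similarity
    scores over all |M| slots, then draw one categorical sample."""
    scores = scores - scores.max()               # stabilize the softmax
    q = np.exp(scores) / np.exp(scores).sum()
    a = rng.choice(len(scores), p=q)
    return a, q

scores = np.array([0.1, 2.0, -1.0, 0.5])         # S(m_a, x) for |M| = 4 slots
a, q = sample_address(scores)
```

Evaluating p(a) at the sampled address (needed for the KL term) is then just an index into the prior's probability vector.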
Similarity functions: To obtain an efficient implementation for mini-batch training we use the same
memory content M for all training examples in a mini-batch and choose a specific form for the
similarity function. We parameterize S^q_φ(m, x) with two MLPs: h_φ, which embeds the memory content
into the matching space, and h^q_φ, which does the same to the query x. The similarity is then computed as
the inner product of the embeddings, normalized by the norm of the memory content embedding:
S^q_φ(m_a, x) = ⟨e_a, e^q⟩ / ||e_a||_2,        (5)

where e_a = h_φ(m_a),  e^q = h^q_φ(x).        (6)
This form allows us to compute the similarities between the embeddings of a mini-batch of |B|
observations and |M| memory entries at the computational cost of O(|M||B||e|), where |e| is the
dimensionality of the embedding. We also experimented with several alternative similarity functions,
such as the plain inner product ⟨e_a, e^q⟩ and the cosine similarity ⟨e_a, e^q⟩/(||e_a|| · ||e^q||), and found that
they did not outperform the above similarity function. For the unconditional prior p(a), we learn a
query point e^p ∈ R^{|e|} to use in similarity function (5) in place of e^q. We share h_φ between p(a) and
q(a|x). Using a trainable p(a) allows the model to learn that some memory entries are more useful
for generating new targets than others. Control experiments showed that there is only a very small
degradation in performance when we assume a flat prior p(a) = 1/|M|.
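The batched similarity computation of Eq. (5) is a single matrix product once the embeddings are available; a sketch with toy shapes of our choosing:

```python
import numpy as np

def similarities(E_mem, E_query):
    """S[a, b] = <e_a, e^q_b> / ||e_a||_2 for all |M| memory embeddings
    against |B| query embeddings at once; cost O(|M| |B| |e|) as stated
    in the text."""
    norms = np.linalg.norm(E_mem, axis=1, keepdims=True)    # (|M|, 1)
    return (E_mem / norms) @ E_query.T                      # (|M|, |B|)

rng = np.random.default_rng(2)
E_mem = rng.normal(size=(8, 5))     # h(m_a) for 8 slots, |e| = 5
E_query = rng.normal(size=(3, 5))   # h^q(x) for a mini-batch of 3
S = similarities(E_mem, E_query)
```

Because the memory embeddings are shared across the mini-batch, the normalization is computed once per slot and amortized over all queries.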
2.1 Gradients and Training
For the continuous variable z we use the methods developed in the context of variational autoencoders [1]. We use a conditional Gaussian prior p(z|m_a) and an approximate conditional posterior
q(z|x, m_a). However, since we have a discrete latent variable a in the model, we cannot simply backpropagate gradients through it. Here we show how to use VIMCO [11] to estimate the gradients for
this model. With VIMCO, we essentially optimize the multi-sample variational bound [12, 13, 11]:

log p(x) ≥ E_{a^(k)∼q(a|x), z^(k)∼q(z|m_a,x)} [ log (1/K) Σ_{k=1}^{K} p(x, m_{a^(k)}, z^(k)) / q(a^(k), z^(k)|x) ] = L        (7)
Multiple samples from the posterior enable VIMCO to estimate low-variance gradients for those parameters of the model which influence the non-differentiable discrete variable a. The corresponding
gradient estimates are:
∇_θ L ≈ Σ_{a^(k), z^(k) ∼ q(·|x)}  ω̂^(k) [ ∇_θ log p_θ(x, a^(k), z^(k)) − ∇_θ log q_θ(z|a, x) ]        (8)

∇_φ L ≈ Σ_{a^(k), z^(k) ∼ q(·|x)}  ω̃^(k) ∇_φ log q_φ(a^(k)|x)        (9)

with  ω^(k) = p(x, a^(k), z^(k)) / q(a^(k), z^(k)|x),   ω̂^(k) = ω^(k) / Σ_{k'} ω^(k'),   and

ω̃^(k) = log( (1/K) Σ_{k'} ω^(k') ) − log( (1/(K−1)) Σ_{k'≠k} ω^(k') ) − ω̂^(k).
For z-related gradients this is equivalent to IWAE [13]. Alternative gradient estimators for discrete
latent variable models (e.g. NVIL [14], RWS [12] or Gumbel-max relaxation-based approaches
[15, 16]) might work here too, but we have not investigated their effectiveness. Notice how the
gradients ∇ log p(x|z, a) provide updates for the memory contents m_a (if necessary), while the
gradients ∇ log p(a) and ∇ log q(a|x) provide updates for the embedding MLPs. The former update
the mixture components while the latter update their relative weights. The log-likelihood bound
(2) suggests that we can decompose the overall loss into three terms: the expected reconstruction
error E_{a,z∼q}[log p(x|a, z)], and the two KL terms which measure the information flow from the
approximate posterior to the generative model for our latent variables: KL(q(a|x) || p(a)) and
E_{a∼q}[KL(q(z|a, x) || p(z|a))].
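The per-sample VIMCO learning signals used for the discrete variable can be sketched as follows: each sample's signal compares the K-sample bound against a baseline whose k-th log-weight is replaced by the mean of the other log-weights (the log of their geometric mean). This is a numerically naive sketch (no log-sum-exp stabilization), not the authors' implementation:

```python
import numpy as np

def vimco_signals(log_w):
    """Per-sample learning signals for the K-sample bound of Eq. (7):
    for each k, compare the full estimate L_hat against a baseline that
    swaps log w^(k) for the mean of the remaining K-1 log-weights."""
    K = len(log_w)
    L_hat = np.log(np.mean(np.exp(log_w)))           # multi-sample bound
    signals = np.empty(K)
    for k in range(K):
        others = np.delete(log_w, k)
        baseline = np.append(others, others.mean())  # leave-one-out swap
        signals[k] = L_hat - np.log(np.mean(np.exp(baseline)))
    return signals, L_hat

log_w = np.array([-1.0, -2.0, -0.5, -3.0])           # log w^(k), K = 4
signals, L_hat = vimco_signals(log_w)
```

Samples with above-average importance weights receive positive signals and below-average ones negative, which is what keeps the variance of the q(a|x) gradient low without a learned baseline.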
3 Related work
Attention and external memory are two closely related techniques that have recently become important
building blocks for neural models. Attention has been widely used for supervised learning tasks such
as translation, image classification and image captioning. External memory can be seen as an input or
an internal state and attention mechanisms can either be used for selective reading or incremental
updating. While most work involving memory and attention has been done in the context of supervised
learning, here we are interested in using them effectively in the generative setting.
In [17] the authors use soft-attention with learned memory contents to augment models to have
more parameters in the generative model. External memory as a way of implementing one-shot
generalization was introduced in [18]. This was achieved by treating the exemplars conditioned
on as memory entries accessed through a soft attention mechanism at each step of the incremental
generative process similar to the one in DRAW [19]. Generative Matching Networks [20] are a similar
architecture which uses a single-step VAE generative process instead of an iterative DRAW-like
one. In both cases, soft attention is used to access the exemplar memory, with the address weights
computed based on a learned similarity function between an observation at the address and a function
of the latent state of the generative model.
In contrast to this kind of deterministic soft addressing, we use hard attention, which stochastically
picks a single memory entry and thus might be more appropriate in the few-shot setting. As the
memory location is stochastic in our model, we perform variational inference over it, which has not
been done for memory addressing in a generative model before. A similar approach has however
been used for training stochastic attention for image captioning [21]. In the context of memory,
hard attention has been used in RLNTM, a version of the Neural Turing Machine modified to use
stochastic hard addressing [22]. However, RLNTM has been trained using REINFORCE rather
than variational inference. A number of architectures for VAEs augmented with mixture priors have
Figure 2: A: Typical learning curve when training a model to recall MNIST digits (M ← training
data (each step); x ∈ M; |M| = 256): In the beginning the continuous latent variables model most
of the variability of the data; after ≈ 100k update steps the stochastic memory component takes
over and both the NLL bound and the KL(q(a|x)||p(a)) estimate approach log(256), the NLL of
an optimal probabilistic lookup table. B: Randomly selected samples from the MNIST model with
learned memory: Samples within the same row use a common m_a.
been proposed, but they do not use the mixture component indicator variable to index memory and
integrate out the variable instead [9, 10], which prevents them from scaling to a large number of
mixing components.
An alternative approach to generative few-shot learning proposed in [23] uses a hierarchical VAE
to model a large number of small related datasets jointly. The statistical structure common to
observations in the same dataset are modelled by a continuous latent vector shared among all such
observations. Unlike our model, this model is not memory-based and does not use any form of
attention. Generative models with memory have also been proposed for sequence modelling in [24],
using differentiable soft addressing. Our approach to stochastic addressing is sufficiently general to
be applicable in this setting as well, and it would be interesting to see how it would perform as a plug-in
replacement for soft addressing.
4 Experiments
We optimize the parameters with Adam [25] and report experiments with the best results from
learning rates in {1e-4, 3e-4}. We use minibatches of size 32 and K=4 samples from the approximate
posterior q(?|x) to compute the gradients, the KL estimates, and the log-likelihood bounds. We keep
the architectures deliberately simple and do not use autoregressive connections or IAF [5] in our
models as we are primarily interested in the quantitative and qualitative behaviour of the memory
component.
4.1 MNIST with fully connected MLPs
We first perform a series of experiments on the binarized MNIST dataset [26]. We use 2-layer en- and
decoders with 256 and 128 hidden units with ReLU nonlinearities and a 32-dimensional Gaussian
latent variable z.
Train to recall: To investigate the model's capability to use its memory to its full extent, we consider
the case where it is trained to maximize the likelihood for random data points x which are present
in M. During inference, an optimal model would pick the template m_a that is equivalent to x with
probability q(a|x) = 1. The corresponding prior probability would be p(a) ≈ 1/|M|. Because there
are no further variations that need to be modeled by z, its posterior q(z|x, m) can match the prior
p(z|m), yielding a KL cost of zero. The expected log-likelihood of the model would be −log |M|, equal
to the log-likelihood of an optimal probabilistic lookup table. Figure 2A illustrates that our model
converges to the optimal solution. We observed that the time to convergence depends on the size
of the memory, and with |M| > 512 the model sometimes fails to find the optimal solution. It is
noteworthy that the trained model from Figure 2A can handle much larger memory sizes at test time,
e.g. achieving NLL ≈ log(2048) given 2048 test-set images in memory. This indicates that the
matching MLPs for q(a|x) are sufficiently discriminative.
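The lookup-table floor quoted above is simply log |M| nats, since the best any model can do on uniformly drawn stored templates is place probability 1/|M| on each; a quick check of the two values mentioned in the text:

```python
import math

# Optimal probabilistic lookup table over |M| equally likely templates:
# p(x) = 1/|M| for each stored x, so NLL = -log(1/|M|) = log |M|.
nll_256 = math.log(256)    # the training-time floor in Figure 2A
nll_2048 = math.log(2048)  # the test-time value quoted in the text
```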
Figure 3: Approximate inference with q(a|x): Histogram and corresponding top-5 entries m_a for
two randomly selected targets. M contains 10 examples from 8 unseen test-set character classes.
Figure 4: A: Generative one-shot sampling: The leftmost column is the test-set example provided in
M; the remaining columns show randomly selected samples from p(x|M). The model was trained
with 4 examples from 8 classes each per gradient step. B: Breakdown of the KL cost for different
models trained with varying number of examples per class in memory. KL(q(a|x)||p(a)) increases
from 2.0 to 4.5 nats as KL(q(z|m_a, x)||p(z|m_a)) decreases from 28.2 to 21.8 nats. As the number
of examples per class increases, the model shifts the responsibility for modeling the data from the
continuous variable z to the discrete a. The overall test-set NLL for the different models improves
from 75.1 to 69.1 nats.
Learned memory: We train models with |M| ∈ {64, 128, 256, 512, 1024} randomly initialized mixture components (m_a ∈ R^256). After training, all models converged to an average
KL(q(a|x)||p(a)) ≈ 2.5 ± 0.3 nats over both the training and the test set, suggesting that the
model identified between e^2.2 ≈ 9 and e^2.8 ≈ 16 clusters in the data that are represented by a. The
entropy of p(a) is significantly higher, indicating that multiple m_a are used to represent the same data
model overfits slightly more to the training set, we do generally not observe a big difference between
our model and the corresponding baseline VAE (a VAE with the same architecture, but without the
top level mixture distribution) in terms of the final NLL. This is probably not surprising, because
MNIST provides many training examples describing a relatively simple data manifold. Figure 2B
shows samples from the model.
4.2 Omniglot with convolutional MLPs
To apply the model to a more challenging dataset and to use it for generative few-shot learning, we
train it on various versions of the Omniglot [27] dataset. For these experiments we use convolutional
en- and decoders: The approximate posterior q(z|m, x) takes the concatenation of x and m as input
and predicts the mean and variance for the 64-dimensional z. It consists of 6 convolutional layers
with 3 × 3 kernels and 48 or 64 feature maps each. Every second layer uses a stride of 2 to get an
overall downsampling of 8 × 8. The convolutional pyramid is followed by a fully-connected MLP
with 1 hidden layer and 2|z| output units. The architecture of p(x|m, z) uses the same downscaling
pyramid to map m to a |z|-dimensional vector, which is concatenated with z and upscaled with
transposed convolutions to the full image size again. We use skip connections from the downscaling
layers of m to the corresponding upscaling layers to preserve a high bandwidth path from m to x.
To reduce overfitting, given the relatively small size of the Omniglot dataset, we tie the parameters
of the convolutional downscaling layers in q(z|m) and p(x|m, z). The embedding MLPs for p(a)
and q(a|x) use the same convolutional architecture and map images x and memory content ma into
Figure 5: Robustness to increasing memory size at test-time: A: Varying the number of confounding
memory entries: At test-time we vary the number of classes in M. For an optimal model of disjoint
data from C classes we expect L = average L per class + log C (dashed lines). The model was
trained with 4 examples from 8 character classes in memory per gradient step. We also show our best
soft-attention baseline model which was trained with 16 examples from two classes each gradient
step. B: Memory contains examples from all 144 test-set character classes and we vary the number
of examples per class. At C=0 we show the LL of our best unconditioned baseline VAE. The models
were trained with 8 character classes and {1, 4, 8} examples per class in memory.
a 128-dimensional matching space for the similarity calculations. We left their parameters untied
because we did not observe any improvement nor degradation of performance when tying them.
With learned memory: We run experiments on the 28 × 28 pixel version of Omniglot which
was introduced in [13]. The dataset contains 24,345 unlabeled examples in the training set, and 8,070
examples in the test set from 1623 different character classes. The goal of this experiment is to show
that our architecture can learn to use the top-level memory to model highly multi-modal input data.
We run experiments with up to 2048 randomly initialized mixture components and observe that the
model makes substantial use of them: The average KL(q(a|x)||p(a)) typically approaches log |M|,
while KL(q(z|·)||p(z|·)) and the overall training-set NLL are significantly lower compared to the
corresponding baseline VAE. However, big models without regularization tend to overfit heavily (e.g.
training-set NLL < 80 nats; test-set NLL > 150 nats when using |M| = 2048). By constraining the
model size (|M| = 256, convolutions with 32 feature maps) and adding 3e-4 L2 weight decay to all
parameters with the exception of M, we obtain a model with a test-set NLL of 103.6 nats (evaluated
with K = 5000 samples from the posterior), which is about the same as a two-layer IWAE and slightly
worse than the best RBMs (103.4 and ≈ 100 respectively, [13]).
Few-shot learning: The 28 × 28 pixel version [13] of Omniglot does not contain any alphabet or
character-class labels. For few-shot learning we therefore start from the original dataset [27] and
scale the 104 × 104 pixel examples with 4 × 4 max-pooling to 26 × 26 pixels. We here use the
45/5 split introduced in [18] because we are mostly interested in the quantitative behaviour of the
memory component, and not so much in finding optimal regularization hyperparameters to maximize
performance on small datasets. For each gradient step, we sample 8 random character-classes from
random alphabets. From each character-class we sample 4 examples and use them as targets x to form
a minibatch of size 32. Depending on the experiment, we select a certain number of the remaining
examples from the same character classes to populate M. We chose 8 character-classes and 4
examples per class for computational convenience (to obtain reasonable minibatch and memory sizes).
In control experiments with 32 character classes per minibatch we obtain almost indistinguishable
learning dynamics and results.
To establish meaningful baselines, we train additional models with identical encoder and decoder
architectures: 1) A simple, unconditioned VAE. 2) A memory-augmented generative model with
soft-attention. Because the soft-attention weights have to depend solely on the variables in the
generative model and may not take input directly from the encoder, we have to use z as the top-level
latent variable: p(z), p(x|z, m(z)) and q(z|x). The overall structure of this model resembles the
structure of prior work on memory-augmented generative models (see section 3 and Figure 1A), and
is very similar to the one used in [20], for example.
For the unconditioned baseline VAE we obtain a NLL of 90.8, while our memory augmented model
reaches up to 68.8 nats. Figure 5 shows the scaling properties of our model when varying the
number of conditioning examples at test-time. We observe only minimal degradation compared
Model                           Ctest     1     2     3     4     5    10    19
Generative Matching Nets            1  83.3  78.9  75.7  72.9  70.1  59.9  45.8
Generative Matching Nets            2  86.4  84.9  82.4  81.0  78.8  71.4  61.2
Generative Matching Nets            4  88.3  87.3  86.7  85.4  84.0  80.2  73.7
Variational Memory Addressing       1  86.5  83.0  79.6  79.0  76.5  76.2  73.9
Variational Memory Addressing       2  87.2  83.3  80.9  79.3  79.1  77.0  75.0
Variational Memory Addressing       4  87.5  83.3  81.2  80.7  79.5  78.6  76.7
Variational Memory Addressing      16  89.6  85.1  81.5  81.9  81.3  79.8  77.0
Table 1: Our model compared to Generative Matching Networks [20]: GMNs have an extra stage
that computes joint statistics over the memory context, which gives the model a clear advantage when
multiple conditioning examples per class are available. But with increasing number of classes C it
quickly degrades. LL bounds were evaluated with K = 1000 posterior samples.
to a theoretically optimal model when we increase the number of concurrent character classes in
memory up to 144, indicating that memory readout works reliably with |M| ≥ 2500 items in memory.
The soft-attention baseline model reaches up to 73.4 nats when M contains 16 examples from 1
or 2 character-classes, but degrades rapidly with increasing number of confounding classes (see
Figure 5A). Figure 3 shows histograms and samples from q(a|x), visually confirming that our model
performs reliable approximate inference over the memory locations.
We also train a model on the Omniglot dataset used in [20]. This split provides a relatively small
training set. We reduce the number of feature channels and hidden layers in our MLPs and add 3e-4
L2 weight decay to all parameters to reduce overfitting. The model in [20] has a clear advantage
when many examples from very few character classes are in memory because it was specifically
designed to extract joint statistics from memory before applying the soft-attention readout. But like
our own soft-attention baseline, it quickly degrades as the number of concurrent classes in memory is
increased to 4 (table 1).
Few-shot classification: Although this is not the main aim of this paper, we can use the trained
model to perform discriminative few-shot classification: We can estimate
p(c|x) ≈ Σ_{m_a has label c} E_{z∼q(z|a,x)} [p(x, z, m_a)/p(x)], or use the feed-forward approximation
p(c|x) ≈ Σ_{m_a has label c} q(a|x). Without any further retraining or fine-tuning we obtain classification accuracies
of 91%, 97%, 77% and 90% for 5-way 1-shot, 5-way 5-shot, 20-way 1-shot and 20-way 5-shot
respectively with q(a|x).
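The feed-forward classification rule, p(c|x) ≈ Σ over memory slots whose content carries label c of q(a|x), is a simple sum over the address posterior. A sketch with made-up numbers:

```python
import numpy as np

def classify(q_a, labels, classes):
    """Few-shot classification from the address posterior:
    score each class c by summing q(a|x) over slots whose m_a has
    label c, then predict the highest-scoring class."""
    scores = {c: float(q_a[labels == c].sum()) for c in classes}
    return max(scores, key=scores.get), scores

q_a = np.array([0.05, 0.40, 0.10, 0.30, 0.15])   # posterior over 5 slots
labels = np.array([0, 1, 0, 1, 2])               # class label of each m_a
pred, scores = classify(q_a, labels, classes=[0, 1, 2])
```

Because q(a|x) already sums to one, the class scores form a proper distribution over classes without any renormalization.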
5 Conclusions
In our experiments we generally observe that the proposed model is very well behaved: we never
used temperature annealing for the categorical softmax or other tricks to encourage the model to
use memory. The interplay between p(a) and q(a|x) maintains exploration (high entropy) during
the early phase of training and decreases naturally as the sampled ma become more informative.
The KL divergences for the continuous and discrete latent variables show intuitively interpretable
results for all our experiments: On the densely sampled MNIST dataset only a few distinctive mixture
components are identified, while on the more disjoint and sparsely sampled Omniglot dataset the
model chooses to use many more memory entries and uses the continuous latent variables less. By
interpreting memory addressing as a stochastic operation, we gain the ability to apply a variational
approximation which helps the model to perform precise memory lookups during inference and
training. Compared to soft-attention approaches, we lose the ability to naively backprop through
read-operations and we have to use approximations like VIMCO. However, our experiments strongly
suggest that this can be a worthwhile trade-off. Our experiments also show that the proposed
variational approximation is robust to increasing memory sizes: A model trained with 32 items in
memory performed nearly optimally with more than 2500 items in memory at test-time. Beginning
with |M| ≥ 48 our hard-attention implementation becomes noticeably faster in terms of wall-clock
time per parameter update than the corresponding soft-attention baseline, even though we use K = 4
posterior samples during training while the soft-attention baseline only requires a single one.
Acknowledgments
We thank our colleagues at DeepMind and especially Oriol Vinyals and Sergey Bartunov for insightful
discussions.
References
[1] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint
arXiv:1312.6114, 2013.
[2] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation
and approximate inference in deep generative models. In Proceedings of The 31st International
Conference on Machine Learning, pages 1278?1286, 2014.
[3] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural
information processing systems, pages 2672?2680, 2014.
[4] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows.
arXiv preprint arXiv:1505.05770, 2015.
[5] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with
inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[6] Sreerupa Das, C. Lee Giles, and Guo zheng Sun. Learning context-free grammars: Capabilities
and limitations of a recurrent neural network with an external stack memory. In In Proceedings
of the Fourteenth Annual Conference of the Cognitive Science Society, pages 791?795. Morgan
Kaufmann Publishers, 1992.
[7] Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. End-to-end memory
networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,
Advances in Neural Information Processing Systems 28, pages 2440?2448. Curran Associates,
Inc., 2015.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,
et al. Hybrid computing using a neural network with dynamic external memory. Nature,
538(7626):471–476, 2016.
[9] Nat Dilokthanakul, Pedro AM Mediano, Marta Garnelo, Matthew CH Lee, Hugh Salimbeni,
Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture
variational autoencoders. arXiv preprint arXiv:1611.02648, 2016.
[10] Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate inference for deep latent gaussian
mixtures. In NIPS Workshop on Bayesian Deep Learning, 2016.
[11] Andriy Mnih and Danilo J Rezende. Variational inference for monte carlo objectives. arXiv
preprint arXiv:1602.06725, 2016.
[12] J?rg Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751,
2014.
[13] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders.
arXiv preprint arXiv:1509.00519, 2015.
[14] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks.
arXiv preprint arXiv:1402.0030, 2014.
[15] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous
relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax.
stat, 1050:1, 2017.
[17] Chongxuan Li, Jun Zhu, and Bo Zhang. Learning to generate with memory. In International
Conference on Machine Learning, pages 1177?1186, 2016.
[18] Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra.
One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
[19] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning, 2015.
[20] Sergey Bartunov and Dmitry P Vetrov. Fast adaptation in generative models with generative
matching networks. arXiv preprint arXiv:1612.02192, 2016.
[21] Jimmy Ba, Ruslan R Salakhutdinov, Roger B Grosse, and Brendan J Frey. Learning wake-sleep
recurrent attention models. In Advances in Neural Information Processing Systems, pages
2593?2601, 2015.
[22] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv
preprint arXiv:1505.00521, 362, 2015.
[23] Harrison Edwards and Amos Storkey. Towards a neural statistician. In International Conference on Learning Representations, 2017.
[24] Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir Mohamed, Danilo J
Rezende, David Amos, and Timothy Lillicrap. Generative temporal models with memory. arXiv
preprint arXiv:1702.04649, 2017.
[25] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[26] Hugo Larochelle. Binarized MNIST dataset. http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/binarized_mnist_[train|valid|test].amat, 2011.
[27] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept
learning through probabilistic program induction. Science, 350(6266):1332?1338, 2015.
powerful:3 turing:2 fourteenth:1 place:1 family:1 reasonable:1 almost:1 separation:1 lake:1 draw:3 scaling:3 bound:7 layer:8 followed:1 courville:1 sleep:2 annual:1 alex:1 untied:1 flat:1 aspect:1 relatively:3 sreerupa:1 slightly:2 character:16 rob:1 intuitively:1 bing:1 describing:1 loose:1 mechanism:2 serf:1 end:2 amnih:1 available:1 operation:8 apply:4 observe:5 hierarchical:1 worthwhile:1 appropriate:1 salimans:1 batch:3 alternative:3 robustness:1 jang:1 original:1 top:5 remaining:4 ensure:1 clustering:1 tiago:1 concatenated:1 build:1 establish:1 especially:1 society:1 murray:1 gregor:2 objective:2 parametric:3 degrades:3 dependence:1 diagonal:1 gradient:16 hq:2 thank:1 reinforce:1 concatenation:1 decoder:4 gracefully:1 hea:3 chris:1 manifold:1 maddison:1 extent:1 induction:1 ozair:1 index:1 modeled:1 mini:3 grabskabarwi:1 downsampling:1 upscaled:1 mostly:1 potentially:1 ba:2 implementation:2 reliably:1 iaf:1 perform:6 teh:1 observation:10 convolution:2 datasets:4 daan:2 variability:3 precise:1 stack:1 brenden:1 introduced:3 david:2 r256:1 kl:15 trainable:1 connection:2 learned:9 kingma:3 nip:2 address:13 able:2 proceeds:1 usually:1 below:1 poole:1 reading:1 program:1 gaining:1 memory:115 max:4 reliable:1 belief:1 suitable:1 hybrid:1 indicator:1 zhu:1 categorical:6 jun:1 autoencoder:4 auto:2 danilor:1 extract:1 prior:11 understanding:1 l2:2 relative:2 graf:1 loss:1 fully:2 expect:1 interesting:1 limitation:1 generation:1 larocheh:1 chongxuan:1 integrate:1 editor:1 share:2 translation:1 row:1 free:1 populate:1 guide:1 allow:1 burda:1 karol:2 template:5 amat:1 curve:1 plain:1 valid:1 autoregressive:3 computes:1 author:1 forward:1 agnieszka:1 reinforcement:1 testset:4 welling:2 approximate:10 dmitry:1 keep:1 overfitting:2 discriminative:2 fergus:1 alternatively:1 agapiou:1 continuous:12 latent:20 iterative:1 table:4 nature:1 learn:4 channel:1 robust:1 ca:1 improving:1 investigated:1 garnett:1 da:1 did:2 main:1 multilayered:1 big:2 hyperparameters:1 x1:1 
augmented:8 xu:1 en:1 grosse:2 embeds:1 fails:1 inferring:1 deterministically:1 wish:1 explicit:1 ian:1 specific:2 insightful:1 experimented:1 decay:2 abadie:1 cortes:1 normalizing:2 chun:1 essential:1 naively:1 mnist:7 workshop:1 adding:1 effectively:1 gained:1 importance:1 nat:1 conditioned:2 illustrates:1 gumbel:2 suited:1 rg:2 backpropagate:1 entropy:2 timothy:1 simply:1 prevents:1 vinyals:1 bo:1 pedro:1 corresponds:1 ch:1 ma:32 minibatches:1 weston:1 conditional:11 slot:1 sized:2 goal:1 grefenstette:1 towards:1 shared:1 content:18 change:1 hard:4 typical:2 specifically:1 acting:1 degradation:3 meaningful:1 vaes:3 perceptrons:1 indicating:2 exception:1 select:1 internal:1 aaron:1 guo:1 latter:1 colmenarejo:1 brevity:1 oriol:1 incorporate:1 malcolm:1 hung:1 |
6,612 | 6,982 | On Tensor Train Rank Minimization: Statistical
Efficiency and Scalable Algorithm
Masaaki Imaizumi
Institute of Statistical Mathematics
RIKEN Center for Advanced Intelligence Project
[email protected]
Takanori Maehara
RIKEN Center for Advanced Intelligence Project
[email protected]
Kohei Hayashi
National Institute of Advanced Industrial Science and Technology
RIKEN Center for Advanced Intelligence Project
[email protected]
Abstract
Tensor train (TT) decomposition provides a space-efficient representation for
higher-order tensors. Despite its advantage, we face two crucial limitations when
we apply the TT decomposition to machine learning problems: the lack of statistical
theory and of scalable algorithms. In this paper, we address the limitations. First,
we introduce a convex relaxation of the TT decomposition problem and derive
its error bound for the tensor completion task. Next, we develop a randomized
optimization method, in which the time complexity is as efficient as the space
complexity is. In experiments, we numerically confirm the derived bounds and
empirically demonstrate the performance of our method with a real higher-order
tensor.
1 Introduction
Tensor decomposition is an essential tool for dealing with data represented as multidimensional arrays,
or simply, tensors. Through tensor decomposition, we can determine latent factors of an input tensor
in a low-dimensional multilinear space, which saves the storage cost and enables predicting missing
elements. Note that, a different multilinear interaction among latent factors defines a different tensor
decomposition model, which yields several variations of tensor decomposition. For general purposes,
however, either Tucker decomposition [29] or CANDECOMP/PARAFAC (CP) decomposition [8]
model is commonly used.
In the past three years, an alternative tensor decomposition model, called tensor train (TT) decomposition [21], has been actively studied in the machine learning community for tasks such as approximating the
inference on a Markov random field [18], modeling supervised learning [19, 24], analyzing restricted
Boltzmann machines [4], and compressing deep neural networks [17]. A key property is that, for
higher-order tensors, TT decomposition provides a more space-saving representation, called the TT format,
while preserving the representation power. Given an order-K tensor (i.e., a K-dimensional tensor),
the space complexity of Tucker decomposition is exponential in K, whereas that of TT decomposition
is linear in K. Further, in the TT format, several mathematical operations, including the basic linear
algebra operations, can be performed efficiently [21].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Despite its potential importance, we face two crucial limitations when applying this decomposition
to a much wider class of machine learning problems. First, its statistical performance is unknown.
In Tucker decomposition and its variants, many authors addressed the generalization error and
derived statistical bounds (e.g. [28, 27]). For example, Tomioka et al.[28] clarify the way in which
using the convex relaxation of Tucker decomposition, the generalization error is affected by the
rank (i.e., the dimensionalities of latent factors), dimension of an input, and number of observed
elements. In contrast, such a relationship has not been studied for TT decomposition yet. Second,
standard TT decomposition algorithms, such as alternating least squares (ALS) [6, 30], require a
huge computational cost. The main bottleneck arises from the singular value decomposition (SVD)
operation on an "unfolding" matrix, which is reshaped from the input tensor. The size of the unfolding
matrix is huge and the computational cost grows exponentially in K.
In this paper, we tackle the above issues and present a scalable yet statistically-guaranteed TT
decomposition method. We first introduce a convex relaxation of the TT decomposition problem and
its optimization algorithm via the alternating direction method of multipliers (ADMM). Based on
this, a statistical error bound for tensor completion is derived, which achieves the same statistical
efficiency as the convex version of Tucker decomposition does. Next, because the ADMM algorithm
is not sufficiently scalable, we develop an alternative method by using a randomization technique. At
the expense of losing the global convergence property, the dependency of K on the time complexity
is reduced from exponential to quadratic. In addition, we show that a similar error bound is still
guaranteed. In experiments, we numerically confirm the derived bounds and empirically demonstrate
the performance of our method using a real higher-order tensor.
2 Preliminaries

2.1 Notation
Let $\mathcal{X} \subset \mathbb{R}^{I_1 \times \cdots \times I_K}$ be the space of order-$K$ tensors, where $I_k$ denotes the dimensionality of the $k$-th mode for $k = 1, \dots, K$. For brevity, we define $I_{<k} := \prod_{k' < k} I_{k'}$; similarly, $I_{\le k}$, $I_{k<}$ and $I_{k\le}$ are defined. For a vector $Y \in \mathbb{R}^d$, $[Y]_i$ denotes the $i$-th element of $Y$. Similarly, $[X]_{i_1, \dots, i_K}$ denotes the $(i_1, \dots, i_K)$ element of a tensor $X \in \mathcal{X}$. Let $[X]_{i_1, \dots, i_{k-1}, :, i_{k+1}, \dots, i_K}$ denote the $I_k$-dimensional vector $(X_{i_1, \dots, i_{k-1}, j, i_{k+1}, \dots, i_K})_{j=1}^{I_k}$, called the mode-$k$ fiber. For a vector $Y \in \mathbb{R}^d$, $\|Y\| = (Y^T Y)^{1/2}$ denotes the $\ell_2$-norm and $\|Y\|_\infty = \max_i |[Y]_i|$ denotes the max norm. For tensors $X, X' \in \mathcal{X}$, an inner product is defined as $\langle X, X' \rangle := \sum_{i_1, \dots, i_K = 1}^{I_1, \dots, I_K} X(i_1, \dots, i_K) X'(i_1, \dots, i_K)$, and $\|X\|_F = \langle X, X \rangle^{1/2}$ denotes the Frobenius norm. For a matrix $Z$, $\|Z\|_s := \sum_j \sigma_j(Z)$ denotes the Schatten-1 norm, where $\sigma_j(\cdot)$ is the $j$-th singular value of $Z$.
2.2 Tensor Train Decomposition

Let us define a tuple of positive integers $(R_1, \dots, R_{K-1})$ and an order-3 tensor $G_k \in \mathbb{R}^{I_k \times R_{k-1} \times R_k}$ for each $k = 1, \dots, K$. Here, we set $R_0 = R_K = 1$. Then, TT decomposition represents each element of $X$ as follows:
$$X_{i_1, \dots, i_K} = [G_1]_{i_1,:,:} [G_2]_{i_2,:,:} \cdots [G_K]_{i_K,:,:}. \qquad (1)$$
Note that $[G_k]_{i_k,:,:}$ is an $R_{k-1} \times R_k$ matrix. We define $\mathcal{G} := \{G_k\}_{k=1}^K$ as the set of these tensors, and let $X(\mathcal{G})$ be the tensor whose elements are represented by $\mathcal{G}$ as in (1). The tuple $(R_1, \dots, R_{K-1})$ controls the complexity of TT decomposition, and it is called the tensor train (TT) rank. Note that TT decomposition is universal, i.e., any tensor can be represented by TT decomposition with sufficiently large TT rank [20].

When we evaluate the computational complexity, we assume the shape of $\mathcal{G}$ is roughly symmetric. That is, we assume there exist $I, R \in \mathbb{N}$ such that $I_k = O(I)$ for $k = 1, \dots, K$ and $R_k = O(R)$ for $k = 1, \dots, K-1$.
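Since each $[G_k]_{i_k,:,:}$ is a small $R_{k-1} \times R_k$ matrix, an element of a TT tensor is just a chain of matrix products. The following is a minimal NumPy sketch of Eq. (1); the function and variable names are ours, not from the paper:

```python
import numpy as np

def tt_element(cores, index):
    """Evaluate one element of a TT-format tensor as in Eq. (1).

    cores: list of K arrays; cores[k] has shape (I_k, R_{k-1}, R_k),
           with R_0 = R_K = 1.
    index: tuple (i_1, ..., i_K) of zero-based mode indices.
    """
    # Start from the 1 x R_1 slice of the first core and chain the
    # R_{k-1} x R_k matrices [G_k]_{i_k,:,:} by matrix products.
    out = cores[0][index[0]]            # shape (1, R_1)
    for G, i in zip(cores[1:], index[1:]):
        out = out @ G[i]                # shape (1, R_k)
    return out[0, 0]                    # final shape is (1, 1)
```

Storing the cores takes $O(KIR^2)$ numbers instead of $O(I^K)$, which is exactly the space saving the TT format provides.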
2.3 Tensor Completion Problem

Suppose there exists a true tensor $X^* \in \mathcal{X}$ that is unknown, and a part of the elements of $X^*$ is observed with some noise. Let $S \subset \{(j_1, j_2, \dots, j_K)\}_{j_1, \dots, j_K = 1}^{I_1, \dots, I_K}$ be the set of indexes of the observed elements and $n := |S| \le \prod_{k=1}^K I_k$ be the number of observations. Let $j(i)$ be the $i$-th element of $S$ for $i = 1, \dots, n$, and let $y_i$ denote the $i$-th observation from $X^*$ with noise. We consider the following observation model:
$$y_i = [X^*]_{j(i)} + \epsilon_i, \qquad (2)$$
where $\epsilon_i$ is i.i.d. noise with zero mean and variance $\sigma^2$. For simplicity, we introduce the observation vector $Y := (y_1, \dots, y_n)$, the noise vector $E := (\epsilon_1, \dots, \epsilon_n)$, and the rearranging operator $\mathfrak{X} : \mathcal{X} \to \mathbb{R}^n$ that randomly picks the elements of $X$. Then, the model (2) is rewritten as follows:
$$Y = \mathfrak{X}(X^*) + E.$$
The goal of tensor completion is to estimate the true tensor $X^*$ from the observation vector $Y$.
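The observation model (2) is easy to simulate; below is a small sketch of our own (Gaussian noise is one concrete choice of zero-mean noise, and `observe` is not a function from the paper):

```python
import numpy as np

def observe(X_true, n, sigma, rng):
    """Sample the model (2): pick n entries of X_true uniformly at random
    (without replacement) and add i.i.d. zero-mean noise with std sigma."""
    flat = rng.choice(X_true.size, size=n, replace=False)
    S = np.unravel_index(flat, X_true.shape)        # indices j(1), ..., j(n)
    y = X_true[S] + sigma * rng.standard_normal(n)  # y_i = [X*]_{j(i)} + eps_i
    return S, y
```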
Because the estimation problem is ill-posed, we need to restrict the degrees of freedom of $X^*$, for example through its rank. Because the direct optimization of rank is difficult, its convex surrogation is alternatively used [2, 3, 11, 31, 22]. For tensor completion, the convex surrogation yields the following optimization problem [5, 14, 23, 26]:
$$\min_{X \in \Theta} \; \frac{1}{2n} \|Y - \mathfrak{X}(X)\|^2 + \lambda_n \|X\|_{s*}, \qquad (3)$$
where $\Theta \subseteq \mathcal{X}$ is a convex subset of $\mathcal{X}$, $\lambda_n \ge 0$ is a regularization coefficient, and $\|\cdot\|_{s*}$ is the overlapped Schatten norm defined as $\|X\|_{s*} := \frac{1}{K} \sum_{k=1}^{K} \|\tilde{X}_{(k)}\|_s$. Here, $\tilde{X}_{(k)}$ is the $k$-unfolding matrix defined by concatenating the mode-$k$ fibers of $X$. The overlapped Schatten norm regularizes the rank of $X$ in terms of Tucker decomposition [16, 28]. Although the Tucker rank of $X^*$ is unknown in general, the convex optimization adjusts the rank depending on $\lambda_n$.

To solve the convex problem (3), the ADMM algorithm is often employed [1, 26, 28]. Since the overlapped Schatten norm is not differentiable, the ADMM algorithm avoids the differentiation of the regularization term by alternately minimizing the augmented Lagrangian function iteratively.
3 Convex Formulation of TT Rank Minimization

To adopt TT decomposition in a convex optimization problem such as (3), we need a convex surrogation of the TT rank. For that purpose, we introduce the Schatten TT norm [22] as follows:
$$\|X\|_{s,T} := \frac{1}{K-1} \sum_{k=1}^{K-1} \|Q_k(X)\|_s = \frac{1}{K-1} \sum_{k=1}^{K-1} \sum_j \sigma_j(Q_k(X)), \qquad (4)$$
where $Q_k : \mathcal{X} \to \mathbb{R}^{I_{\le k} \times I_{k<}}$ is a reshaping operator that converts a tensor to a large matrix in which the first $k$ modes are combined into the rows and the remaining $K - k$ modes are combined into the columns. Oseledets et al. [21] show that the matrix rank of $Q_k(X)$ can bound the $k$-th TT rank of $X$, implying that the Schatten TT norm surrogates the sum of the TT rank. Putting the Schatten TT norm into (3), we obtain the following optimization problem:
$$\min_{X \in \mathcal{X}} \; \frac{1}{2n} \|Y - \mathfrak{X}(X)\|^2 + \lambda_n \|X\|_{s,T}. \qquad (5)$$
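For a small dense tensor, the Schatten TT norm (4) can be evaluated directly from the $K-1$ unfoldings. A sketch under the convention that $Q_k$ is a row-major reshape (the function name is ours):

```python
import numpy as np

def schatten_tt_norm(X):
    """Schatten TT norm (4): average nuclear norm of the unfoldings Q_k(X),
    where Q_k folds the first k modes into rows and the rest into columns."""
    K = X.ndim
    total = 0.0
    for k in range(1, K):
        rows = int(np.prod(X.shape[:k]))
        Qk = X.reshape(rows, -1)                # Q_k(X)
        total += np.linalg.norm(Qk, ord='nuc')  # Schatten-1: sum of singular values
    return total / (K - 1)
```

This direct evaluation costs an SVD per unfolding, which is exactly the exponential bottleneck that Section 4 replaces with a random projection.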
3.1 ADMM Algorithm

To solve (5), we consider the augmented Lagrangian function $L(x, \{Z_k\}_{k=1}^{K-1}, \{\alpha_k\}_{k=1}^{K-1})$, where $x \in \mathbb{R}^{\prod_k I_k}$ is the vectorization of $X$, $Z_k$ is a reshaped matrix of size $I_{\le k} \times I_{k<}$, and $\alpha_k \in \mathbb{R}^{\prod_k I_k}$ is a coefficient for the constraints. Given initial points $(x^{(0)}, \{Z_k^{(0)}\}_k, \{\alpha_k^{(0)}\}_k)$, the $\ell$-th step of ADMM is written as follows:
$$x^{(\ell+1)} = \left( \tilde{\mathfrak{X}}^T Y + n\eta \, \frac{1}{K-1} \sum_{k=1}^{K-1} \bigl( V_k(Z_k^{(\ell)}) - \alpha_k^{(\ell)} \bigr) \right) \Big/ (1 + n\eta K),$$
$$Z_k^{(\ell+1)} = \mathrm{prox}_{\lambda_n / \eta} \bigl( V_k^{-1}(x^{(\ell+1)} + \alpha_k^{(\ell)}) \bigr), \quad k = 1, \dots, K-1,$$
$$\alpha_k^{(\ell+1)} = \alpha_k^{(\ell)} + \bigl( x^{(\ell+1)} - V_k(Z_k^{(\ell+1)}) \bigr), \quad k = 1, \dots, K-1.$$
Here, $\tilde{\mathfrak{X}}$ is an $n \times \prod_k I_k$ matrix that works as the inversion mapping of $\mathfrak{X}$; $V_k$ is a vectorizing operator of an $I_{\le k} \times I_{k<}$ matrix; $\mathrm{prox}(\cdot)$ is the shrinkage operation on the singular values, $\mathrm{prox}_b(W) = U \max\{S - bI, 0\} V^T$, where $U S V^T$ is the singular value decomposition of $W$; and $\eta > 0$ is a hyperparameter for a step size. We stop the iteration when the convergence criterion is satisfied (e.g., as suggested by Tomioka et al. [28]). Since the Schatten TT norm (4) is convex, the sequence of the variables of ADMM is guaranteed to converge to the optimal solution ([5, Theorem 5.1]). We refer to this algorithm as TT-ADMM.
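The proximal operation above, $\mathrm{prox}_b(W) = U \max\{S - bI, 0\} V^T$, is plain singular-value soft-thresholding; a short sketch (the helper name is ours):

```python
import numpy as np

def prox_nuclear(W, b):
    """Soft-threshold the singular values of W by b:
    prox_b(W) = U max(S - b, 0) V^T, where W = U S V^T."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(S - b, 0.0)) @ Vt  # scale columns of U by shrunk values
```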
TT-ADMM requires huge resources in terms of both time and space. For the time complexity, the proximal operation of the Schatten TT norm, namely the SVD thresholding of $V_k^{-1}$, yields the dominant complexity, which is $O(I^{3K/2})$ time. For the space complexity, we have $O(K)$ variables of size $O(I^K)$, which requires $O(K I^K)$ space.
4 Alternating Minimization with Randomization

In this section, we consider reducing the space complexity for handling higher-order tensors. The idea is simple: we only maintain the TT format of the input tensor rather than the input tensor itself. This leads to the following optimization problem:
$$\min_{\mathcal{G}} \; \frac{1}{2n} \|Y - \mathfrak{X}(X(\mathcal{G}))\|^2 + \lambda_n \|X(\mathcal{G})\|_{s,T}. \qquad (6)$$
Remember that $\mathcal{G} = \{G_k\}_k$ is the set of TT components and $X(\mathcal{G})$ is the tensor given by the TT format with $\mathcal{G}$. Now we only need to store the TT components $\mathcal{G}$, which drastically improves the space efficiency.
4.1 Randomized Schatten TT norm

We approximate the optimization of the Schatten TT norm. To avoid the computation of exponentially large-scale SVDs in the Schatten TT norm, we employ a technique called the "very sparse random projection" [12]. The main idea is that, if the size of a matrix is sufficiently larger than its rank, then its singular values (and vectors) are well preserved even after the projection by a sparse random matrix. This motivates us to use the Schatten TT norm over the random projection.

As a preliminary, we introduce tensors for the random projection. Let $D_1, D_2 \in \mathbb{N}$ be the sizes of the matrix after projection. For each $k = 1, \dots, K-1$, let $\Omega_{k,1} \in \mathbb{R}^{D_1 \times I_1 \times \cdots \times I_k}$ be a tensor whose elements are independently and identically distributed as follows:
$$[\Omega_{k,1}]_{d_1, i_1, \dots, i_k} = \begin{cases} +\sqrt{s/D_1} & \text{with probability } 1/2s, \\ 0 & \text{with probability } 1 - 1/s, \\ -\sqrt{s/D_1} & \text{with probability } 1/2s, \end{cases} \qquad (7)$$
for $i_1, \dots, i_k$ and $d_1 = 1, \dots, D_1$. Here, $s > 0$ is a hyperparameter controlling sparsity. Similarly, we introduce a tensor $\Omega_{k,2} \in \mathbb{R}^{D_2 \times I_{k+1} \times \cdots \times I_K}$ that is defined in the same way as $\Omega_{k,1}$. With $\Omega_{k,1}$ and $\Omega_{k,2}$, let $P_k : \mathcal{X} \to \mathbb{R}^{D_1 \times D_2}$ be a random projection operator whose elements are defined as follows:
$$[P_k(X)]_{d_1, d_2} = \sum_{j_1=1}^{I_1} \cdots \sum_{j_K=1}^{I_K} [\Omega_{k,1}]_{d_1, j_1, \dots, j_k} \, [X]_{j_1, \dots, j_K} \, [\Omega_{k,2}]_{d_2, j_{k+1}, \dots, j_K}. \qquad (8)$$
Note that we can compute the above projection by using the facts that $X$ has the TT format and that the projection matrices are sparse. Let $\Lambda_j^{(k)}$ be the set of indexes of the non-zero elements of $\Omega_{k,j}$. Then, using the TT representation of $X$, (8) is rewritten as
$$[P_k(X(\mathcal{G}))]_{d_1, d_2} = \sum_{(j_1, \dots, j_k) \in \Lambda_1^{(k)}} [\Omega_{k,1}]_{d_1, j_1, \dots, j_k} [G_1]_{j_1} \cdots [G_k]_{j_k} \sum_{(j_{k+1}, \dots, j_K) \in \Lambda_2^{(k)}} [G_{k+1}]_{j_{k+1}} \cdots [G_K]_{j_K} [\Omega_{k,2}]_{d_2, j_{k+1}, \dots, j_K}.$$
If the projection matrices have only $S$ nonzero elements (i.e., $S = |\Lambda_1^{(k)}| = |\Lambda_2^{(k)}|$), the computational cost of the above equation is $O(D_1 D_2 S K R^3)$.
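Drawing the projection tensors of (7) is a one-liner with NumPy's categorical sampler; a sketch (the helper name is ours, and `rng` is assumed to be a `numpy.random.Generator`):

```python
import numpy as np

def sparse_projection(shape, s, rng):
    """Very sparse random projection tensor with entries as in (7):
    +sqrt(s/D1) w.p. 1/(2s), 0 w.p. 1 - 1/s, -sqrt(s/D1) w.p. 1/(2s),
    where D1 = shape[0]."""
    D1 = shape[0]
    signs = rng.choice([1.0, 0.0, -1.0], size=shape,
                       p=[1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)])
    return signs * np.sqrt(s / D1)
```

With $s \gg 1$ most entries are zero, so $P_k$ only touches the nonzero index sets $\Lambda_1^{(k)}, \Lambda_2^{(k)}$, which is what makes the projected Schatten norm cheap to evaluate.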
The next theorem guarantees that the Schatten-1 norm of $P_k(X)$ approximates the original one.

Theorem 1. Suppose $X \in \mathcal{X}$ has TT rank $(R_1, \dots, R_{K-1})$. Consider the reshaping operator $Q_k$ in (4), and the random operator $P_k$ as in (8) with the tensors $\Omega_{k,1}$ and $\Omega_{k,2}$ defined as in (7). If $D_1, D_2 \ge \max\{R_k, 4(\log(6 R_k) + \log(1/\delta))/\epsilon^2\}$, and all the singular vectors $u$ of $Q_k(X)$ are well-spread as $\sum_j |u_j|^3 \le \epsilon / (1.6 k \sqrt{s})$, we have
$$\frac{1-\epsilon}{R_k} \|Q_k(X)\|_s \le \|P_k(X)\|_s \le (1+\epsilon) \|Q_k(X)\|_s,$$
with probability at least $1 - \delta$.

Note that the well-spread condition can be seen as a stronger version of the incoherence assumption, which will be discussed later.
4.2 Alternating Minimization

Note that the new problem (6) is non-convex because $X(\mathcal{G})$ does not form a convex set on $\mathcal{X}$. However, if we fix $\mathcal{G}$ except for $G_k$, the problem becomes convex with respect to $G_k$. Combining this with the random projection, we obtain the following minimization problem:
$$\min_{G_k} \; \frac{1}{2n} \|Y - \mathfrak{X}(X(\mathcal{G}))\|^2 + \frac{\lambda_n}{K-1} \sum_{k'=1}^{K-1} \|P_{k'}(X(\mathcal{G}))\|_s. \qquad (9)$$
We solve this by the ADMM method for each $k = 1, \dots, K$. Let $g_k \in \mathbb{R}^{I_k R_{k-1} R_k}$ be the vectorization of $G_k$, and let $W_{k'} \in \mathbb{R}^{D_1 \times D_2}$ be a matrix for the randomly projected matrix. The augmented Lagrangian function is then given by $L_k(g_k, \{W_{k'}\}_{k'=1}^{K-1}, \{\beta_{k'}\}_{k'=1}^{K-1})$, where $\{\beta_{k'} \in \mathbb{R}^{D_1 D_2}\}_{k'=1}^{K-1}$ are the Lagrange multipliers. Starting from initial points $(g_k^{(0)}, \{W_{k'}^{(0)}\}_{k'=1}^{K-1}, \{\beta_{k'}^{(0)}\}_{k'=1}^{K-1})$, the $\ell$-th ADMM step is written as follows:
$$g_k^{(\ell+1)} = \left( \Phi^T \Phi / n + \frac{\eta}{K-1} \sum_{k'=1}^{K-1} \Xi_{k'}^T \Xi_{k'} \right)^{-1} \left( \Phi^T Y / n + \frac{\eta}{K-1} \sum_{k'=1}^{K-1} \Xi_{k'}^T \bigl( \tilde{V}_{k'}(W_{k'}^{(\ell)}) - \beta_{k'}^{(\ell)} \bigr) \right),$$
$$W_{k'}^{(\ell+1)} = \mathrm{prox}_{\lambda_n / \eta} \bigl( \tilde{V}_{k'}^{-1} ( \Xi_{k'} g_k^{(\ell+1)} + \beta_{k'}^{(\ell)} ) \bigr), \quad k' = 1, \dots, K-1,$$
$$\beta_{k'}^{(\ell+1)} = \beta_{k'}^{(\ell)} + \bigl( \Xi_{k'} g_k^{(\ell+1)} - \tilde{V}_{k'}(W_{k'}^{(\ell+1)}) \bigr), \quad k' = 1, \dots, K-1.$$
Here, $\Xi_{k'} \in \mathbb{R}^{D_1 D_2 \times I_k R_{k-1} R_k}$ is the matrix imitating the mapping $G_k \mapsto P_{k'}(X(G_k; \mathcal{G} \setminus \{G_k\}))$, $\tilde{V}_{k'}$ is a vectorizing operator of a $D_1 \times D_2$ matrix, and $\Phi$ is an $n \times I_k R_{k-1} R_k$ matrix of the operator $\mathfrak{X} \circ X(\cdot\,; \mathcal{G} \setminus \{G_k\})$ with respect to $g_k$. Similarly to the convex approach, we iterate the ADMM steps until convergence. We refer to this algorithm as TT-RAM, where RAM stands for randomized least square.

The time complexity of TT-RAM at the $\ell$-th iteration is $O((n + K D^2) K I^2 R^4)$; the details are deferred to the Supplementary material. The space complexity is $O(n + K I^2 R^4)$, where $O(n)$ is for $Y$ and $O(K I^2 R^4)$ is for the parameters.
5 Theoretical Analysis

In this section, we investigate how the TT rank and the number of observations affect the estimation error. Note that all the proofs of this section are deferred to the Supplementary material.
5.1 Convex Solution

To analyze the statistical error of the convex problem (5), we assume the incoherence of the reshaped version of $X^*$.

Assumption 2 (Incoherence Assumption). There exists $k \in \{1, \dots, K\}$ such that the matrix $Q_k(X^*)$ has orthogonal singular vectors $\{u_r \in \mathbb{R}^{I_{\le k}}, v_r \in \mathbb{R}^{I_{k<}}\}_{r=1}^{R_k}$ satisfying
$$\max_{1 \le i \le I_{\le k}} \|P_U e_i\| \le (\mu R_k / I_{\le k})^{1/2} \quad \text{and} \quad \max_{1 \le i \le I_{k<}} \|P_V e_i\| \le (\mu R_k / I_{k<})^{1/2}$$
with some $0 \le \mu < 1$. Here, $P_U$ and $P_V$ are the linear projections onto the spaces spanned by $\{u_r\}_r$ and $\{v_r\}_r$; $\{e_i\}_i$ is the natural basis.
Intuitively, the incoherence assumption requires that the singular vectors of the matrix $Q_k(X^*)$ are well separated. This type of assumption is commonly used in matrix and tensor completion studies [2, 3, 31]. Under the incoherence assumption, the error rate of the solution of (5) is derived.

Theorem 3. Let $X^* \in \mathcal{X}$ be a true tensor with TT rank $(R_1, \dots, R_{K-1})$, and let $\hat{X} \in \mathcal{X}$ be the minimizer of (5). Suppose that $\lambda_n \ge \|\mathfrak{X}^*(E)\|_\infty / n$ and that Assumption 2 is satisfied for some $k' \in \{1, 2, \dots, K\}$. If
$$n \ge C_m \mu_{k'}^2 \max\{I_{\le k'}, I_{k'<}\} R_{k'} \log^3 \max\{I_{\le k'}, I_{k'<}\}$$
with a constant $C_m$, then with probability at least $1 - (\max\{I_{\le k'}, I_{k'<}\})^{-3}$ and with a constant $C_X$,
$$\|\hat{X} - X^*\|_F \le C_X \frac{\lambda_n}{K} \sum_{k=1}^{K-1} \sqrt{R_k}.$$

Theorem 3 states that the bound on the statistical error gets larger as the TT rank increases. In other words, completing a tensor is relatively easy as long as the tensor has small TT rank. Also, when we set $\lambda_n \to 0$ as $n$ increases, we can state the consistency of the minimizer.

The result of Theorem 3 is similar to that obtained from the studies on matrix completion [3, 16] and tensor completion with the Tucker decomposition or SVD [28, 31]. Note that, although Theorem 3 is stated for tensor completion, the result can easily be generalized to other settings such as tensor recovery or compressed sensing problems.
5.2 TT-RAM Solution

Prior to the analysis, let $\mathcal{G}^*$ be the true TT components such that $X^* = X(\mathcal{G}^*)$. For simplification, we assume that the elements of $\mathcal{G}^*$ are normalized, i.e., $\|G_k\| = 1$ for all $k$, and that the $R_k \times I_{k-1} I_k$ matrix reshaped from $G_k^*$ has row rank $R_k$.

In addition to the incoherence property (Assumption 2), we introduce an additional assumption on the initial point of the ALS iteration.

Assumption 4 (Initial Point Assumption). Let $\mathcal{G}^{\mathrm{init}} := \{G_k^{\mathrm{init}}\}_{k=1}^{K}$ be the initial point of the ALS iteration procedure. Then, there exists a finite constant $C$ that satisfies
$$\max_{k \in \{1, \dots, K\}} \|G_k^{\mathrm{init}} - G_k^*\|_F \le C.$$

Assumption 4 requires that the initial point is sufficiently close to the true solution $\mathcal{G}^*$. Although the ALS method is not guaranteed to converge to the global optimum in general, Assumption 4 guarantees the convergence to the true solution [25]. Now we can evaluate the error rate of the solution obtained by TT-RAM.
Theorem 5. Let $X(\mathcal{G}^*)$ be the true tensor generated by $\mathcal{G}^*$ with TT rank $(R_1, \dots, R_{K-1})$, and let $\hat{\mathcal{G}} = \mathcal{G}^t$ be the solution of TT-RAM at the $t$-th iteration. Further, suppose that Assumption 2 for some $k' \in \{1, 2, \dots, K\}$ and Assumption 4 are satisfied, and suppose that Theorem 1 holds with $\epsilon > 0$ for $k = 1, \dots, K$. Let $C_m, C_A, C_B > 0$ and $0 < \gamma < 1$ be some constants. If
$$n \ge C_m \mu_{k'}^2 R_{k'} \max\{I_{\le k'}, I_{k'<}\} \log^3 \max\{I_{\le k'}, I_{k'<}\},$$
and the number of iterations $t$ satisfies
$$t \ge (\log \gamma)^{-1} \Bigl\{ \log \bigl( C_B \lambda_n \sqrt{K-1} \, (1+\epsilon) \sum\nolimits_k \sqrt{R_k} \bigr) - \log C \Bigr\},$$
then with probability at least $1 - (\max\{I_{\le k'}, I_{k'<}\})^{-3}$ and for $\lambda_n \ge \|\mathfrak{X}^*(E)\|_\infty / n$,
$$\|X(\hat{\mathcal{G}}) - X(\mathcal{G}^*)\|_F \le C_A (1+\epsilon) \lambda_n \sum_{k=1}^{K-1} \sqrt{R_k}. \qquad (10)$$

Again, we can obtain the consistency of TT-RAM by setting $\lambda_n \to 0$ as $n$ increases. Since the setting of $\lambda_n$ corresponds to that of Theorem 3, the speed of convergence of TT-RAM in terms of $n$ is equivalent to that of TT-ADMM.

Compared with the convex approach (Theorem 3), the error rate becomes slightly worse. Here, the term $\lambda_n \sum_{k=1}^{K-1} \sqrt{R_k}$ in (10) comes from the estimation by the alternating minimization, which increases linearly in $K$. This is because there are $K$ optimization problems and their errors are accumulated in the final solution. The term $(1+\epsilon)$ in (10) comes from the random projection. The size of the error $\epsilon$ can be made arbitrarily small by controlling the parameters of the random projection, $D_1, D_2$ and $s$.
6 Related Work

To solve the tensor completion problem with TT decomposition, Wang et al. [30] and Grasedyck et al. [6] developed algorithms that iteratively solve minimization problems with respect to $G_k$ for each $k = 1, \dots, K$. Unfortunately, the adaptivity of the TT rank is not well discussed. [30] assumed that the TT rank is given. Grasedyck et al. [6] proposed a grid search method. However, the TT rank is determined by a single parameter (i.e., $R_1 = \cdots = R_{K-1}$) and the search method lacks generality. Furthermore, the scalability problem remains in both methods: they require more than $O(I^K)$ space. Phien et al. [22] proposed a convex optimization method using the Schatten TT norm. However, because they employed an alternating-type optimization method, the global convergence of their method is not guaranteed. Moreover, since they maintain $X$ directly and perform the reshape of $X$ several times, their method requires $O(I^K)$ time.

Table 1 highlights the differences between the existing methods and ours. We emphasize that our study is the first attempt to analyze the statistical performance of TT decomposition. In addition, TT-RAM is the only method for which neither the time nor the space complexity grows exponentially in $K$.
Method         | Global Convergence | Rank Adaptivity | Time Complexity        | Space Complexity | Statistical Bounds
TCAM-TT [30]   | -                  | -               | $O(nIKR^4)$            | $O(I^K)$         | -
ADF for TT [6] | -                  | (search)        | $O(KIR^3 + nKR^2)$     | $O(I^K)$         | -
SiLRTC-TT [22] | -                  | ✓               | $O(I^{3K/2})$          | $O(KI^K)$        | -
TT-ADMM        | ✓                  | ✓               | $O(KI^{3K/2})$         | $O(I^K)$         | ✓
TT-RAM         | -                  | ✓               | $O((n + KD^2)KI^2R^4)$ | $O(n + KI^2R^4)$ | ✓

Table 1: Comparison of TT completion algorithms, where $R$ is a parameter for the TT rank such that $R = R_1 = \cdots = R_{K-1}$, $I = I_1 = \cdots = I_K$ is the dimension, $K$ is the number of modes, $n$ is the number of observed elements, and $D$ is the dimension of the random projection.
7 Experiments

7.1 Validation of Statistical Efficiency
Using synthetic data, we verify the theoretical bounds derived in Theorems 3 and 5. We first generate TT components $\mathcal{G}^*$; each component $G_k^*$ is generated as $G_k^* = G_k^* / \|G_k^*\|_F$ where each
Figure 1: Synthetic data: the estimation error $\|\hat{X} - X^*\|_F$ against the SRR $\sum_k \sqrt{R_k}$ with the order-4 tensor ($K = 4$) and the order-5 tensor ($K = 5$). For each rank and $\lambda_n$, we measure the error by 10 trials with different random seeds, which affect both the missing pattern and the initial points.
Table 2: Electricity data: the prediction error and the runtime (in seconds).

Method     | K=5 Error | K=5 Time | K=7 Error | K=7 Time | K=8 Error | K=8 Time | K=10 Error | K=10 Time
Tucker     | 0.219     | 7.125    | 0.371     | 610.61   | N/A       | N/A      | N/A        | N/A
TCAM-TT    | 0.219     | 2.174    | 0.928     | 27.497   | 0.928     | 146.651  | N/A        | N/A
ADF for TT | 0.998     | 1.221    | 1.160     | 23.211   | 1.180     | 278.712  | N/A        | N/A
SiLRTC-TT  | 0.339     | 1.478    | 0.928     | 206.708  | N/A       | N/A      | N/A        | N/A
TT-ADMM    | 0.221     | 0.289    | 1.019     | 154.991  | 1.061     | 2418.00  | N/A        | N/A
TT-RAM     | 0.219     | 4.644    | 0.928     | 4.726    | 0.928     | 7.654    | 1.173      | 7.968
element of $G_k^*$ is sampled from the i.i.d. standard normal distribution. Then we generate $Y$ by following the generative model (2) with the observation ratio $n / \prod_k I_k = 0.5$ and the noise variance $0.01$. We prepare two tensors of different sizes: an order-4 tensor of size $8 \times 8 \times 10 \times 10$ and an order-5 tensor of size $5 \times 5 \times 7 \times 7 \times 7$. For the order-4 tensor, the TT rank is set as $(R_1, R_2, R_3)$ where $R_1, R_2, R_3 \in \{3, 5, 7\}$. For the order-5 tensor, the TT rank is set as $(R_1, R_2, R_3, R_4)$ where $R_1, R_2, R_3, R_4 \in \{2, 4\}$. For estimation, we set the size of $G_k$ and $\Omega_k$ as 10, which is larger than the true TT rank. The regularization coefficient $\lambda_n$ is selected from $\{1, 3, 5\}$. The parameters for the random projection are set as $s = 20$ and $D_1 = D_2 = 10$.

Figure 1 shows the relation between the estimation error and the sum of root rank (SRR) $\sum_k \sqrt{R_k}$. The result of TT-ADMM shows that the empirical errors are linearly related to the SRR, as predicted by the theoretical result. The result of TT-RAM roughly replicates the theoretical relationship.
7.2 Higher-Order Markov Chain for Electricity Data
We apply the proposed tensor completion methods to analyze the electricity consumption data [13]. The dataset contains time-series measurements of household electric power consumption taken every minute from December 2006 to November 2010, comprising over 200,000 observations.
The higher-order Markov chain is a suitable method to represent long-term dependency, and it is a common tool of time-series analysis [7] and natural language processing [9]. Let $\{W_t\}_t$ be discrete-time random variables taking values in a finite set $B$; the order-$K$ Markov chain describes the conditional distribution of $W_t$ given $\{W_\tau\}_{\tau < t}$ as $P(W_t \mid \{W_\tau\}_{\tau < t}) = P(W_t \mid W_{t-1}, \dots, W_{t-K})$. As $K$ increases, the conditional distribution of $W_t$ can include more information from the past observations.
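As an illustration of estimating such a transition distribution empirically (a sketch of our own; the paper's exact discretization and counting scheme may differ), length-$(K+1)$ sliding windows can be counted and each history normalized:

```python
import numpy as np

def empirical_transitions(seq, K, B):
    """Estimate P(W_t | W_{t-1}, ..., W_{t-K}) from a discrete sequence
    over the alphabet {0, ..., B-1} by counting sliding windows."""
    counts = np.zeros((B,) * (K + 1))
    for t in range(K, len(seq)):
        counts[tuple(seq[t - K:t + 1])] += 1.0
    totals = counts.sum(axis=-1, keepdims=True)
    # normalize each observed history; unseen histories stay all-zero
    return np.divide(counts, totals, out=np.zeros_like(counts),
                     where=totals > 0)
```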
We complete the missing values of the $K$-th order Markov transition of the electricity dataset. We discretize
the values of the dataset into 10 levels and set $K \in \{5, 7, 8, 10\}$. Next, we empirically estimate the
conditional distribution of size $10^K$ using 200,000 observations. Then, we create X by randomly
selecting $n = 10{,}000$ elements from the conditional distribution and regarding the other elements
as missing. After completion, the prediction error is measured. We select hyperparameters using a
grid search with cross-validation.
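As a minimal sketch of the empirical estimation step above, the following uses a 3-letter alphabet and K = 2 (rather than the 10 discretized levels and larger K used in the experiment):

```python
import random

def empirical_conditional(seq, alphabet_size, K):
    """Estimate P(W_t | W_{t-1}, ..., W_{t-K}) from a discretized sequence,
    as a dict mapping each length-K context to a normalized distribution."""
    counts = {}
    for i in range(K, len(seq)):
        ctx = tuple(seq[i - K:i])
        dist = counts.setdefault(ctx, [0] * alphabet_size)
        dist[seq[i]] += 1
    # normalize each context's counts into a probability distribution
    return {c: [x / sum(d) for x in d] for c, d in counts.items()}

random.seed(0)
seq = [random.randrange(3) for _ in range(10000)]
cond = empirical_conditional(seq, alphabet_size=3, K=2)
# with a uniform random sequence, all 3^2 = 9 contexts occur
```

Randomly keeping n entries of this estimated distribution as observed and treating the rest as missing then yields the completion problem described above.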
Figure 2 compares the prediction error and the runtime of the related methods with TT decomposition.
For reference, we also report those values for Tucker decomposition without TT. When K = 5, the
rank adaptive methods achieve low estimation errors. As K increases, however, all the methods
except for TT-RAM suffer from the scalability issue. Indeed, at K = 10, only TT-RAM works;
the others fail due to memory exhaustion.
8 Conclusion
In this paper, we investigated TT decomposition from the statistical and computational viewpoints.
To analyze its statistical performance, we formulated the convex tensor completion problem via the
low-rank TT decomposition using the TT Schatten norm. In addition, because the optimization of the
convex problem is infeasible, we developed an alternative algorithm called TT-RAM by combining
with the ideas of random projection and alternating minimization. Based on this, we derived the error
bounds of estimation for both the convex minimizer and the solution obtained by TT-RAM. The
experiments supported our theoretical results and demonstrated the scalability of TT-RAM.
Acknowledgement
We thank Prof. Taiji Suzuki for comments that greatly improved the manuscript. M. Imaizumi is
supported by Grant-in-Aid for JSPS Research Fellow (15J10206) from the JSPS. K. Hayashi is
supported by ONR N62909-17-1-2138.
References
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[2] E. Candes and B. Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
[3] E. J. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[4] J. Chen, S. Cheng, H. Xie, L. Wang, and T. Xiang. On the equivalence of restricted Boltzmann machines and tensor network states. arXiv preprint arXiv:1701.04831, 2017.
[5] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[6] L. Grasedyck, M. Kluge, and S. Krämer. Alternating least squares tensor completion in the TT-format. arXiv preprint arXiv:1509.00311, 2015.
[7] J. D. Hamilton. Time Series Analysis, volume 2. Princeton University Press, Princeton, 1994.
[8] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970.
[9] D. Jurafsky and J. H. Martin. Speech and Language Processing, volume 3. Pearson, 2014.
[10] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[11] A. Krishnamurthy and A. Singh. Low-rank matrix and tensor completion via adaptive sampling. In Advances in Neural Information Processing Systems, pages 836–844, 2013.
[12] P. Li, T. J. Hastie, and K. W. Church. Very sparse random projections. In Proceedings of the 12th International Conference on Knowledge Discovery and Data Mining, pages 287–296. ACM, 2006.
[13] M. Lichman. UCI machine learning repository, 2013.
[14] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
[15] Y. Mu, J. Dong, X. Yuan, and S. Yan. Accelerated low-rank visual recovery by random projection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2609–2616, 2011.
[16] S. Negahban, M. J. Wainwright, et al. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069–1097, 2011.
[17] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pages 442–450, 2015.
[18] A. Novikov, A. Rodomanov, A. Osokin, and D. Vetrov. Putting MRFs on a tensor train. In International Conference on Machine Learning, pages 811–819. JMLR W&CP, 2014.
[19] A. Novikov, M. Trofimov, and I. Oseledets. Exponential machines. arXiv preprint arXiv:1605.03795, 2016.
[20] I. Oseledets and E. Tyrtyshnikov. TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications, 432(1):70–88, 2010.
[21] I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
[22] H. N. Phien, H. D. Tuan, J. A. Bengua, and M. N. Do. Efficient tensor completion: Low-rank tensor train. arXiv preprint arXiv:1601.01083, 2016.
[23] M. Signoretto, R. Van de Plas, B. De Moor, and J. A. Suykens. Tensor versus matrix completion: a comparison with application to spectral data. Signal Processing Letters, 18(7):403–406, 2011.
[24] E. Stoudenmire and D. J. Schwab. Supervised learning with tensor networks. In Advances in Neural Information Processing Systems, pages 4799–4807, 2016.
[25] T. Suzuki, H. Kanagawa, H. Kobayashi, N. Shimizu, and Y. Tagami. Minimax optimal alternating minimization for kernel nonparametric tensor learning. In Advances in Neural Information Processing Systems, pages 3783–3791, 2016.
[26] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[27] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. In Advances in Neural Information Processing Systems, pages 1331–1339, 2013.
[28] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. In Advances in Neural Information Processing Systems, 2011.
[29] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[30] W. Wang, V. Aggarwal, and S. Aeron. Tensor completion by alternating minimization under the tensor train (TT) model. arXiv preprint arXiv:1609.05587, 2016.
[31] Z. Zhang and S. Aeron. Exact tensor completion using t-SVD. Transactions on Signal Processing, 2016.
Learning
Phillip A. Jang
Andrew E. Loeb Matthew B. Davidow
Cornell University
Andrew Gordon Wilson
Abstract
Gaussian processes are rich distributions over functions, with generalization properties determined by a kernel function. When used for long-range extrapolation,
predictions are particularly sensitive to the choice of kernel parameters. It is
therefore critical to account for kernel uncertainty in our predictive distributions.
We propose a distribution over kernels formed by modelling a spectral mixture
density with a Lévy process. The resulting distribution has support for all stationary covariances (including the popular RBF, periodic, and Matérn kernels)
combined with inductive biases which enable automatic and data efficient learning, long-range extrapolation, and state of the art predictive performance. The
proposed model also presents an approach to spectral regularization, as the Lévy
process introduces a sparsity-inducing prior over mixture components, allowing
automatic selection over model order and pruning of extraneous components. We
exploit the algebraic structure of the proposed process for O(n) training and O(1)
predictions. We perform extrapolations having reasonable uncertainty estimates
on several benchmarks, show that the proposed model can recover flexible ground
truth covariances, and that it is robust to errors in initialization.
1 Introduction
Gaussian processes (GPs) naturally give rise to a function space view of modelling, whereby we
place a prior distribution over functions, and reason about the properties of likely functions under
this prior (Rasmussen & Williams, 2006). Given data, we then infer a posterior distribution over
functions to make predictions. The generalisation behavior of the Gaussian process is determined
by its prior support (which functions are a priori possible) and its inductive biases (which functions
are a priori likely), which are in turn encoded by a kernel function. However, popular kernels,
and even multiple kernel learning procedures, typically cannot extract highly expressive hidden
representations, as was envisaged for neural networks (MacKay, 1998; Wilson, 2014).
To discover such representations, recent approaches have advocated building more expressive kernel functions. For instance, spectral mixture kernels (Wilson & Adams, 2013) were introduced for
flexible kernel learning and extrapolation, by modelling a spectral density with a scale-location mixture of Gaussians, with promising results. However, Wilson & Adams (2013) specify the number of
mixture components by hand, and do not characterize uncertainty over the mixture hyperparameters.
As kernel functions become increasingly expressive and parametrized, it becomes natural to also
adopt a function space view of kernel learning: to represent uncertainty over the values of the
kernel function, and to reflect the belief that the kernel does not have a simple form. Just as we
use Gaussian processes over functions to model data, we can apply the function space view a step
further in a hierarchical model, with a prior distribution over kernels.
In this paper, we introduce a scalable distribution over kernels by modelling a spectral density, the
Fourier transform of a kernel, with a Lévy process. We consider both scale-location mixtures of
Gaussians and Laplacians as basis functions for the Lévy process, to induce a prior over kernels that
gives rise to the sharply peaked spectral densities that often occur in practice, providing a powerful
inductive bias for kernel learning. Moreover, this choice of basis functions allows our kernel function, conditioned on the Lévy process, to be expressed in closed form. This prior distribution over
kernels also has support for all stationary covariances, containing, for instance, any composition
of the popular RBF, Matérn, rational quadratic, gamma-exponential, or spectral mixture kernels.
And unlike the spectral mixture representation in Wilson & Adams (2013), this proposed process
prior allows for natural automatic inference over the number of mixture components in the spectral
density model. Moreover, the priors implied by popular Lévy processes such as the gamma process
and symmetric α-stable process result in even stronger complexity penalties than $\ell_1$ regularization,
yielding sparse representations and removing mixture components which fit to noise.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Conditioned on this distribution over kernels, we model data with a Gaussian process. To form a
predictive distribution, we take a Bayesian model average of GP predictive distributions over a large
set of possible kernel functions, represented by the support of our prior over kernels, weighted by
the posterior probabilities of each of these kernels. This procedure leads to a non-Gaussian heavy-tailed
predictive distribution for modelling data. We develop a reversible jump MCMC (RJ-MCMC)
scheme (Green, 1995) to infer the posterior distribution over kernels, including inference over the
number of components in the Lévy process expansion. For scalability, we pursue a structured kernel
interpolation (Wilson & Nickisch, 2015) approach, in our case exploiting algebraic structure in the
Lévy process expansion, for O(n) inference and O(1) predictions, compared to the standard O(n³)
and O(n²) computations for inference and predictions with Gaussian processes. Flexible distributions over kernels will be especially valuable on large datasets, which often contain additional
structure to learn rich statistical representations.
The key contributions of this paper are summarized as follows:
1. The first fully probabilistic approach to inference with spectral mixture kernels, to incorporate kernel uncertainty into our predictive distributions, yielding a more realistic coverage of extrapolations. This feature is demonstrated in Section 5.3.
2. Spectral regularization in spectral kernel learning. The Lévy process prior acts as a sparsity-inducing prior on mixture components, automatically pruning extraneous components.
This feature allows for automatic inference over model order, a key hyperparameter which
must be hand tuned in the original spectral mixture kernel paper.
3. Reduced dependence on a good initialization, a key practical improvement over the original
spectral mixture kernel paper.
4. A conceptually natural and interpretable function space view of kernel learning.
2 Background

We provide a review of Gaussian and Lévy processes as models for prior distributions over functions.

2.1 Gaussian Processes
A stochastic process $f(x)$ is a Gaussian process (GP) if for any finite collection of inputs $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^D$, the vector of function values $[f(x_1), \ldots, f(x_n)]^\top$ is jointly Gaussian.
The distribution of a GP is completely determined by its mean function $m(x)$ and covariance
kernel $k(x, x')$. A GP used to specify a distribution over functions is denoted as $f(x) \sim \mathcal{GP}(m(x), k(x, x'))$, where $\mathbb{E}[f(x_i)] = m(x_i)$ and $\mathrm{cov}(f(x), f(x')) = k(x, x')$. The generalization properties of the GP are encoded by the covariance kernel and its hyperparameters.
By exploiting properties of joint Gaussian variables, we can obtain closed form expressions for
conditional mean and covariance functions of unobserved function values given observed function
values. Given that $f(x)$ is observed at $n$ training inputs $X$ with values $\mathbf{f} = [f(x_1), \ldots, f(x_n)]^\top$,
the predictive distribution of the unobserved function values $\mathbf{f}_*$ at $n_*$ testing inputs $X_*$ is given by

$$\mathbf{f}_* \mid X_*, X, \mathbf{f} \sim \mathcal{N}(\bar{\mathbf{f}}_*, \mathrm{cov}(\mathbf{f}_*)), \qquad (1)$$
$$\bar{\mathbf{f}}_* = m_{X_*} + K_{X_*,X} K_{X,X}^{-1}(\mathbf{f} - m_X), \qquad (2)$$
$$\mathrm{cov}(\mathbf{f}_*) = K_{X_*,X_*} - K_{X_*,X} K_{X,X}^{-1} K_{X,X_*}, \qquad (3)$$

where $K_{X_*,X}$, for example, denotes the $n_* \times n$ matrix of covariances evaluated at $X_*$ and $X$.
The popular radial basis function (RBF) kernel has the following form:

$$k_{\mathrm{RBF}}(x, x') = \exp(-0.5\, \|x - x'\|^2 / \ell^2). \qquad (4)$$
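Equations (1)–(4) translate directly into code. The sketch below assumes a zero mean function and adds a small jitter term to the training covariance for numerical stability; both are implementation choices, not part of the equations.

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0):
    # Eq. (4): k(x, x') = exp(-0.5 * ||x - x'||^2 / ell^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(X, f, Xs, kernel, jitter=1e-10):
    """Predictive mean and covariance of Eqs. (2)-(3), with zero prior mean."""
    K = kernel(X, X) + jitter * np.eye(len(X))
    Ks = kernel(Xs, X)                                    # K_{X*,X}
    mean = Ks @ np.linalg.solve(K, f)                     # Eq. (2)
    cov = kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # Eq. (3)
    return mean, cov

X = np.linspace(0, 1, 5)[:, None]
f = np.sin(2 * np.pi * X).ravel()
mean, cov = gp_predict(X, f, np.array([[0.5]]),
                       lambda A, B: rbf_kernel(A, B, ell=0.25))
# at a noise-free training input the GP interpolates:
# mean -> f(0.5) = sin(pi) = 0, and the predictive variance -> 0
```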
GPs with RBF kernels are limited in their expressiveness and act primarily as smoothing interpolators, because the only covariance structure they can learn from data is the length scale $\ell$, which
Wilson & Adams (2013) introduce the more expressive spectral mixture (SM) kernel capable of extracting more complex covariance structures than the RBF kernel, formed by placing a scale-location
mixture of Gaussians in the spectrum of the covariance kernel. The RBF kernel in comparison can
only model a single Gaussian centered at the origin in frequency (spectral) space.
2.2 Lévy Processes

A stochastic process $\{L(\omega)\}_{\omega \in \mathbb{R}_+}$ is a Lévy process if it has stationary, independent increments and
it is continuous in probability. In other words, $L$ must satisfy

1. $L(0) = 0$,
2. $L(\omega_0), L(\omega_1) - L(\omega_0), \ldots, L(\omega_n) - L(\omega_{n-1})$ are independent for all $\omega_0 \le \omega_1 \le \cdots \le \omega_n$,
3. $L(\omega_2) - L(\omega_1) \overset{d}{=} L(\omega_2 - \omega_1)$ for all $\omega_2 \ge \omega_1$,
4. $\lim_{h \to 0} P(|L(\omega + h) - L(\omega)| \ge \epsilon) = 0$ for all $\epsilon > 0$ and $\omega \ge 0$.
By the Lévy–Khintchine representation, the distribution of a (pure jump) Lévy process is completely determined by its Lévy measure. That is, the characteristic function of $L(\omega)$ is given by:

$$\log \mathbb{E}\big[e^{iuL(\omega)}\big] = \omega \int_{\mathbb{R}^d \setminus \{0\}} \left( e^{iu\beta} - 1 - iu\beta\, 1_{|\beta| \le 1} \right) \nu(d\beta),$$

where the Lévy measure $\nu(d\beta)$ is any $\sigma$-finite measure which satisfies the following integrability condition

$$\int_{\mathbb{R}^d \setminus \{0\}} (1 \wedge \beta^2)\, \nu(d\beta) < \infty.$$

[Figure 1: Annotated realization of a compound Poisson process, a special case of a Lévy process. The $\chi_j$ represent jump locations, and $\beta_j$ represent jump magnitudes.]
A Lévy process can be viewed as a combination of a Brownian motion with drift and a superposition
of independent Poisson processes with differing jump sizes $\beta$. The Lévy measure $\nu(d\beta)$ determines
the expected number of Poisson events per unit of time for any particular jump size $\beta$. The Brownian component of a Lévy process will not be considered for this model. For higher dimension
input spaces $\chi \in \mathcal{X}$, one defines the more general notion of Lévy random measure, which is also
characterized by its Lévy measure $\nu(d\beta\, d\chi)$ (Wolpert et al., 2011). We will show that the sample
realizations of Lévy processes can be used to draw sample parameters for adaptive basis expansions.
2.3 Lévy Process Priors over Adaptive Expansions
Suppose we wish to specify a prior over the class of adaptive expansions

$$\Big\{ f : \mathcal{X} \to \mathbb{R} \;\Big|\; f(x) = \sum_{j=1}^{J} \beta_j\, \phi(x, \chi_j) \Big\}.$$

Through a simple manipulation, we can rewrite $f(x)$ into the form of a stochastic integral:

$$f(x) = \sum_{j=1}^{J} \beta_j\, \phi(x, \chi_j) = \sum_{j=1}^{J} \beta_j \int_{\mathcal{X}} \phi(x, \chi)\, \delta_{\chi_j}(\chi)\, d\chi = \int_{\mathcal{X}} \phi(x, \chi) \underbrace{\sum_{j=1}^{J} \beta_j\, \delta_{\chi_j}(\chi)\, d\chi}_{=\, dL(\chi)}.$$

Hence, by specifying a prior for the measure $L(\chi)$, we can simultaneously specify a prior for all
of the parameters $\{J, (\beta_1, \chi_1), \ldots, (\beta_J, \chi_J)\}$ of the expansion. Lévy random measures provide a
family of priors naturally suited for this purpose, as there is a one-to-one correspondence between
the jump behavior of the Lévy prior and the components of the expansion.
To illustrate this point, suppose the basis function parameters $\chi_j$ are one-dimensional and consider
the integral of $dL(\chi)$ from 0 to $\omega$:

$$L(\omega) = \int_0^{\omega} dL(\chi) = \int_0^{\omega} \sum_{j=1}^{J} \beta_j\, \delta_{\chi_j}(\chi)\, d\chi = \sum_{j=1}^{J} \beta_j\, 1_{[0,\omega]}(\chi_j).$$
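Concretely, the quantity above is a step function that increases by $\beta_j$ whenever $\omega$ passes a jump location $\chi_j$; a toy evaluation with made-up jump sizes and locations:

```python
def L(omega, betas, chis):
    # L(omega) = sum_j beta_j * 1_{[0, omega]}(chi_j)
    return sum(b for b, c in zip(betas, chis) if 0 <= c <= omega)

betas = [2.0, -1.0, 0.5]   # jump magnitudes beta_j (illustrative values)
chis = [0.3, 1.2, 2.5]     # jump locations chi_j

assert L(0.0, betas, chis) == 0.0   # no jumps yet: L(0) = 0
assert L(1.0, betas, chis) == 2.0   # only chi_1 = 0.3 has occurred
assert L(3.0, betas, chis) == 1.5   # all three jumps accumulated
```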
We see in Figure 1 that $\sum_{j=1}^{J} \beta_j\, 1_{[0,\omega]}(\chi_j)$ resembles the sample path of a compound Poisson process, with the number of jumps $J$, jump sizes $\beta_j$, and jump locations $\chi_j$ corresponding to the number
of basis functions, basis function weights, and basis function parameters respectively. We can use a
compound Poisson process to define a prior over all such piecewise constant paths. More generally,
we can use a Lévy process to define a prior for $L(\omega)$.

Through the Lévy–Khintchine representation, the jump behavior of the prior is characterized by a
Lévy measure $\nu(d\beta\, d\chi)$ which controls the mean number of Poisson events in every region of the
parameter space, encoding the inductive biases of the model. As the number of parameters in this
framework is random, we use a form of trans-dimensional reversible jump Markov chain Monte
Carlo (RJ-MCMC) to sample the parameter space (Green, 2003).
Popular Lévy processes such as the gamma process, symmetric gamma process, and the symmetric
α-stable process each possess desirable properties for different situations. The gamma process is
able to produce strictly positive gamma distributed $\beta_j$ without transforming the output space. The
symmetric gamma process can produce both positive and negative $\beta_j$, and according to Wolpert et al.
(2011) can achieve nearly all the commonly used isotropic geostatistical covariance functions. The
symmetric α-stable process can produce heavy-tailed distributions for $\beta_j$ and is appropriate when
one might expect the basis expansion to be dominated by a few heavily weighted functions.

While one could dispense with Lévy processes and place Gaussian or Laplace priors on $\beta_j$ to obtain
$\ell_2$ or $\ell_1$ regularization on the expansions, respectively, a key benefit particular to these Lévy process
priors is that the implied priors on the coefficients yield even stronger complexity penalties than
$\ell_1$ regularization. This property encourages sparsity in the expansions and permits scalability of
our MCMC algorithm. Refer to the supplementary material for an illustration of the joint priors
on coefficients, which exhibit concave contours in contrast to the convex elliptical and diamond
contours of $\ell_2$ and $\ell_1$ regularization. Furthermore, in the log posterior for the Lévy process there
is a $\log(J!)$ complexity penalty term which further encourages sparsity in the expansions. Refer to
Clyde & Wolpert (2007) for further details.
3 Lévy Distributions over Kernels

In this section, we motivate our choice of prior over kernel functions and describe how to generate
samples from this prior distribution in practice.

3.1 Lévy Kernel Processes
By Bochner's Theorem (1959), a continuous stationary kernel can be represented as the Fourier dual
of a spectral density:

$$k(\tau) = \int_{\mathbb{R}^D} S(s)\, e^{2\pi i s^\top \tau}\, ds, \qquad S(s) = \int_{\mathbb{R}^D} k(\tau)\, e^{-2\pi i s^\top \tau}\, d\tau. \qquad (5)$$
Hence, the spectral density entirely characterizes a stationary kernel. Therefore, it can be desirable
to model the spectrum rather than the kernel, since we can then view kernel estimation through the
lens of density estimation. In order to emulate the sharp peaks that characterize frequency spectra
of natural phenomena, we model the spectral density with a location-scale mixture of Laplacian
components:
$$\phi_L(s, \chi_j) = \frac{\lambda_j}{2}\, e^{-\lambda_j |s - \mu_j|}, \qquad \chi_j \triangleq (\mu_j, \lambda_j) \in [0, f_{\max}] \times \mathbb{R}_+. \qquad (6)$$
Then the full specification of the symmetric spectral mixture is
$$S(s) = \frac{1}{2}\left[\tilde{S}(s) + \tilde{S}(-s)\right], \qquad \tilde{S}(s) = \sum_{j=1}^{J} \beta_j\, \phi_L(s, \chi_j). \qquad (7)$$
As Laplacian spikes have a closed form inverse Fourier transform, the spectral density S(s) represents the following kernel function:
$$k(\tau) = \sum_{j=1}^{J} \beta_j\, \frac{\lambda_j^2}{\lambda_j^2 + 4\pi^2 \tau^2}\, \cos(2\pi \mu_j \tau). \qquad (8)$$
The parameters $J, \beta_j, \mu_j, \lambda_j$ can be interpreted through Eq. (8). The total number of terms in the
mixture is $J$, while $\beta_j$ is the scale of the $j$-th frequency contribution, $\mu_j$ is its central frequency, and $\lambda_j$
governs how rapidly the term decays (a high $\lambda_j$ results in confident, long-term periodic extrapolation).
Other basis functions can be used in place of $\phi_L$ to model the spectrum as well. For example, if a
Gaussian mixture is chosen, along with maximum likelihood estimation for the learning procedure,
then we obtain the spectral mixture kernel (Wilson & Adams, 2013).
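Eq. (8) is cheap to evaluate directly. The mixture parameters below are made up purely for illustration:

```python
import math

def levy_sm_kernel(tau, betas, mus, lams):
    """Eq. (8): k(tau) = sum_j beta_j * lam_j^2 / (lam_j^2 + 4 pi^2 tau^2)
    * cos(2 pi mu_j tau)."""
    return sum(
        b * l ** 2 / (l ** 2 + 4 * math.pi ** 2 * tau ** 2)
        * math.cos(2 * math.pi * m * tau)
        for b, m, l in zip(betas, mus, lams)
    )

# two components: scales beta, central frequencies mu, decay rates lam (illustrative)
betas, mus, lams = [1.0, 0.5], [0.2, 1.0], [5.0, 20.0]
k0 = levy_sm_kernel(0.0, betas, mus, lams)
# at tau = 0 every cosine and Lorentzian factor equals 1, so k(0) = sum_j beta_j
assert abs(k0 - 1.5) < 1e-12
```

For large $\tau$ the Lorentzian factors shrink toward zero, so the covariance decays, while each cosine contributes periodicity at its central frequency.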
As the spectral density $S(s)$ takes the form of an adaptive expansion, we can define a Lévy prior
over all such densities and hence all corresponding kernels of the above form. For a chosen basis
function $\phi(s, \chi)$ and Lévy measure $\nu(d\beta\, d\chi)$ we say that $k(\tau)$ is drawn from a Lévy kernel process
(LKP), denoted as $k(\tau) \sim \mathcal{LKP}(\phi, \nu)$. Wolpert et al. (2011) discuss the necessary regularity
conditions for $\phi$ and $\nu$. In summary, we propose the following hierarchical model over functions:
$$f(x) \mid k(\tau) \sim \mathcal{GP}(0, k(\tau)), \qquad k(\tau) \sim \mathcal{LKP}(\phi, \nu), \qquad \tau = x - x'. \qquad (9)$$

3.2 Sampling Lévy Priors

We now discuss how to generate samples from the Lévy kernel process in practice. In short, the kernel parameters are drawn according to $\{J, \{(\beta_j, \chi_j)\}_{j=1}^{J}\} \sim \text{Lévy}(\nu(d\beta\, d\chi))$, and then Eq. (8) is used to evaluate $k \sim \mathcal{LKP}(\phi_L, \nu)$ at values of $\tau$.

Recall from Section 2.3 that the choice of Lévy measure $\nu$ is completely determined by the choice of the corresponding Lévy process and vice versa. Though the processes mentioned there produce sample paths with infinitely many jumps (and cannot be sampled directly), almost all jumps are infinitesimally small, and therefore these processes can be approximated in $L^2$ by a compound Poisson process with a jump size distribution truncated by $\epsilon$.

Figure 2 shows three samples from the Lévy process specified through Eq. (7) and their corresponding covariance kernels. We also show one GP realization for each of the kernel functions. By placing a Lévy process prior over spectral densities, we induce a Lévy kernel process prior over stationary covariance functions.

[Figure 2: Samples from a Lévy kernel mixture prior distribution. (top) Three spectra with Laplace components drawn from a Lévy process prior. (middle) The corresponding stationary covariance kernel functions and the prior mean with two standard deviations of the model, as determined by 10,000 samples. (bottom) GP samples with the respective covariance kernel functions.]
Once the desired Lévy process is chosen and the truncation bound is set, the basis expansion
parameters are generated by drawing $J \sim \text{Poisson}(\nu_\epsilon^+)$, and then drawing $J$ i.i.d. samples
$\beta_1, \ldots, \beta_J \sim \pi_\epsilon(d\beta)$, and $J$ i.i.d. samples $\chi_1, \ldots, \chi_J \sim \pi_\epsilon(d\chi)$. Refer to the supplementary
material for $L^2$ error bounds and formulas for $\nu_\epsilon^+ = \nu_\epsilon(\mathbb{R} \times \mathcal{X})$ for the gamma, symmetric gamma,
and symmetric α-stable processes.

The form of $\pi_\epsilon(\beta_j)$ also depends on the choice of Lévy process and can be found in the supplementary material, with further details in Wolpert et al. (2011). We choose to draw $\mu$ from an uninformed
uniform prior over a reasonable range in the frequency domain, and $\lambda$ from a gamma distribution,
$\lambda \sim \text{Gamma}(a_\lambda, b_\lambda)$. The choices for $a_\lambda$, $b_\lambda$, and the frequency limits are left as hyperparameters, which can have their own hyperprior distributions. After drawing the $3J$ values that specify
a Lévy process realization, the corresponding covariance function can be evaluated through the analytical expression for the inverse Fourier transform (e.g. Eq. (8) for Laplacian frequency mixture
components).
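The sampling procedure can be sketched as follows. The Poisson rate, the Gaussian placeholder for the jump sizes $\beta_j$, and the hyperparameter values are illustrative stand-ins: the actual rate $\nu_\epsilon^+$ and truncated jump-size density depend on the chosen Lévy process (gamma, symmetric gamma, or α-stable), with formulas in the supplementary material.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's inversion method; keeps the sketch dependency-free
    k, p, L = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_levy_kernel(nu_plus, f_max, a_lam, b_lam, rng):
    """Draw {J, (beta_j, mu_j, lam_j)} and return the induced kernel of Eq. (8)."""
    J = sample_poisson(nu_plus, rng)                   # J ~ Poisson(nu_eps^+)
    betas = [rng.gauss(0.0, 1.0) for _ in range(J)]    # placeholder jump sizes
    mus = [rng.uniform(0.0, f_max) for _ in range(J)]  # mu_j ~ Uniform[0, f_max]
    # gammavariate takes (shape, scale); b_lam here is treated as a rate
    lams = [rng.gammavariate(a_lam, 1.0 / b_lam) for _ in range(J)]

    def k(tau):  # Eq. (8)
        return sum(b * l ** 2 / (l ** 2 + 4 * math.pi ** 2 * tau ** 2)
                   * math.cos(2 * math.pi * m * tau)
                   for b, m, l in zip(betas, mus, lams))

    return k, betas, mus, lams

rng = random.Random(0)
kern, betas, mus, lams = sample_levy_kernel(5.0, 0.2, 2.0, 1.0, rng)
# k(0) = sum_j beta_j, since every cosine and Lorentzian factor equals 1 at tau = 0
```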
4 Scalable Inference
Given observed data $\mathcal{D} = \{x_i, y_i\}_{i=1}^{N}$, we wish to infer $p(y(x_*) \mid \mathcal{D}, x_*)$ over some test set of inputs
$x_*$ for interpolation and extrapolation. We model observations $y(x)$ with a hierarchical model:

$$y(x) \mid f(x) = f(x) + \epsilon(x), \qquad \epsilon(x) \overset{\text{iid}}{\sim} \mathcal{N}(0, \sigma^2), \qquad (10)$$
$$f(x) \mid k(\tau) \sim \mathcal{GP}(0, k(\tau)), \qquad \tau = x - x', \qquad (11)$$
$$k(\tau) \sim \mathcal{LKP}(\phi, \nu). \qquad (12)$$
Computing the posterior distributions by marginalizing over the LKP will yield a heavy-tailed non-Gaussian process for $y(x_*) = y_*$ given by an infinite Gaussian mixture model:

$$p(y_* \mid \mathcal{D}) = \int p(y_* \mid k, \mathcal{D})\, p(k \mid \mathcal{D})\, dk \approx \frac{1}{H} \sum_{h=1}^{H} p(y_* \mid k_h), \qquad k_h \sim p(k \mid \mathcal{D}). \qquad (13)$$
We compute this approximating sum using H RJ-MCMC samples (Green, 2003). Each sample draws a kernel from the posterior distribution, k_h ∼ p(k | D). Each draw of k_h enables us to sample from the posterior predictive distribution p(y_* | D), from which we can estimate the predictive mean and variance.
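In code, the approximation in Eq. (13) is just an average of standard GP predictives over kernel draws, with the mixture mean and variance obtained from the law of total variance. The RBF draws below are stand-ins for actual RJ-MCMC kernel samples:

```python
import numpy as np

def gp_predictive(k, X, y, x_star, noise_var):
    """Gaussian predictive mean/variance at x_star for one fixed stationary kernel k(tau)."""
    A = k(X[:, None] - X[None, :]) + noise_var * np.eye(len(X))
    k_vec = k(x_star - X)
    mean = k_vec @ np.linalg.solve(A, y)
    var = k(np.array([0.0]))[0] - k_vec @ np.linalg.solve(A, k_vec)
    return mean, var

def predictive_mixture(kernel_draws, X, y, x_star, noise_var):
    """Eq. (13): average the per-kernel predictives; return the mixture mean and variance."""
    stats = [gp_predictive(k, X, y, x_star, noise_var) for k in kernel_draws]
    means = np.array([m for m, _ in stats])
    variances = np.array([v for _, v in stats]) + noise_var  # include observation noise
    mix_mean = means.mean()
    mix_var = (variances + means**2).mean() - mix_mean**2    # law of total variance
    return mix_mean, mix_var

# Stand-in "posterior kernel draws": RBF kernels with different length-scales.
draws = [lambda tau, l=l: np.exp(-0.5 * (tau / l) ** 2) for l in (0.5, 1.0, 2.0)]
X = np.linspace(0.0, 5.0, 8)
y = np.sin(X)
mix_mean, mix_var = predictive_mixture(draws, X, y, x_star=2.5, noise_var=0.05)
```

Because the mixture variance adds the spread of the per-kernel means to the average per-kernel variance, accounting for kernel uncertainty can only widen the predictive bands, which is the effect discussed in the experiments.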
Although we have chosen a Gaussian observation model in Eq. (10) (conditioned on f(x)), all of the inference procedures we have introduced here would also apply to non-Gaussian likelihoods, such as for Poisson processes with Gaussian process intensity functions, or classification.
The sum in Eq. (13) requires drawing kernels from the distribution p(k|D). This is a difficult distribution to approximate, particularly because there is not a fixed number of parameters as J varies.
We employ RJ-MCMC, which extends the capability of conventional MCMC to allow sequential
samples of different dimensions to be drawn (Green, 2003). Thus, a posterior distribution is not limited to coefficients and other parameters of a fixed basis expansion, but can represent a changing number of basis functions, as required by the Lévy process construction described in the previous section. Indeed, RJ-MCMC can be used to automatically learn the appropriate number
of basis functions in an expansion. In the case of spectral kernel learning, inferring the number of
basis functions corresponds to automatically learning the important frequency contributions to a GP
kernel, which can lead to new interpretable insights into our data.
4.1 Initialization Considerations
The choice of an initialization procedure is often an important practical consideration for machine
learning tasks due to severe multimodality in a likelihood surface (Neal, 1996). In many cases,
however, we find that spectral kernel learning with RJ-MCMC can automatically learn salient frequency contributions with a simple initialization, such as a uniform covering over a broad range
of frequencies with many sharp peaks. The frequencies which are not important in describing the
data are quickly attenuated or removed within RJ-MCMC learning. Typically only a few hundred
RJ-MCMC iterations are needed to discover the salient frequencies in this way.
Wilson (2014) proposes an alternative structured approach to initialization in previous spectral kernel modelling work. First, pass the (squared) data through a Fourier transform to obtain an empirical spectral density, which can be treated as observed. Next, fit the empirical spectral density using a standard Gaussian mixture density estimation procedure, assuming a fixed number of mixture components. Then, use the learned parameters of the Gaussian mixture as an initialization of the spectral mixture kernel hyperparameters for Gaussian process marginal likelihood optimization. We observe successful adaptation of this procedure to our Lévy process method, replacing the approximation with Laplacian mixture terms and using the result to initialize RJ-MCMC.
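A minimal sketch of the empirical-spectrum step: the periodogram acts as the "observed" spectral density, and its strongest bins seed the initial frequency locations. Function names and defaults here are ours, not the paper's:

```python
import numpy as np

def empirical_spectrum(y, dt=1.0):
    """Periodogram of the mean-removed signal: a crude empirical spectral density."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    power = np.abs(np.fft.rfft(y)) ** 2 / len(y)
    freqs = np.fft.rfftfreq(len(y), d=dt)
    return freqs, power

def init_frequencies(y, n_init=10, dt=1.0):
    """Seed the chi_j locations with the n_init strongest periodogram bins."""
    freqs, power = empirical_spectrum(y, dt)
    order = np.argsort(power)[::-1][:n_init]
    return freqs[order], power[order]

# A 0.1 Hz sinusoid should put its energy in the 0.1 frequency bin.
t = np.arange(1000.0)
f0, p0 = init_frequencies(np.cos(2 * np.pi * 0.1 * t), n_init=1)
```

In practice one would fit a (Gaussian or Laplacian) mixture to the full periodogram rather than picking raw bins, but peak-picking already illustrates why this initialization lands RJ-MCMC near the salient frequencies.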
4.2 Scalability
As with other GP based kernel methods, the computational bottleneck lies in the evaluation of the log marginal likelihood during MCMC, which requires computing (K_{X,X} + σ²I)^{−1} y and log |K_{X,X} + σ²I| for an n × n kernel matrix K_{X,X} evaluated at the n training points X. A direct approach through computing the Cholesky decomposition of the kernel matrix requires O(n³) computations and O(n²) storage, restricting the size of training sets to O(10⁴). Furthermore, this computation must be performed at every iteration of RJ-MCMC, compounding standard computational constraints.

However, this bottleneck can be readily overcome through the Structured Kernel Interpolation approach introduced in Wilson & Nickisch (2015), which approximates the kernel matrix as K̃_{X,X′} = M_X K_{Z,Z} M_{X′}^⊤ for an exact kernel matrix K_{Z,Z} evaluated on a much smaller set of m ≪ n inducing points, and a sparse interpolation matrix M_X which facilitates fast computations. The calculation reduces to O(n + g(m)) computations and O(n + g(m)) storage. As described in Wilson & Nickisch (2015), we can impose Toeplitz structure on K_{Z,Z} for g(m) = m log m, allowing our RJ-MCMC procedure to train on massive datasets.
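The Toeplitz structure is what delivers g(m) = m log m: a symmetric Toeplitz K_{Z,Z} embeds in a circulant matrix whose matrix-vector products are FFT-based circular convolutions. A self-contained sketch of that fast matvec (not tied to any particular GP library):

```python
import numpy as np

def toeplitz_matvec(first_col, v):
    """Multiply the symmetric Toeplitz matrix K[i, j] = first_col[|i - j|] by v
    in O(m log m) time using a circulant embedding of size 2m - 2."""
    m = len(first_col)
    circ = np.concatenate([first_col, first_col[-2:0:-1]])  # circulant first column
    v_pad = np.concatenate([v, np.zeros(len(circ) - m)])
    out = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(v_pad)).real
    return out[:m]

# Dense reference for a small stationary kernel k(tau) = exp(-|tau| / 2) on a regular grid.
m = 64
col = np.exp(-np.arange(m) / 2.0)
K_dense = col[np.abs(np.arange(m)[:, None] - np.arange(m)[None, :])]
v = np.random.default_rng(0).standard_normal(m)
```

For a stationary kernel on a regular 1D inducing grid, K_{Z,Z} is exactly of this form, so every solve or log-determinant estimate built on matvecs inherits the m log m cost.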
5 Experiments
We conduct four experiments in total. In order to motivate our model for kernel learning in later experiments, we first demonstrate the ability of a Lévy process to recover, through direct regression, an observed noise-contaminated spectrum that is characteristic of sharply peaked naturally occurring spectra. In the second experiment we demonstrate the robustness of our RJ-MCMC sampler by automatically recovering the generative frequencies of a known kernel, even in the presence of significant noise contamination and poor initializations. In the third experiment we demonstrate the ability of our method to infer the spectrum of airline passenger data, to perform long-range extrapolations on real data, and to demonstrate the utility of accounting for uncertainty in the kernel. In the final experiment we demonstrate the scalability of our method through training the model on a 100,000 data point sound waveform. Code is available at https://github.com/pjang23/levy-spectral-kernel-learning.
5.1 Explicit Spectrum Modelling

We begin by applying a Lévy process directly for function modelling (known as LARK regression), with inference as described in Wolpert et al. (2011), and Laplacian basis functions. We choose an out-of-class test function proposed by Donoho & Johnstone (1993) that is standard in the wavelet literature. The spatially inhomogeneous function is defined to represent spectral densities that arise in scientific and engineering applications. Gaussian i.i.d. noise is added to give a signal-to-noise ratio of 7, to be consistent with previous studies of the test function (Wolpert et al., 2011).

Figure 3: Lévy process regression on a noisy test function (black). The fit (red) captures the locations and scales of each spike while ignoring noise, but falls slightly short at its modes since the black spikes are parameterized as (1 + |x|)^{−4} rather than Laplacian.
The noisy test function and LARK regression fit are shown in Figure 3. The synthetic spectrum is well characterized by the Lévy process, with no "false positive" basis function terms fitting the noise, owing to the strong regularization properties of the Lévy prior. By contrast, GP regression with an RBF kernel learns a length scale of 0.07 through maximum marginal likelihood training: the Gaussian process posterior can fit the sharp peaks in the test function only if it also overfits to the additive noise.

The point of this experiment is to show that the Lévy process with Laplacian basis functions forms a natural prior over spectral densities. In other words, samples from this prior will typically look like the types of spectra that occur in practice. Thus, this process will have a powerful inductive bias when used for kernel learning, which we explore in the next experiments.
5.2 Ground Truth Recovery

We next demonstrate the ability of our method to recover the generative frequencies of a known kernel and its robustness to noise and poor initializations. Data are generated from a GP with a kernel having two spectral Laplacian peaks, and partitioned into training and testing sets containing 256 points each. Moreover, the training data are contaminated with i.i.d. Gaussian noise (signal-to-noise ratio of 85%).

Based on these observed training data (depicted as black dots in Figure 4, right), we estimate the kernel of the Gaussian process by inferring its spectral density (Figure 4, left) using 1000 RJ-MCMC iterations. The empirical spectrum initialization described in section 4.1 results in the discovery of the two generative frequencies. Critically, we can also recover these salient frequencies even with a very poor initialization, as shown in Figure 4 (left).

Figure 4: Ground truth recovery of known frequency components. (left) The spectrum of the Gaussian process that was used to generate the noisy training data is shown in black. From these noisy data and the erroneous spectral initialization shown in dashed blue, the maximum a posteriori estimate of the spectral density (over 1000 RJ-MCMC steps) is shown in red. A SM kernel also identifies the salient frequencies, but with broader support, shown in magenta. (right) Noisy training data are shown with a scatterplot, with withheld testing data shown in green. The learned posterior predictive distribution (mean in black, with 95% credible set in grey) captures the test data.

For comparison, we also train a Gaussian SM kernel, initializing based on the empirical spectrum. The resulting kernel spectrum (Figure 4, magenta curve) does recover the salient frequencies, though with less confidence and higher overhead than even a poor initialization and spectral kernel learning with RJ-MCMC.
5.3 Spectral Kernel Learning for Long-Range Extrapolation
We next demonstrate the ability of our method to perform long-range extrapolation on real data. Figure 5 shows a time series of monthly airline passenger data from 1949 to 1961 (Hyndman, 2005). The data show a long-term rising trend as well as a short-term seasonal waveform, and an absence of white noise artifacts. As with Wilson & Adams (2013), the first 96 monthly data points are used to train the model and the last 48 months (4 years) are withheld as testing data, indicated in green. With an initialization from the empirical spectrum and 2500 RJ-MCMC steps, the model is able to automatically learn the necessary frequencies and the shape of the spectral density to capture both the rising trend and the seasonal waveform, allowing for accurate long-range extrapolations without pre-specifying the number of model components in advance.
Figure 5: Learning of airline passenger data. Training data are scatter-plotted, with withheld testing data shown in green. The learned posterior distribution with the proposed approach (mean in black, with 95% credible set in grey) captures the periodicity and the rising trend in the test data. The analogous 95% interval using a GP with a SM kernel is illustrated in magenta.
This experiment also demonstrates the impact of accounting for uncertainty in the kernel, as the withheld data often appear near or cross the upper bound of the 95% predictive bands of the SM fit, whereas our model yields wider and more conservative predictive bands that wholly capture the test data. As the SM extrapolations are highly sensitive to the choice of parameter values, fixing the parameters of the kernel will yield overconfident predictions. The Lévy process prior allows us to account for a range of possible kernel parameters, so we can achieve a more realistically broad coverage of possible extrapolations.
Note that the Lévy process over spectral densities induces a prior over kernel functions. Figure 6 shows a side-by-side comparison of covariance function draws from the prior and posterior distributions over kernels. We see that sample covariance functions from the prior vary quite significantly, but are concentrated in the posterior, with movement towards the empirical covariance function.
Figure 6: Covariance function draws from the kernel prior (left) and posterior (right) distributions,
with the empirical covariance function shown in black. After RJ-MCMC, the covariance distribution
centers upon the correct frequencies and order of magnitude.
5.4 Scalability Demonstration

A flexible and fully Bayesian approach to kernel learning can come with some additional computational overhead. Here we demonstrate the scalability that is achieved through the integration of SKI (Wilson & Nickisch, 2015) with our Lévy process model.

We consider a 100,000 data point waveform, taken from the field of natural sound modelling (Turner, 2010). A Lévy kernel process is trained on a sound texture sample of howling wind with the middle 10% removed. Training involved initialization from the signal empirical covariance and 500 RJ-MCMC samples, and took less than one hour using an Intel i7 3.4 GHz CPU and 8 GB of memory. Four distinct mixture components in the model were automatically identified through the RJ-MCMC procedure. The learned kernel is then used for GP infilling with 900 training points, taken by down-sampling the training data, which is then applied to the original 44,100 Hz natural sound file for infilling.

Figure 7: Learning of a natural sound texture. A close-up of the training interval is displayed with the true waveform data scatter-plotted. The learned posterior distribution (mean in black, with 95% credible set in grey) retains the periodicity of the signal within the corrupted interval. Three samples are drawn from the posterior distribution.
The GP posterior distribution over the region of interest is shown in Figure 7, along with sample realizations, which appear to capture the qualitative behavior of the waveform. This experiment demonstrates the applicability of our proposed kernel learning method to large datasets, and shows promise for extensions to higher dimensional data.
6 Discussion
We introduced a distribution over covariance kernel functions that is well suited for modelling quasi-periodic data. We have shown how to place a Lévy process prior over the spectral density of a stationary kernel, and the resulting hierarchical model allows the incorporation of kernel uncertainty into the predictive distribution. Through the spectral regularization properties of Lévy process priors, we found that our trans-dimensional sampling procedure is suitable for automatically performing inference over model order, and is robust over initialization strategies. Finally, we incorporated structured kernel interpolation into our training and inference procedures for linear time scalability, enabling experiments on large datasets. The key advances over conventional spectral mixture kernels are in being able to interpretably and automatically discover the number of mixture components, and in representing uncertainty over the kernel. Here, we considered one dimensional inputs and stationary processes to most clearly elucidate the key properties of Lévy kernel processes. However, one could generalize this process to multidimensional non-stationary kernel learning by jointly inferring properties of transformations over inputs alongside the kernel hyperparameters. Alternatively, one could consider neural networks as basis functions in the Lévy process, inferring distributions over the parameters of the network and the numbers of basis functions as a step towards automating neural network architecture construction.
Acknowledgements. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada (PGS-D 502888) and the National Science Foundation DGE 1144153 and
IIS-1563887 awards.
References
Bochner, S. Lectures on Fourier Integrals (AM-42), volume 42. Princeton University Press, 1959.

Clyde, Merlise A and Wolpert, Robert L. Nonparametric function estimation using overcomplete dictionaries. Bayesian Statistics, 8:91–114, 2007.

Donoho, D. and Johnstone, J.M. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1993.

Green, P.J. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.

Green, P.J. Trans-dimensional Markov chain Monte Carlo, chapter 6. Oxford University Press, 2003.

Hyndman, R.J. Time series data library. 2005. http://www-personal.buseco.monash.edu.au/~hyndman/TSDL/.

MacKay, David J.C. Introduction to Gaussian processes. In Bishop, Christopher M. (ed.), Neural Networks and Machine Learning, chapter 11, pp. 133–165. Springer-Verlag, 1998.

Neal, R.M. Bayesian Learning for Neural Networks. Springer Verlag, 1996. ISBN 0387947248.

Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006.

Turner, R. Statistical models for natural sounds. PhD thesis, University College London, 2010.

Wilson, Andrew Gordon. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.

Wilson, Andrew Gordon and Adams, Ryan Prescott. Gaussian process kernels for pattern discovery and extrapolation. International Conference on Machine Learning (ICML), 2013.

Wilson, Andrew Gordon and Nickisch, Hannes. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). International Conference on Machine Learning (ICML), 2015.

Wolpert, R.L., Clyde, M.A., and Tu, C. Stochastic expansions using continuous dictionaries: Lévy adaptive regression kernels. The Annals of Statistics, 39(4):1916–1962, 2011.
Deep Hyperspherical Learning
Weiyang Liu¹, Yan-Ming Zhang², Xingguo Li³,¹, Zhiding Yu⁴, Bo Dai¹, Tuo Zhao¹, Le Song¹
¹Georgia Institute of Technology  ²Institute of Automation, Chinese Academy of Sciences
³University of Minnesota  ⁴Carnegie Mellon University
{wyliu,tourzhao}@gatech.edu, ymzhang@nlpr.ia.ac.cn, lixx1661@umn.edu
Abstract
Convolution as inner product has been the founding basis of convolutional neural
networks (CNNs) and the key to end-to-end visual representation learning. Benefiting from deeper architectures, recent CNNs have demonstrated increasingly
strong representation abilities. Despite such improvement, the increased depth and
larger parameter space have also led to challenges in properly training a network.
In light of such challenges, we propose hyperspherical convolution (SphereConv),
a novel learning framework that gives angular representations on hyperspheres.
We introduce SphereNet, deep hyperspherical convolution networks that are distinct from conventional inner product based convolutional networks. In particular,
SphereNet adopts SphereConv as its basic convolution operator and is supervised
by generalized angular softmax loss - a natural loss formulation under SphereConv.
We show that SphereNet can effectively encode discriminative representation and
alleviate training difficulty, leading to easier optimization, faster convergence and
comparable (even better) classification accuracy over convolutional counterparts.
We also provide some theoretical insights for the advantages of learning on hyperspheres. In addition, we introduce the learnable SphereConv, i.e., a natural
improvement over prefixed SphereConv, and SphereNorm, i.e., hyperspherical
learning as a normalization method. Experiments have verified our conclusions.
1 Introduction
Recently, deep convolutional neural networks have led to significant breakthroughs on many vision
problems such as image classification [9, 18, 19, 6], segmentation [3, 13, 1], object detection [3, 16],
etc. While showing stronger representation power over many conventional hand-crafted features,
CNNs often require a large amount of training data and face certain training difficulties such as
overfitting, vanishing/exploding gradient, covariate shift, etc. The increasing depth of recently proposed CNN architectures has further aggravated these problems.
To address the challenges, regularization techniques such as dropout [9] and orthogonality parameter constraints [21] have been proposed. Batch normalization [8] can also be viewed as an implicit regularization to the network, by normalizing each layer's output distribution. Recently, deep residual learning [6] emerged as a promising way to overcome vanishing gradients in deep networks. However, [20] pointed out that residual networks (ResNets) are essentially exponential ensembles of shallow networks, which avoid the vanishing/exploding gradient problem but do not provide direct solutions. As a result, training an ultra-deep network still remains an open problem. Besides
vanishing/exploding gradient, network optimization is also very sensitive to initialization. Finding
better initializations is thus widely studied [5, 14, 4]. In general, having a large parameter space is
double-edged considering the benefit of representation power and the associated training difficulties.
Therefore, proposing better learning frameworks to overcome such challenges remains important.
In this paper, we introduce a novel convolutional learning framework that can effectively alleviate
training difficulties, while giving better performance over dot product based convolution. Our idea
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Deep hyperspherical convolutional network architecture.
is to project parameter learning onto unit hyperspheres, where layer activations only depend on
the geodesic distance between kernels and input signals¹ instead of their inner products. To this
end, we propose the SphereConv operator as the basic module for our network layers. We also
propose softmax losses accordingly under such representation framework. Specifically, the proposed
softmax losses supervise network learning by also taking the SphereConv activations from the last
layer instead of inner products. Note that the geodesic distances on a unit hypersphere are the angles between inputs and kernels. Therefore, the learning objective is essentially a function of the input angles, and we call it the generalized angular softmax loss in this paper. The resulting architecture is the
hyperspherical convolutional network (SphereNet), which is shown in Fig. 1.
Our key motivation to propose SphereNet is that angular information matters in convolutional
representation learning. We argue this motivation from several aspects: training stability, training
efficiency, and generalization power. SphereNet can also be viewed as an implicit regularization to
the network by normalizing the activation distributions. The weight norm is no longer important since
the entire network operates only on angles. As a result, the ℓ2 weight decay is also no longer needed in SphereNet. SphereConv to some extent also alleviates the covariate shift problem [8]. The outputs of SphereConv operators are bounded between −1 and 1 (0 and 1 if considering ReLU), which makes the variance of each output also bounded.
Our second intuition is that angles preserve the most abundant discriminative information in convolutional learning. We gain such intuition from 2D Fourier transform, where an image is decomposed by
the combination of a set of templates with magnitude and phase information in 2D frequency domain.
If one reconstructs an image with original magnitudes and random phases, the resulting images are generally not recognizable. However, if one reconstructs the image with random magnitudes and original phases, the resulting images are still recognizable. This shows that the most important structural information in an image for visual recognition is encoded by the phases. This fact inspires us
to project the network learning into angular space. In terms of low-level information, SphereConv is
able to preserve the shape, edge, texture and relative color. SphereConv can learn to selectively drop
the color depth but preserve the RGB ratio. Thus the semantic information of an image is preserved.
SphereNet can also be viewed as a non-trivial generalization of [12, 11]. By proposing a loss that
discriminatively supervises the network on a hypersphere, [11] achieves state-of-the-art performance
on face recognition. However, the rest of the network remains a conventional convolution network.
In contrast, SphereNet not only generalizes the hyperspherical constraint to every layer, but also
to different nonlinearity functions of input angles. Specifically, we propose three instances of
SphereConv operators: linear, cosine and sigmoid. The sigmoid SphereConv is the most flexible one
with a parameter controlling the shape of the angular function. As a simple extension to the sigmoid
SphereConv, we also present a learnable SphereConv operator. Moreover, the proposed generalized
angular softmax (GA-Softmax) loss naturally generalizes the angular supervision in [11] using the
SphereConv operators. Additionally, the SphereConv can serve as a normalization method that is
comparable to batch normalization, leading to an extension to spherical normalization (SphereNorm).
SphereNet can be easily applied to other network architectures such as GoogLeNet [19], VGG [18]
and ResNet [6]. One simply needs to replace the convolutional operators and the loss functions with
the proposed SphereConv operators and hyperspherical loss functions. In summary, SphereConv can
be viewed as an alternative to the original convolution operators, and serves as a new measure of
correlation. SphereNet may open up an interesting direction for exploring neural networks. We ask
the question: is the inner-product-based convolution operator an optimal correlation measure for
all tasks? Our answer to this question is likely to be "no".
1 Without loss of generality, we study CNNs here, but our method is generalizable to any other neural nets.

2 Hyperspherical Convolutional Operator

2.1 Definition
The convolutional operator in CNNs is simply a linear matrix multiplication, written as F(w, x) =
w⊤x + b_F, where w is a convolutional filter, x denotes a local patch from the bottom feature map,
and b_F is the bias. The matrix multiplication here essentially computes the similarity between the
local patch and the filter. Thus the standard convolution layer can be viewed as patch-wise matrix
multiplication. Different from the standard convolutional operator, the hyperspherical convolutional
(SphereConv) operator computes the similarity on a hypersphere and is defined as:

F_s(w, x) = g(θ_(w,x)) + b_{F_s},    (1)

where θ_(w,x) is the angle between the kernel parameter w and the local patch x, g(θ_(w,x)) denotes
a function of θ_(w,x) (usually a monotonically decreasing function), and b_{F_s} is the bias. To simplify
analysis and discussion, the bias terms are usually left out. The angle θ_(w,x) can be interpreted
as the geodesic distance (arc length) between w and x on a unit hypersphere. In contrast to the
convolutional operator that works in the entire space, SphereConv only focuses on the angles between
local patches and the filters, and therefore operates on the hypersphere space. In this paper, we present
three specific instances of the SphereConv operator. To facilitate the computation, we constrain the
output of SphereConv operators to [−1, 1] (although it is not a necessary requirement).
Linear SphereConv. In the linear SphereConv operator, g is a linear function of θ_(w,x), with the form:

g(θ_(w,x)) = a·θ_(w,x) + b,    (2)

where a and b are parameters of the linear SphereConv operator. In order to constrain the output
range to [−1, 1] while θ_(w,x) ∈ [0, π], we use a = −2/π and b = 1 (not necessarily an optimal design).
Figure 2: SphereConv operators.

Cosine SphereConv. The cosine SphereConv operator is a nonlinear function of θ_(w,x), with its g being of the form

g(θ_(w,x)) = cos(θ_(w,x)),    (3)

which can be reformulated as w⊤x / (||w||_2 ||x||_2). Therefore, it can be viewed as a doubly
normalized convolutional operator, which bridges the SphereConv operator and the convolutional operator.

Sigmoid SphereConv. The sigmoid SphereConv operator is derived from the sigmoid function and its g can be written as

g(θ_(w,x)) = [(1 + exp(−π/(2k))) / (1 − exp(−π/(2k)))] · [(1 − exp(θ_(w,x)/k − π/(2k))) / (1 + exp(θ_(w,x)/k − π/(2k)))],    (4)
where k > 0 is the parameter that controls the curvature of the function. When k is close to 0,
g(θ_(w,x)) approximates the step function. When k becomes larger, g(θ_(w,x)) behaves more like a linear
function, i.e., the linear SphereConv operator. The sigmoid SphereConv is one instance of the parametric
SphereConv family. With more parameters introduced, a parametric SphereConv can have
richer representation power. To increase the flexibility of the parametric SphereConv, we will discuss
the case where these parameters can be jointly learned via back-prop later in the paper.
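To make the three operator instances concrete, the following minimal numpy sketch (our own illustration with hypothetical function names, not the authors' implementation) evaluates g(θ) for a single kernel/patch pair:

```python
import numpy as np

def g_linear(theta):
    # Eq. (2) with a = -2/pi, b = 1: maps theta in [0, pi] onto [1, -1]
    return -2.0 / np.pi * theta + 1.0

def g_cosine(theta):
    # Eq. (3): equivalent to a doubly normalized convolution
    return np.cos(theta)

def g_sigmoid(theta, k=0.3):
    # Eq. (4): the constant prefactor rescales the output so g(0) = 1 and g(pi) = -1
    pref = (1 + np.exp(-np.pi / (2 * k))) / (1 - np.exp(-np.pi / (2 * k)))
    u = np.exp(theta / k - np.pi / (2 * k))
    return pref * (1 - u) / (1 + u)

def sphere_conv(w, x, g=g_cosine):
    # Eq. (1) without the bias term: a function of the angle between w and x
    cos_t = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return g(theta)
```

All three choices of g satisfy g(0) = 1 and g(π) = −1, matching the curves in Fig. 2.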
2.2 Optimization
The optimization of the SphereConv operators is nearly the same as that of the convolutional operator
and also follows standard back-propagation. Using the chain rule, we have the gradients of the
SphereConv with respect to the weights and the feature input:
∂g(θ_(w,x))/∂w = (∂g(θ_(w,x))/∂θ_(w,x)) · (∂θ_(w,x)/∂w),   ∂g(θ_(w,x))/∂x = (∂g(θ_(w,x))/∂θ_(w,x)) · (∂θ_(w,x)/∂x).    (5)

For different SphereConv operators, both ∂θ_(w,x)/∂w and ∂θ_(w,x)/∂x are the same, so the only difference
lies in the ∂g(θ_(w,x))/∂θ_(w,x) part. For ∂θ_(w,x)/∂w and ∂θ_(w,x)/∂x, we have

∂θ_(w,x)/∂w = ∂ arccos(w⊤x / (||w||_2 ||x||_2)) / ∂w,   ∂θ_(w,x)/∂x = ∂ arccos(w⊤x / (||w||_2 ||x||_2)) / ∂x,    (6)

which are straightforward to compute and therefore neglected here. Because ∂g(θ_(w,x))/∂θ_(w,x) for the
linear SphereConv, the cosine SphereConv and the sigmoid SphereConv are a, −sin(θ_(w,x)) and
−2·exp(θ_(w,x)/k − π/(2k)) / (k·(1 + exp(θ_(w,x)/k − π/(2k)))²)
respectively, all these partial gradients can be easily computed.
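As a quick sanity check on the closed-form derivative of the sigmoid g (a sketch of ours; note we carry the constant prefactor of Eq. (4) through the derivative), one can compare against a central finite difference:

```python
import numpy as np

k = 0.3
pref = (1 + np.exp(-np.pi / (2 * k))) / (1 - np.exp(-np.pi / (2 * k)))

def g_sig(theta):
    # sigmoid SphereConv g from Eq. (4)
    u = np.exp(theta / k - np.pi / (2 * k))
    return pref * (1 - u) / (1 + u)

def dg_sig(theta):
    # d/dtheta of pref*(1-u)/(1+u), with u = exp(theta/k - pi/(2k)) and du/dtheta = u/k
    u = np.exp(theta / k - np.pi / (2 * k))
    return pref * (-2.0 * u) / (k * (1 + u) ** 2)

theta, eps = 1.0, 1e-6
numeric = (g_sig(theta + eps) - g_sig(theta - eps)) / (2 * eps)
# numeric and dg_sig(theta) agree up to finite-difference error
```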
2.3 Theoretical Insights
We provide a fundamental analysis of the cosine SphereConv operator in the case of a linear neural
network to justify that the SphereConv operator can improve the conditioning of the problem.
Specifically, we consider one layer of a linear neural network, where the observation is F = U*V*⊤
(ignoring the bias), U* ∈ R^{n×k} is the weight, and V* ∈ R^{m×k} is the input that embeds weights from
previous layers. Without loss of generality, we assume the rows satisfy ||U_{i,:}||_2 = ||V_{j,:}||_2 = 1
for all i = 1, . . . , n and j = 1, . . . , m, and consider

min_{U ∈ R^{n×k}, V ∈ R^{m×k}} G(U, V) = (1/2)·||F − UV⊤||_F².    (7)
(7)
This is closely related with the matrix factorization and (7) can be also viewed as the expected version
for the matrix sensing problem [10]. The following lemma demonstrates a critical scaling issue of (7)
for U and V that significantly deteriorate the conditioning without changing the objective of (7).
Lemma 1. Consider a pair of global optimal points U, V satisfying F = UV⊤ and Tr(V⊤V ⊗
I_n) ≥ Tr(U⊤U ⊗ I_m). For any real c > 1, let Ũ = cU and Ṽ = V/c; then we have

κ(∇²G(Ũ, Ṽ)) = Ω(c²·κ(∇²G(U, V))), where κ = λ_max/λ_min is the restricted condition number, with
λ_max being the largest eigenvalue and λ_min being the smallest nonzero eigenvalue.
Lemma 1 implies that the conditioning of problem (7) at an unbalanced global optimum scaled by
a constant c is Ω(c²) times larger than the conditioning of the problem at a balanced global optimum.
Note that λ_min = 0 may happen, thus we consider the restricted condition number here. Similar results hold
beyond global optima. This is an undesired geometric structure, which further leads to slow and
unstable optimization procedures, e.g., using stochastic gradient descent (SGD). This motivates us to
consider the SphereConv operator discussed above, which is equivalent to projecting data onto the
hypersphere and leads to a better conditioned problem.
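A small numerical illustration of the scaling issue (a sketch of ours, not a proof of Lemma 1): rescaling (U, V) to (cU, V/c) leaves the objective of (7) unchanged, yet the two gradient blocks of G become imbalanced by a factor of c², which is exactly the kind of geometry that degrades conditioning:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 6, 5, 3
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))
F = U @ V.T + 0.1 * rng.standard_normal((n, m))  # noisy observation

def obj(U, V):
    # G(U, V) = 0.5 * ||F - U V^T||_F^2, Eq. (7)
    return 0.5 * np.linalg.norm(F - U @ V.T) ** 2

def grads(U, V):
    R = U @ V.T - F          # residual
    return R @ V, R.T @ U    # dG/dU, dG/dV

c = 10.0
gU, gV = grads(U, V)
gUc, gVc = grads(c * U, V / c)
# objective unchanged, but dG/dU shrinks by c and dG/dV grows by c:
ratio = (np.linalg.norm(gVc) / np.linalg.norm(gUc)) / (np.linalg.norm(gV) / np.linalg.norm(gU))
```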
Next, we consider our proposed cosine SphereConv operator for one layer of the linear neural
network. Based on our previous discussion of SphereConv, we consider an equivalent problem:

min_{U ∈ R^{n×k}, V ∈ R^{m×k}} G_S(U, V) = (1/2)·||F − D_U U V⊤ D_V||_F²,    (8)

where D_U = diag(1/||U_{1,:}||_2, . . . , 1/||U_{n,:}||_2) ∈ R^{n×n} and D_V = diag(1/||V_{1,:}||_2, . . . , 1/||V_{m,:}||_2) ∈ R^{m×m} are diagonal matrices. We provide an analogous result to Lemma 1 for (8).
Lemma 2. For any real c > 1, let Ũ = cU and Ṽ = V/c; then we have λ_i(∇²G_S(Ũ, Ṽ)) =
λ_i(∇²G_S(U, V)) for all i ∈ [(n + m)k] = {1, 2, . . . , (n + m)k} and κ(∇²G_S(Ũ, Ṽ)) =
κ(∇²G_S(U, V)), where κ is defined as in Lemma 1.
We see from Lemma 2 that the increase in condition number caused by the scaling is eliminated by
the SphereConv operator in the entire parameter space. This enhances the geometric structure over
(7), which further results in improved convergence of optimization procedures. If we extend the result
from one layer to multiple layers, the scaling issue propagates. Roughly speaking, when we train N
layers, in the worst case, the conditioning of the problem can be c^N times worse with a scaling factor
c > 1. The analysis is similar to the one-layer case, but the computation of the Hessian matrix and
its associated eigenvalues is much more complicated. Though our analysis is elementary, it provides
an important insight and a straightforward illustration of the advantage of using the SphereConv
operator. The extension to more general cases, e.g., using a nonlinear activation function (e.g., ReLU),
requires much more sophisticated analysis to bound the eigenvalues of the Hessian of the objectives, which
is deferred to future investigation.
2.4 Discussion
Comparison to convolutional operators. Convolutional operators compute the inner product between the kernels and the local patches, while the SphereConv operators compute a function of the
angle between the kernels and local patches. If we normalize the convolutional operator in terms of
both w and x, then the normalized convolutional operator is equivalent to the cosine SphereConv
operator. Essentially, they use different metric spaces. Interestingly, SphereConv operators can also
be interpreted as a function of the geodesic distance on a unit hypersphere.
Extension to fully connected layers. Because a fully connected layer can be viewed as a special
convolution layer with the kernel size equal to the input feature map, the SphereConv operators can
easily be generalized to fully connected layers. This also indicates that SphereConv operators could
be used not only in deep CNNs, but also in linear models like logistic regression, SVM, etc.
Network Regularization. Because the norm of the weights is no longer crucial, we stop using ℓ2
weight decay to regularize the network. SphereNets are learned on hyperspheres, so we regularize the
network based on angles instead of norms. To avoid redundant kernels, we want the kernels uniformly
spaced around the hypersphere, but it is difficult to formulate such constraints. As a tradeoff, we
encourage orthogonality. Given a set of kernels W where the i-th column W_i is the weights of
the i-th kernel, the network will also minimize ||W⊤W − I||_F², where I is an identity matrix.
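A minimal sketch of this regularizer and its gradient (function names are ours, not from the paper; the gradient of ||W⊤W − I||_F² with respect to W is 4·W(W⊤W − I)):

```python
import numpy as np

def orth_penalty(W):
    # ||W^T W - I||_F^2 over the kernel matrix W (each column is one kernel)
    M = W.T @ W - np.eye(W.shape[1])
    return np.sum(M ** 2)

def orth_penalty_grad(W):
    # gradient of the penalty w.r.t. W: 4 * W (W^T W - I)
    return 4.0 * W @ (W.T @ W - np.eye(W.shape[1]))
```

The penalty vanishes exactly when the kernels are orthonormal, and its gradient can be added to the task-loss gradient during training.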
Determining the optimal SphereConv. In practice, we could treat the type of SphereConv as
a hyperparameter and use cross validation to determine which SphereConv is the most suitable
one. For the sigmoid SphereConv, we could also use cross validation to determine its hyperparameter
k. In general, we need to specify a SphereConv operator before using it, but fixing a SphereConv in advance
may not be an optimal choice (even with cross validation). What if we treat the hyperparameter k in
the sigmoid SphereConv as a learnable parameter and use back-prop to learn it? Following this idea,
we further extend the sigmoid SphereConv to a learnable SphereConv in the next subsection.
SphereConv as normalization. Because SphereConv can partially address the covariate shift, it
can also serve as a normalization method similar to batch normalization. Differently, SphereConv
normalizes the network in terms of the feature map and kernel weights, while batch normalization works on
mini-batches. Thus they do not contradict each other and can be used simultaneously.
2.5 Extension: Learnable SphereConv and SphereNorm
Learnable SphereConv. It is a natural idea to replace the current prefixed SphereConv with a
learnable one. There are plenty of parametrization choices for the SphereConv to be learnable,
and we present a very simple learnable SphereConv operator based on the sigmoid SphereConv.
Because the sigmoid SphereConv has a hyperparameter k, we can treat it as a learnable parameter
that is updated by back-prop. In back-prop, k is updated using k^(t+1) = k^t − η·(∂L/∂k), where t denotes
the current iteration index, η is the learning rate, and ∂L/∂k can be easily computed by the chain rule. Usually, we also require
k to be positive. The learning of k is in fact similar to the parameter learning in PReLU [5].
SphereNorm: hyperspherical learning as a normalization method. Similar to batch normalization (BatchNorm), we note that hyperspherical learning can also be viewed as a way of
normalization, because SphereConv constrains the output values to [−1, 1] ([0, 1] after ReLU). Different from BatchNorm, SphereNorm normalizes the network based on spatial information and the
weights, so it has nothing to do with mini-batch statistics. Because SphereNorm normalizes both
the input and the weights, it can avoid the covariate shift due to large weights and large inputs, while
BatchNorm can only prevent the covariate shift caused by the inputs. In this sense, it will work better
than BatchNorm when the batch size is small. Besides, SphereConv is more flexible in terms of
design choices (e.g., linear, cosine, and sigmoid) and each may lead to different advantages.
Similar to BatchNorm, we can use a rescaling strategy for SphereNorm. Specifically, we rescale
the output of SphereConv via α·F_s(w, x) + β, where α and β are learned by back-prop (similar to
BatchNorm, the rescaling parameters can be either learned or prefixed). In fact, SphereNorm does not
contradict BatchNorm at all and can be used simultaneously with BatchNorm. Interestingly,
we find that using both is empirically better than using either one alone.
3 Learning Objective on Hyperspheres
For learning on hyperspheres, we can either use conventional loss functions such as the softmax loss,
or use loss functions that are tailored for the SphereConv operators. We present some possible
choices for these tailored loss functions.
Weight-normalized Softmax Loss. The input feature and its label are denoted as x_i and y_i, respectively. The original softmax loss can be written as L = (1/N)·Σ_i L_i = (1/N)·Σ_i −log(e^{f_{y_i}} / Σ_j e^{f_j}), where N
is the number of training samples and f_j is the score of the j-th class (j ∈ [1, K], K is the number of
classes). The class score vector f is usually the output of a fully connected layer W, so we have
f_j = W_j⊤x_i + b_j and f_{y_i} = W_{y_i}⊤x_i + b_{y_i}, in which x_i, W_j, and W_{y_i} are the i-th training sample, the
j-th and the y_i-th column of W, respectively. We can rewrite L_i as

L_i = −log( e^{W_{y_i}⊤x_i + b_{y_i}} / Σ_j e^{W_j⊤x_i + b_j} )
    = −log( e^{||W_{y_i}||·||x_i||·cos(θ_{y_i,i}) + b_{y_i}} / Σ_j e^{||W_j||·||x_i||·cos(θ_{j,i}) + b_j} ),    (9)
where θ_{j,i} (0 ≤ θ_{j,i} ≤ π) is the angle between the vector W_j and x_i. The decision boundary of the
original softmax loss is determined by the vector f. Specifically, in the binary-class case, the
decision boundary of the softmax loss is W_1⊤x + b_1 = W_2⊤x + b_2. Considering the intuition of the
SphereConv operators, we want to make the decision boundary depend only on the angles. To this
end, we normalize the weights (||W_j|| = 1) and zero out the biases (b_j = 0), following the intuition in
[11] (sometimes we could keep the biases while the data is imbalanced). The decision boundary becomes
||x||·cos(θ_1) = ||x||·cos(θ_2). Similar to SphereConv, we can generalize the decision boundary to
||x||·g(θ_1) = ||x||·g(θ_2), so the weight-normalized softmax (W-Softmax) loss can be written as
L_i = −log( e^{||x_i||·g(θ_{y_i,i})} / Σ_j e^{||x_i||·g(θ_{j,i})} ),    (10)
where g(θ) can take the form of the linear SphereConv, cosine SphereConv, or sigmoid SphereConv.
Thus we also term these three different weight-normalized loss functions the linear W-Softmax loss,
cosine W-Softmax loss, and sigmoid W-Softmax loss, respectively.
Generalized Angular Softmax Loss. Inspired by [11], we use a multiplicative parameter m to impose margins on hyperspheres. We propose a generalized angular softmax (GA-Softmax) loss which
extends the W-Softmax loss to a loss function that favors large angular margin feature distribution. In
general, the GA-Softmax loss is formulated as
L_i = −log( e^{||x_i||·g(m·θ_{y_i,i})} / (e^{||x_i||·g(m·θ_{y_i,i})} + Σ_{j≠y_i} e^{||x_i||·g(θ_{j,i})}) ),    (11)
where g(θ) can also have the linear, cosine and sigmoid forms, similar to the W-Softmax loss. We can
see that the A-Softmax loss [11] is exactly the cosine GA-Softmax loss and the W-Softmax loss is the special case
(m = 1) of the GA-Softmax loss. Note that we usually require θ_{j,i} ∈ [0, π/m], because cos(θ_{j,i}) is only
monotonically decreasing in [0, π]. To address this, [12, 11] construct a monotonically decreasing
function recursively using the [0, π/m] part of cos(m·θ_{j,i}). Although this indeed partially addresses the
issue, it may introduce a number of saddle points (w.r.t. W) in the loss surfaces. Originally, ∂g/∂θ is
close to 0 only when θ is close to 0 or π. However, in L-Softmax [12] or A-Softmax (cosine
GA-Softmax), this is not the case: ∂g/∂θ will be 0 when θ = kπ/m, k = 0, . . . , m, which may cause
instability in training. The sigmoid GA-Softmax loss has similar issues. However, if we use
the linear GA-Softmax loss, this problem is automatically solved and the training will likely
become more stable in practice. There are also many choices of g(θ) for designing a specific
GA-Softmax loss, and each one has different optimization dynamics. The optimal one may depend
on the task itself (e.g., the cosine GA-Softmax has been shown effective in deep face recognition [11]).
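The loss in Eq. (11) can be sketched as follows (a numpy illustration of ours with a simplification: we clip m·θ at π instead of restricting θ to [0, π/m]; with m = 1 it reduces to the W-Softmax loss of Eq. (10)):

```python
import numpy as np

def ga_softmax_loss(x, W, y, g=np.cos, m=1):
    # x: (d,) feature; W: (d, K) class weight matrix; y: target class index
    W = W / np.linalg.norm(W, axis=0, keepdims=True)   # weight normalization, biases zeroed
    xnorm = np.linalg.norm(x)
    theta = np.arccos(np.clip(W.T @ x / xnorm, -1.0, 1.0))
    logits = xnorm * g(theta)
    # angular margin on the target class (clipping m*theta at pi is our simplification)
    logits[y] = xnorm * g(np.minimum(m * theta[y], np.pi))
    logits = logits - logits.max()                     # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[y])
```

Since g is decreasing on [0, π], a larger m shrinks the target-class score, so the margin loss (m > 1) upper-bounds the m = 1 loss for the same input.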
Discussion of Sphere-normalized Softmax Loss. We have also considered the sphere-normalized
softmax loss (S-Softmax), which simultaneously normalizes the weights (W_j) and the feature x.
It seems to be a more natural choice than W-Softmax for the proposed SphereConv and makes the
entire framework more unified. In fact, we have tried this, and the empirical results are not that good,
because the optimization seems to become very difficult. If we use the S-Softmax loss to train a
network from scratch, we cannot get reasonable results without using extra tricks, which is the reason
we do not use it in this paper. For completeness, we give some discussion here. Normally, it is very
difficult to make the S-Softmax loss value small enough, because we normalize the features to the
unit hypersphere. To make this loss work, we need to either normalize the feature to a value much
larger than 1 (a hypersphere with a large radius) and then tune the learning rate, or first train the network
with the softmax loss from scratch and then use the S-Softmax loss for finetuning.
4 Experiments and Results

4.1 Experimental Settings
We first perform a comprehensive ablation study and exploratory experiments for the proposed
SphereNets, and then evaluate the SphereNets on image classification. For the image classification
task, we perform experiments on the CIFAR10 (only with random left-right flipping), CIFAR10+ (with
full data augmentation), CIFAR100 and large-scale Imagenet 2012 datasets [17].
General Settings. For CIFAR10, CIFAR10+ and CIFAR100, we follow the same settings as
[7, 12]. For the Imagenet 2012 dataset, we mostly follow the settings in [9]. We attach more details in
Appendix B. For fairness, batch normalization and ReLU are used in all methods if not specified, and
all comparisons are made fair: the compared CNNs have the same architectures as the SphereNets.
Training. Appendix A gives the network details. For CIFAR-10 and CIFAR-100, we use ADAM,
starting with a learning rate of 0.001. The batch size is 128 if not specified. The learning rate is divided
by 10 at 34K and 54K iterations, and the training stops at 64K. For both the A-Softmax and GA-Softmax
losses, we use m = 4. For Imagenet-2012, we use SGD with momentum 0.9. The learning rate
starts at 0.1 and is divided by 10 at 200K and 375K iterations. The training stops at 550K iterations.
4.2 Ablation Study and Exploratory Experiments
We perform a comprehensive ablation and exploratory study of the SphereNet and evaluate every
component individually in order to analyze its advantages. We use the 9-layer CNN as the default (if not
specified) and perform image classification on CIFAR-10 without any data augmentation.
SphereConv Operator / Loss | Original Softmax | Sigmoid (0.1) W-Softmax | Sigmoid (0.3) W-Softmax | Sigmoid (0.7) W-Softmax | Linear W-Softmax | Cosine W-Softmax | A-Softmax (m=4) | GA-Softmax (m=4)
Sigmoid (0.1)  | 90.97 | 90.91 | 90.89 | 90.88 | 91.07 | 91.13 | 91.87 | 91.99
Sigmoid (0.3)  | 91.08 | 91.44 | 91.37 | 91.21 | 91.34 | 91.28 | 92.13 | 92.38
Sigmoid (0.7)  | 91.05 | 91.16 | 91.47 | 91.07 | 90.99 | 91.18 | 92.22 | 92.36
Linear         | 91.10 | 90.93 | 91.42 | 90.96 | 90.95 | 91.24 | 92.21 | 92.32
Cosine         | 90.89 | 90.88 | 91.08 | 91.22 | 91.17 | 90.99 | 91.94 | 92.19
Original Conv  | 90.58 | 90.58 | 90.73 | 90.78 | 91.08 | 90.68 | 91.78 | 91.80

Table 1: Classification accuracy (%) with different loss functions.
Comparison of different loss functions. We first evaluate all the SphereConv operators with
different loss functions. All the compared SphereConv operators use the 9-layer CNN architecture
in this experiment. From the results in Table 1, one can observe that the SphereConv operators
consistently outperform the original convolutional operator. For the compared loss functions other than
A-Softmax and GA-Softmax, the effect on accuracy seems less crucial than that of the SphereConv
operators, but sigmoid W-Softmax is more flexible and thus works slightly better than the others.
The sigmoid SphereConv operators with a suitably chosen parameter also work better than the
others. Note that the W-Softmax loss is in fact comparable to the original softmax loss, because our
SphereNet optimizes angles and the W-Softmax is derived from the original softmax loss. Therefore,
it is fair to compare SphereNet with W-Softmax against CNN with the softmax loss. From Table 1,
we can see that SphereConv operators are consistently better than the convolutional operators. When
we use a large-margin loss function like A-Softmax [11] or the proposed GA-Softmax, the
accuracy can be further boosted. One may notice that A-Softmax is actually cosine GA-Softmax. The
superior performance of A-Softmax with SphereNet shows that our architecture is more suitable for
learning with angular losses. Moreover, our proposed large-margin loss (linear GA-Softmax) performs
the best among all the compared loss functions.
Comparison of different network architectures. We are also interested in how our SphereConv
operators work in different architectures. We evaluate all the proposed SphereConv operators with
the same architecture at different depths and with a totally different architecture (ResNet). Our baseline
CNN architecture follows the design of the VGG network [18], only with different numbers of convolutional layers.
For fair comparison, we use cosine W-Softmax for all SphereConv operators and the original softmax
for the original convolution operators. From the results in Table 2, one can see that SphereNets greatly
outperform the CNN baselines, usually with more than 1% improvement. When applied to ResNet,
our SphereConv operators also work better than the baseline. Note that we use a ResNet
architecture similar to the CIFAR-10 experiment in [6]. We do not use data augmentation for CIFAR-10
in this experiment, so the ResNet accuracy is much lower than the one reported in [6]. Our results on
different network architectures show consistent and significant improvement over CNNs.
SphereConv Operator | CNN-3 | CNN-9 | CNN-18 | CNN-45 | CNN-60 | ResNet-32
Sigmoid (0.1)  | 82.08 | 91.13 | 91.43 | 89.34 | 87.67 | 90.94
Sigmoid (0.3)  | 81.92 | 91.28 | 91.55 | 89.73 | 87.85 | 91.7
Sigmoid (0.7)  | 82.4  | 91.18 | 91.69 | 89.85 | 88.42 | 91.19
Linear         | 82.31 | 91.15 | 91.24 | 90.15 | 89.91 | 91.25
Cosine         | 82.23 | 90.99 | 91.23 | 90.05 | 89.28 | 91.38
Original Conv  | 81.19 | 90.68 | 90.62 | 88.23 | 88.15 | 90.40

Table 2: Classification accuracy (%) with different network architectures.

SphereConv Operator | Acc. (%)
Sigmoid (0.1)  | 86.29
Sigmoid (0.3)  | 85.67
Sigmoid (0.7)  | 85.51
Linear         | 85.34
Cosine         | 85.25
CNN w/o ReLU   | 80.73

Table 3: Acc. w/o ReLU.
Comparison of different widths (number of filters). We evaluate the SphereNet with different
numbers of filters. Fig. 3(c) shows the convergence of SphereNets of different widths. 16/32/48 means
conv1.x, conv2.x and conv3.x have 16, 32 and 48 filters, respectively. One can observe that when
the number of filters is small, SphereNet performs similarly to CNNs (slightly worse). However,
as we increase the number of filters, the final accuracy surpasses the CNN baseline, with even faster
and more stable convergence. With large width, we find that SphereNets perform
consistently better than the CNN baselines, showing that SphereNets can make better use of the width.
Figure 3: Testing accuracy over iterations. (a) ResNet vs. SphereResNet. (b) Plain CNN vs. plain SphereNet. (c) Different width of SphereNet. (d) Ultra-deep plain CNN vs. ultra-deep plain SphereNet.

Learning without ReLU. We notice that SphereConv operators are no longer matrix multiplications,
so they are essentially non-linear functions. Because the SphereConv operators already introduce certain
non-linearity to the network, we evaluate how much gain such non-linearity brings. Therefore, we
remove the ReLU activation and compare our SphereNet with the CNNs without ReLU. The results
are given in Table 3. All the compared methods use 18-layer CNNs (with BatchNorm). Although
removing ReLU greatly reduces the classification accuracy, our SphereNet still outperforms the CNN
without ReLU by a significant margin, showing its rich non-linearity and representation power.
Convergence. One of the most significant advantages of SphereNet is its training stability and
convergence speed. We evaluate the convergence with two different architectures: CNN-9 and
ResNet-32. For fair comparison, we use the original softmax loss for all compared methods (including
SphereNets). ADAM is used for the stochastic optimization and the learning rate is the same for all
networks. From Fig. 3(a), the SphereResNet converges significantly faster than the original ResNet
baseline on both CIFAR-10 and CIFAR-10+, and the final accuracy is also higher than the baselines'.
In Fig. 3(b), we evaluate the SphereNet with and without orthogonality constraints on the kernel weights.
With the same network architecture, SphereNet also converges much faster and performs better
than the baselines. The orthogonality constraints can also bring performance gains in some cases.
Generally, from Fig. 3, one can observe that the SphereNet converges quickly and very stably in
every case, while the CNN baseline fluctuates in a relatively wide range.
Optimizing ultra-deep networks. Partially because of the alleviation of the covariate shift problem
and the improvement in conditioning, our SphereNet is able to optimize ultra-deep neural networks
without using residual units or any form of shortcuts. For SphereNets, we use the cosine SphereConv
operator with the cosine W-Softmax loss. We directly optimize a very deep plain network with 69
stacked convolutional layers. From Fig. 3(d), one can see that the convergence of SphereNet is much
easier than that of the CNN baseline, and the SphereNet is able to achieve nearly 90% final accuracy.
4.3 Preliminary Study towards Learnable SphereConv

Although the learnable SphereConv is not a main theme of this paper, we still run some preliminary
evaluations on it. For the proposed learnable sigmoid SphereConv, we learn the parameter k
independently for each filter. It is also trivial to learn it in a layer-shared or network-shared fashion.
With the same 9-layer architecture used in Section 4.2, the learnable SphereConv (with cosine
W-Softmax loss) achieves 91.64% on CIFAR-10 (without full data augmentation), while the best
sigmoid SphereConv (with cosine W-Softmax loss) achieves 91.22%. In Fig. 4, we also plot the
frequency histogram of k in Conv1.1 (64 filters), Conv2.1 (96 filters) and Conv3.1 (128 filters) of
the final learned SphereNet.

Figure 4: Frequency histogram of k.
From Fig. 4, we observe that each layer learns a different distribution of k. The first convolutional
layer (Conv1.1) tends to distribute k uniformly over a large range of values from 0 to 1, potentially
extracting information from all levels of angular similarity. The fourth convolutional layer (Conv2.1)
tends to learn a more concentrated distribution of k than Conv1.1, while the seventh convolutional
layer (Conv3.1) learns a highly concentrated distribution of k centered around 0.8. Note that
we initialize all k with a constant 0.5 and learn them with back-prop.
4.4 Evaluation of SphereNorm
From Section 4.2, we could clearly see the convergence advantage of SphereNets. In general, we can
view the SphereConv as a normalization method (comparable to batch normalization) that can be
applied to all kinds of networks. This section evaluates the challenging scenarios where the mini-batch
size is small (results under a batch size of 128 can be found in Section 4.2), and we use the same
9-layer CNN as in Section 4.2. For simplicity, we use the cosine SphereConv as SphereNorm. The
softmax loss is used in both CNNs and SphereNets. From Fig. 5, we can observe that SphereNorm
achieves a final accuracy similar to BatchNorm's, but SphereNorm converges faster and more stably.
SphereNorm plus the orthogonality constraint helps convergence a little, while rescaled SphereNorm
does not seem to work well. When BatchNorm and SphereNorm are used together, we obtain the
fastest convergence and the highest final accuracy, showing the excellent compatibility of SphereNorm.

Figure 5: Convergence under different mini-batch sizes on the CIFAR-10 dataset (same setting as Section 4.2).
4.5 Image Classification on CIFAR-10+ and CIFAR-100
We first evaluate the SphereNet in a classic image classification task. We use the CIFAR-10+ and
CIFAR-100 datasets and perform random flip (both horizontal and vertical) and random crop as data
augmentation (CIFAR-10 with full data augmentation is denoted as CIFAR-10+). We use ResNet-32
as a baseline architecture. For the SphereNet of the same architecture, we evaluate the sigmoid
SphereConv operator (k = 0.3) with the sigmoid W-Softmax (k = 0.3) loss (S-SW), the linear
SphereConv operator with the linear W-Softmax loss (L-LW), the cosine SphereConv operator with
the cosine W-Softmax loss (C-CW), and the sigmoid SphereConv operator (k = 0.3) with the
GA-Softmax loss (S-G). In Table 4, we can see that the SphereNet outperforms many current
state-of-the-art methods and is even comparable to ResNet-1001, which is far deeper than ours. This
experiment further validates our idea that learning on hyperspheres constrains the parameter space to
a more semantic and label-related one.

Method | CIFAR-10+ | CIFAR-100
ELU [2]                      | 94.16 | 72.34
FitResNet (LSUV) [14]        | 93.45 | 65.72
ResNet-1001 [7]              | 95.38 | 77.29
Baseline ResNet-32 (softmax) | 93.26 | 72.85
SphereResNet-32 (S-SW)       | 94.47 | 76.02
SphereResNet-32 (L-LW)       | 94.33 | 75.62
SphereResNet-32 (C-CW)       | 94.64 | 74.92
SphereResNet-32 (S-G)        | 95.01 | 76.39

Table 4: Acc. (%) on CIFAR-10+ & CIFAR-100.
4.6 Large-scale Image Classification on ImageNet-2012
We evaluate SphereNets on the large-scale ImageNet-2012 dataset. We only use the minimum data augmentation strategy in the experiment (details are in Appendix B). For the ResNet-18 baseline and SphereResNet-18, we use the same filter numbers in each layer. We develop two types of SphereResNet-18, termed v1 and v2 respectively. In SphereResNet-18-v2, we do not use SphereConv in the 1 x 1 shortcut convolutions which are used to match the number of channels. In SphereResNet-18-v1, we use SphereConv in the 1 x 1 shortcut convolutions. Fig. 6 shows the single crop validation error over iterations. One can observe that both SphereResNets converge much faster than the ResNet baseline; SphereResNet-18-v1 converges the fastest but yields a slightly worse yet comparable accuracy. SphereResNet-18-v2 not only converges faster than ResNet-18, but also shows slightly better accuracy.

[Figure 6: Validation error (%) on ImageNet. Top-1 and top-5 error vs. iteration (x10^5) for ResNet-18, SphereResNet-18-v1 and SphereResNet-18-v2.]
5 Limitations and Future Work
Our work still has some limitations: (1) SphereNets show a large performance gain when the network is wide enough; if the network is not wide enough, SphereNets still converge much faster but yield slightly worse (though still comparable) recognition accuracy. (2) The computation complexity of each neuron is slightly higher than in standard CNNs. (3) SphereConvs are still mostly prefixed. Possible future work includes designing/learning a better SphereConv, efficiently computing the angles to reduce computation complexity, applications to tasks that require fast convergence (e.g. reinforcement learning and recurrent neural networks), better angular regularization to replace orthogonality, etc.
Acknowledgements
We thank Zhen Liu (Georgia Tech) for helping with the experiments and providing suggestions. This
project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER
IIS-1350983, NSF IIS-1639792 EAGER, NSF CNS-1704701, ONR N00014-15-1-2340, Intel ISTC,
NVIDIA and Amazon AWS. Xingguo Li is supported by doctoral dissertation fellowship from
University of Minnesota. Yan-Ming Zhang is supported by the National Natural Science Foundation
of China under Grant 61773376.
References
[1] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic
image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
[2] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv:1511.07289, 2015.
[3] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate
object detection and semantic segmentation. In CVPR, 2014.
[4] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural
networks. In Aistats, 2010.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification. In ICCV, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In CVPR, 2016.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks.
arXiv:1603.05027, 2016.
[8] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In ICML, 2015.
[9] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[10] Xingguo Li, Zhaoran Wang, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, and Tuo Zhao. Symmetry,
saddle points, and global geometry of nonconvex matrix factorization. arXiv:1612.09296, 2016.
[11] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep
hypersphere embedding for face recognition. In CVPR, 2017.
[12] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional
neural networks. In ICML, 2016.
[13] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[14] Dmytro Mishkin and Jiri Matas. All you need is a good init. arXiv:1511.06422, 2015.
[15] Yuji Nakatsukasa. Eigenvalue perturbation bounds for hermitian block tridiagonal matrices. Applied Numerical Mathematics, 62(1):67–78, 2012.
[16] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection
with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
[17] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang,
Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge.
IJCV, pages 1–42, 2014.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
[19] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[20] Andreas Veit, Michael J Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively
shallow networks. In NIPS, 2016.
[21] Di Xie, Jiang Xiong, and Shiliang Pu. All you need is beyond a good init: Exploring better solution for training extremely deep convolutional neural networks with orthonormality and modulation. arXiv:1703.01827, 2017.
Learning Deep Structured Multi-Scale Features using
Attention-Gated CRFs for Contour Prediction
Dan Xu1, Wanli Ouyang2, Xavier Alameda-Pineda3, Elisa Ricci4, Xiaogang Wang5, Nicu Sebe1
1 The University of Trento, 2 The University of Sydney, 3 Perception Group, INRIA, 4 University of Perugia, 5 The Chinese University of Hong Kong
[email protected], [email protected], [email protected]
[email protected], [email protected], [email protected]
Abstract
Recent works have shown that exploiting multi-scale representations deeply learned
via convolutional neural networks (CNN) is of tremendous importance for accurate
contour detection. This paper presents a novel approach for predicting contours
which advances the state of the art in two fundamental aspects, i.e. multi-scale
feature generation and fusion. Different from previous works directly considering multi-scale feature maps obtained from the inner layers of a primary CNN
architecture, we introduce a hierarchical deep model which produces richer
and complementary representations. Furthermore, to refine and robustly fuse the
representations learned at different scales, the novel Attention-Gated Conditional
Random Fields (AG-CRFs) are proposed. The experiments ran on two publicly
available datasets (BSDS500 and NYUDv2) demonstrate the effectiveness of the
latent AG-CRF model and of the overall hierarchical framework.
1 Introduction
Considered as one of the fundamental tasks in low-level vision, contour detection has been deeply
studied in the past decades. While early works mostly focused on low-level cues (e.g. colors, gradients,
textures) and hand-crafted features [3, 25, 22], more recent methods benefit from the representational
power of deep learning models [31, 2, 38, 19, 24]. The ability to effectively exploit multi-scale
feature representations is considered a crucial factor for achieving accurate predictions of contours
in both traditional [29] and CNN-based [38, 19, 24] approaches. Restricting the attention on deep
learning-based solutions, existing methods [38, 24] typically derive multi-scale representations by
adopting standard CNN architectures and considering directly the feature maps associated to different
inner layers. These maps are highly complementary: while the features from the first layers are
responsible for predicting fine details, the ones from the higher layers are devoted to encode the
basic structure of the objects. Traditionally, concatenation and weighted averaging are very popular
strategies to combine multi-scale representations (see Fig. 1.a). While these strategies typically lead
to an increased detection accuracy with respect to single-scale models, they severely simplify the
complex relationship between multi-scale feature maps.
The motivational cornerstone of this study is the following research question: is it worth modeling
and exploiting complex relationships between multiple scales of a deep representation for contour
detection? In order to provide an answer and inspired by recent works exploiting graphical models
within deep learning architectures [5, 39], we introduce Attention-Gated Conditional Random Fields
(AG-CRFs), which allow to learn robust feature map representations at each scale by exploiting the information available from other scales. This is achieved by incorporating an attention mechanism [27]
seamlessly integrated into the multi-scale learning process under the form of gates [26]. Intuitively,
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the attention mechanism will further enhance the quality of the learned multi-scale representation,
thus improving the overall performance of the model.
We integrated the proposed AG-CRFs into a two-level hierarchical CNN model, defining a novel
Attention-guided Multi-scale Hierarchical deepNet (AMH-Net) for contour detection. The hierarchical network is able to learn richer multi-scale features than conventional CNNs, the representational
power of which is further enhanced by the proposed AG-CRF model. We evaluate the effectiveness
of the overall model on two publicly available datasets for the contour detection task, i.e. BSDS500
[1] and NYU Depth v2 [33]. The results demonstrate that our approach is able to learn rich and
complementary features, thus outperforming state-of-the-art contour detection methods.
Related work. In the last few years several deep learning models have been proposed for detecting
contours [31, 2, 41, 38, 24, 23]. Among these, some works explicitly focused on devising multi-scale
CNN models in order to boost performance. For instance, the Holistically-Nested Edge Detection
method [38] employed multiple side outputs derived from the inner layers of a primary CNN and
combine them for the final prediction. Liu et al. [23] introduced a framework to learn rich deep
representations by concatenating features derived from all convolutional layers of VGG16. Bertasius
et al. [2] considered skip-layer CNNs to jointly combine feature maps from multiple layers. Maninis
et al. [24] proposed Convolutional Oriented Boundaries (COB), where features from different layers
are fused to compute oriented contours and region hierarchies. However, these works combine the
multi-scale representations from different layers adopting concatenation and weighted averaging
schemes while not considering the dependency between the features. Furthermore, these works do
not focus on generating more rich and diverse representations at each CNN layer.
The combination of multi-scale representations has been also widely investigated for other pixel-level
prediction tasks, such as semantic segmentation [43], visual saliency detection [21] and monocular
depth estimation [39], and different deep architectures have been designed. For instance, to effectively
aggregate the multi-scale information, Yu et al. [43] introduced dilated convolutions. Yang et al. [42]
proposed DAG-CNNs where multi-scale feature outputs from different ReLU layers are combined
through element-wise addition operator. However, none of these works incorporate an attention
mechanism into a multi-scale structured feature learning framework.
Attention models have been successfully exploited in deep learning for various tasks such as image
classification [37], speech recognition [4] and image caption generation [40]. However, to our
knowledge, this work is the first to introduce an attention model for estimating contours. Furthermore,
we are not aware of previous studies integrating the attention mechanism into a probabilistic (CRF)
framework to control the message passing between hidden variables. We model the attention as
gates [26], which have been used in previous deep models such as restricted Boltzman machine
for unsupervised feature learning [35], LSTM for sequence learning [12, 6] and CNN for image
classification [44]. However, none of these works explore the possibility of jointly learning multi-scale
deep representations and an attention model within a unified probabilistic graphical model.
2 Attention-Gated CRFs for Deep Structured Multi-Scale Feature Learning
2.1 Problem Definition and Notation
Given an input image $I$ and a generic front-end CNN model with parameters $W_c$, we consider a set of $S$ multi-scale feature maps $F = \{f_s\}_{s=1}^S$. Being a generic framework, these feature maps can be the output of $S$ intermediate CNN layers or of another representation, thus $s$ is a virtual scale. The feature map at scale $s$, $f_s$, can be interpreted as a set of feature vectors, $f_s = \{f_s^i\}_{i=1}^N$, where $N$ is the number of pixels. Opposite to previous works adopting simple concatenation or weighted averaging schemes [16, 38], we propose to combine the multi-scale feature maps by learning a set of latent feature maps $h_s = \{h_s^i\}_{i=1}^N$ with a novel Attention-Gated CRF model sketched in Fig. 1.
Intuitively, this allows a joint refinement of the features by flowing information between different
scales. Moreover, since the information from one scale may or may not be relevant for the pixels at
another scale, we utilise the concept of gate, previously introduced in the literature in the case of
graphical models [36], in our CRF formulation. These gates are binary random hidden variables that
permit or block the flow of information between scales at every pixel. Formally, $g_{s_e,s_r}^i \in \{0,1\}$ is the gate at pixel $i$ of scale $s_r$ (receiver) from scale $s_e$ (emitter), and we also write $g_{s_e,s_r} = \{g_{s_e,s_r}^i\}_{i=1}^N$. Precisely, when $g_{s_e,s_r}^i = 1$ the hidden variable $h_{s_r}^i$ is updated taking (also) into account the
[Figure 1: An illustration of different schemes for multi-scale deep feature learning and fusion. (a) the traditional approach (e.g. concatenation, weighted average), (b) CRF implementing multi-scale feature fusion, (c) the proposed AG-CRF-based approach.]
information from the $s_e$-th layer, i.e. $h_{s_e}$. As shown in the following, the joint inference of the hidden
features and the gates leads to estimating the optimal features as well as the corresponding attention
model, hence the name Attention-Gated CRFs.
2.2 Attention-Gated CRFs
Given the observed multi-scale feature maps $F$ of image $I$, the objective is to estimate the hidden multi-scale representation $H = \{h_s\}_{s=1}^S$ and, accessorily, the attention gate variables $G = \{g_{s_e,s_r}\}_{s_e,s_r=1}^S$. To do that, we formalize the problem within a conditional random field framework and write the Gibbs distribution as $P(H, G \mid I, \Theta) = \exp\left(-E(H, G, I, \Theta)\right) / Z(I, \Theta)$, where $\Theta$ is the set of parameters and $E$ is the energy function. As usual, we exploit both unary and binary potentials to couple the hidden variables between them and to the observations. Importantly, the proposed binary potential is gated, and thus only active when the gate is open. More formally, the general form^1 of the energy function writes:
$$E(H, G, I, \Theta) = \underbrace{\sum_{s}\sum_{i} \phi_h(h_s^i, f_s^i)}_{\text{unary potential}} + \underbrace{\sum_{s_e,s_r}\sum_{i,j} g_{s_e,s_r}^i\, \psi_h(h_{s_r}^i, h_{s_e}^j)}_{\text{gated pairwise potential}}. \qquad (1)$$
The first term of the energy function is a classical unary term that relates the hidden features to the
observed multi-scale CNN representations. The second term synthesizes the theoretical contribution
of the present study because it conditions the effect of the pair-wise potential $\psi_h(h_{s_e}^i, h_{s_r}^j)$ upon the gate hidden variable $g_{s_e,s_r}^i$. Fig. 1c depicts the model formulated in Eq. (1). If we remove the attention gate variables, it becomes a general multi-scale CRF, as shown in Fig. 1b.
Given that formulation, and as is typically the case in conditional random fields, we exploit the mean-field approximation in order to derive a tractable inference procedure. Under this generic form, the mean-field inference procedure writes:
$$q(h_s^i) \propto \exp\Big( \phi_h(h_s^i, f_s^i) + \sum_{s' \neq s} \sum_{j} \mathbb{E}_{q(g_{s',s}^i)}\{g_{s',s}^i\}\, \mathbb{E}_{q(h_{s'}^j)}\{\psi_h(h_s^i, h_{s'}^j)\} \Big), \qquad (2)$$
$$q(g_{s',s}^i) \propto \exp\Big( g_{s',s}^i\, \mathbb{E}_{q(h_s^i)}\Big\{ \sum_{j} \mathbb{E}_{q(h_{s'}^j)}\big\{\psi_h(h_s^i, h_{s'}^j)\big\} \Big\} \Big), \qquad (3)$$
where $\mathbb{E}_q$ stands for the expectation with respect to the distribution $q$.
Before deriving these formulae for our precise choice of potentials, we remark that, since the gate is a binary variable, the expectation of its value is the same as $q(g_{s',s}^i = 1)$. By defining
$$M_{s',s}^i = \mathbb{E}_{q(h_s^i)}\Big\{ \sum_{j} \mathbb{E}_{q(h_{s'}^j)}\big\{\psi_h(h_s^i, h_{s'}^j)\big\} \Big\},$$
the expected value of the gate writes:
$$\bar\alpha_{s,s'}^i = \mathbb{E}_{q(g_{s',s}^i)}\{g_{s',s}^i\} = \frac{q(g_{s',s}^i = 1)}{q(g_{s',s}^i = 0) + q(g_{s',s}^i = 1)} = \sigma\big(M_{s',s}^i\big), \qquad (4)$$
where $\sigma(\cdot)$ denotes the sigmoid function. This finding is especially relevant in the framework of CNNs since many of the attention models are obtained after applying the sigmoid function to the features derived from a feed-forward network. Importantly, since the quantity $M_{s',s}^i$ depends on the expected values of the hidden features $h_s^i$, the AG-CRF framework extends the unidirectional connection from the features to the attention model to a bidirectional connection in which the expected value of the gate allows to refine the distribution of the hidden features as well.
^1 One could certainly include a unary potential for the gate variables as well. However, this would imply that there is a way to set/learn the a priori distribution of opening/closing a gate. In practice we did not observe any notable difference between using or skipping the unary potential on $g$.
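Since the gate is binary with $q(g=1) \propto \exp(M)$ and $q(g=0) \propto \exp(0) = 1$, Eq. (4) reduces to a sigmoid of $M$; a one-line sketch (the function name is ours):

```python
import math

def gate_expectation(M):
    """Mean-field expectation of a binary gate (Eq. 4): with
    q(g = 1) proportional to exp(M) and q(g = 0) proportional to exp(0) = 1,
    the normalized expectation is E[g] = exp(M) / (1 + exp(M)) = sigmoid(M)."""
    return 1.0 / (1.0 + math.exp(-M))
```

An uninformative message (M = 0) leaves the gate half open, while a strongly positive or negative message drives the gate towards fully open or fully closed.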
2.3 AG-CRF Inference
In order to construct an operative model we need to define the unary and gated potentials $\phi_h$ and $\psi_h$. In our case, the unary potential corresponds to an isotropic Gaussian:
$$\phi_h(h_s^i, f_s^i) = -\frac{a_s^i}{2}\, \|h_s^i - f_s^i\|^2, \qquad (5)$$
where $a_s^i > 0$ is a weighting factor.
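The unary term of Eq. (5) is simply a scaled squared distance between the hidden feature and the observed one; a minimal sketch for list-valued features (names are ours):

```python
def unary_potential(h, f, a):
    """phi_h(h, f) = -(a / 2) * ||h - f||^2 (Eq. 5) for list-valued
    features; maximal (zero) when the hidden feature matches the
    observed one, and increasingly negative as they diverge."""
    return -0.5 * a * sum((hi - fi) ** 2 for hi, fi in zip(h, f))
```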
The gated binary potential is specifically designed for a two-fold objective. On the one hand, we
would like to learn and further exploit the relationships between hidden vectors at the same, as well
as at different scales. On the other hand, we would like to exploit previous knowledge on attention
models and include linear terms in the potential. Indeed, this would implicitly shape the gate variable
to include a linear operator on the features. Therefore, we chose a bilinear potential:
$$\psi_h(h_s^i, h_{s'}^j) = \bar h_s^{i\top} K_{s,s'}^{i,j}\, \bar h_{s'}^j, \qquad (6)$$
where $\bar h_s^i = (h_s^{i\top}, 1)^\top$ and $K_{s,s'}^{i,j} \in \mathbb{R}^{(C_s+1)\times(C_{s'}+1)}$, $C_s$ being the size, i.e. the number of channels, of the representation at scale $s$. If we write this matrix as $K_{s,s'}^{i,j} = \big(L_{s,s'}^{i,j},\, l_{s,s'}^{i,j};\; l_{s',s}^{j,i\top},\, 1\big)$, then $L_{s,s'}^{i,j}$ exploits the relationships between hidden variables, while $l_{s,s'}^{i,j}$ and $l_{s',s}^{j,i}$ implement the classically used linear relationships of the attention models. In other words, $\psi_h$ models the pair-wise relationships between features with the upper-left block of the matrix. Furthermore, $\psi_h$ takes into account the linear relationships by completing the hidden vectors with the unity. In all, the energy function writes:
$$E(H, G, I, \Theta) = -\sum_{s}\sum_{i} \frac{a_s^i}{2}\, \|h_s^i - f_s^i\|^2 + \sum_{s_e,s_r}\sum_{i,j} g_{s_e,s_r}^i\, \bar h_{s_r}^{i\top} K_{s_r,s_e}^{i,j}\, \bar h_{s_e}^j. \qquad (7)$$
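Expanding the block structure of $K$ in Eq. (6) for one-dimensional features gives a quadratic term, two linear (attention) terms and a constant; a toy sketch (the scalar stand-ins for $L$ and $l$ are our simplification):

```python
def bilinear_potential(h_s, h_sp, L, l, l_rev):
    """psi_h = hbar_s^T K hbar_s' with hbar = (h, 1) and
    K = [[L, l], [l_rev, 1]] (Eq. 6), written out for 1-D features:
    a quadratic term, the two linear (attention) terms and a constant."""
    return h_s * L * h_sp + h_s * l + l_rev * h_sp + 1.0
```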
Under these potentials, we can consequently update the mean-field inference equations to:
$$q(h_s^i) \propto \exp\Big( -\frac{a_s^i}{2}\big( \|h_s^i\|^2 - 2\, h_s^{i\top} f_s^i \big) + \sum_{s' \neq s}\sum_{j} \bar\alpha_{s,s'}^i\, h_s^{i\top}\big( L_{s,s'}^{i,j} \bar h_{s'}^j + l_{s,s'}^{i,j} \big) \Big), \qquad (8)$$
where $\bar h_{s'}^j$ is the expected a posteriori value of $h_{s'}^j$.
The previous expression implies that the a posteriori distribution for $h_s^i$ is a Gaussian. The mean vector of the Gaussian and the function $M$ write:
$$\bar h_s^i = \frac{1}{a_s^i}\Big( a_s^i f_s^i + \sum_{s' \neq s}\sum_{j} \bar\alpha_{s,s'}^i\big( L_{s,s'}^{i,j} \bar h_{s'}^j + l_{s,s'}^{i,j} \big) \Big), \qquad M_{s',s}^i = \sum_{j}\Big( \bar h_s^{i\top} L_{s,s'}^{i,j} \bar h_{s'}^j + \bar h_s^{i\top} l_{s,s'}^{i,j} + \bar h_{s'}^{j\top} l_{s',s}^{j,i} \Big),$$
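Putting the gate expectation and the Gaussian mean update together, one mean-field sweep can be sketched for scalar (single-channel) features as follows (a toy illustration with hypothetical values; names and the scalar simplification are ours, not the multi-channel implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean_field_sweep(h, f, a, L, l, l_rev):
    """One mean-field sweep over all scales for scalar (1-channel)
    features. h, f, a: dicts scale -> current mean / observed feature /
    unary weight; L, l, l_rev: scalar stand-ins for L_{s,s'}, l_{s,s'}
    and l_{s',s}. For each ordered pair (s', s) we compute M_{s',s},
    the gate expectation alpha = sigmoid(M) (Eq. 4), and then the
    Gaussian mean update for h_s."""
    alpha = {}
    for s in h:
        for sp in h:
            if sp == s:
                continue
            # M_{s',s}: expected pairwise potential driving the gate
            M = h[s] * L * h[sp] + h[s] * l + h[sp] * l_rev
            alpha[(sp, s)] = sigmoid(M)
            # Gaussian mean: f_s plus the gated, scaled message from s'
            h[s] = f[s] + alpha[(sp, s)] * (L * h[sp] + l) / a[s]
    return h, alpha

h = {0: 0.5, 1: -0.2}
f = dict(h)
a = {0: 2.0, 1: 2.0}
h, alpha = mean_field_sweep(h, f, a, L=0.3, l=0.1, l_rev=0.1)
```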
which concludes the inference procedure. Furthermore, the proposed framework can be simplified to
obtain the traditional attention models. In most of the previous studies, the attention variables are
computed directly from the multi-scale features instead of computing them from the hidden variables.
Indeed, since many of these studies do not propose a probabilistic formulation, there are no hidden
variables and the attention is computed sequentially through the scales. We can emulate the same
behavior within the AG-CRF framework by modifying the gated potential as follows:
$$\hat\psi_h(h_s^i, h_{s'}^j, f_s^i, f_{s'}^j) = h_s^i L_{s,s'}^{i,j} h_{s'}^j + f_s^{i\top} l_{s,s'}^{i,j} + f_{s'}^{j\top} l_{s',s}^{j,i}. \qquad (9)$$
This means that we keep the pair-wise relationships between hidden variables (as in any CRF) and let
the attention model be generated by a linear combination of the observed features from the CNN, as it
is traditionally done. The changes in the inference procedure are straightforward and reported in the
supplementary material due to space constraints. We refer to this model as partially-latent AG-CRFs
(PLAG-CRFs), whereas the more general one is denoted as fully-latent AG-CRFs (FLAG-CRFs).
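The posterior-mean recursion in Eq. (8) can be sketched in a few lines of NumPy; the dictionary bookkeeping, toy shapes and the function name are our own illustration, not the paper's Caffe implementation:

```python
import numpy as np

def mean_field_update(f_s, a_s, h_bar, g_bar, L, l):
    """One mean-field update of the posterior mean for scale s (Eq. 8):
    h_bar_s = f_s + (1/a_s) * sum_{s'} g_bar[s'] * (L[s'] @ h_bar[s'] + l[s'])."""
    msg = sum(g_bar[sp] * (L[sp] @ h_bar[sp] + l[sp]) for sp in h_bar)
    return f_s + msg / a_s
```

Iterating this update over all scales until the means stabilize corresponds to running the mean-field inference.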
2.4 Implementation with neural network for joint learning
In order to infer the hidden variables and learn the parameters of the AG-CRFs together with those
of the front-end CNN, we implement the AG-CRF updates as neural network operations, in several steps:
[Figure 2 here: schematic of AMH-Net. A front-end CNN feeds two hierarchies of AG-CRF modules through blocks labeled C (convolution), D (deconvolution), M (max-pooling) and L (loss).]
Figure 2: An overview of the proposed AMH-Net for contour detection.
(i) message passing from the $s_e$-th scale to the current $s_r$-th scale is performed with $\mathbf{h}_{s_e \to s_r} \leftarrow \mathbf{L}_{s_e \to s_r} \circledast \mathbf{h}_{s_e}$, where $\circledast$ denotes the convolutional operation and $\mathbf{L}_{s_e \to s_r}$ denotes the corresponding convolution kernel; (ii) attention map estimation $q(g_{s_e,s_r} = 1) \leftarrow \sigma\big(\mathbf{h}_{s_r} \odot (\mathbf{L}_{s_e \to s_r} \circledast \mathbf{h}_{s_e}) + \mathbf{l}_{s_e \to s_r} \circledast \mathbf{h}_{s_e} + \mathbf{l}_{s_r \to s_e} \circledast \mathbf{h}_{s_r}\big)$, where $\mathbf{L}_{s_e \to s_r}$, $\mathbf{l}_{s_e \to s_r}$ and $\mathbf{l}_{s_r \to s_e}$ are convolution kernels and $\odot$ represents the element-wise product operation; and (iii) attention-gated message passing from other scales and adding the unary term: $\bar{\mathbf{h}}_{s_r} = \mathbf{f}_{s_r} \oplus \mathbf{a}_{s_r} \sum_{s_e \neq s_r} \big(q(g_{s_e,s_r} = 1) \odot \mathbf{h}_{s_e \to s_r}\big)$, where $\mathbf{a}_{s_r}$ encodes the effect of the $a^i_{s_r}$ for weighting the message and can be implemented as a $1 \times 1$ convolution. The symbol $\oplus$ denotes element-wise addition. In order to simplify the overall inference procedure, and because the magnitude of the linear term of $\psi_h$ is in practice negligible compared to the quadratic term, we discard the message associated to the linear term. When the inference is complete, the final
estimate is obtained by convolving all the scales.
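A rough NumPy sketch of one update following steps (i)-(iii), with per-pixel scalar weights standing in for the learned 3 x 3 convolutions (all weight names are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ag_crf_step(f_r, h, r, w_msg, w_att_e, w_att_r, a_r=0.1):
    """One attention-gated update for receiver scale r.
    (i)   message from each emitter e:     m_e = w_msg[e] * h[e]
    (ii)  attention gate:                  q_e = sigmoid(h[r]*m_e + w_att_e[e]*h[e] + w_att_r[e]*h[r])
    (iii) gated aggregation + unary term:  h_r = f_r + a_r * sum_e q_e * m_e
    """
    total = np.zeros_like(f_r)
    for e in h:
        if e == r:
            continue
        m = w_msg[e] * h[e]
        q = sigmoid(h[r] * m + w_att_e[e] * h[e] + w_att_r[e] * h[r])
        total = total + q * m
    return f_r + a_r * total
```

With all message weights set to zero the update reduces to the unary term, i.e. it returns the CNN features unchanged.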
3 Exploiting AG-CRFs with a Multi-scale Hierarchical Network
AMH-Net Architecture. The proposed Attention-guided Multi-scale Hierarchical Network (AMH-Net), as sketched in Figure 2, consists of a multi-scale hierarchical network (MH-Net) together with
the AG-CRF model described above. The MH-Net is constructed from a front-end CNN architecture
such as the widely used AlexNet [20], VGG [34] and ResNet [17]. One prominent feature of MH-Net
is its ability to generate richer multi-scale representations. In order to do that, we perform distinct
non-linear mappings (deconvolution D, convolution C and max-pooling M) upon $f_l$, the CNN
feature representation from an intermediate layer $l$ of the front-end CNN. This leads to a three-way
representation: $f_l^D$, $f_l^C$ and $f_l^M$. Remarkably, while D upsamples the feature map, C maintains its
original size and M reduces it, and different kernel sizes are utilized for them to have different receptive
fields, thus naturally obtaining complementary inter- and multi-scale representations. The $f_l^C$ and
$f_l^M$ are further aligned to the dimensions of the feature map $f_l^D$ by the deconvolutional operation.
The hierarchy is implemented in two levels. The first level uses an AG-CRF model to fuse the three
representations of each layer l, thus refining the CNN features within the same scale. The second
level of the hierarchy uses an AG-CRF model to fuse the information coming from multiple CNN
layers. The proposed hierarchical multi-scale structure is general purpose and able to involve an
arbitrary number of layers and of diverse intra-layer representations.
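As a toy illustration of the three-way decomposition, the function below mimics D, C and M with nearest-neighbour upsampling, the identity and max-pooling; the real network uses learned deconvolution/convolution kernels with different receptive fields:

```python
import numpy as np

def three_way(f, k=2):
    """Toy stand-in for the D/C/M mappings applied to a feature map f (H x W).
    D: upsample by k (nearest neighbour, in place of deconvolution),
    C: keep the size (identity, in place of convolution),
    M: reduce by k (k x k max-pooling)."""
    fD = np.repeat(np.repeat(f, k, axis=0), k, axis=1)
    fC = f.copy()
    H, W = f.shape
    fM = f[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).max(axis=(1, 3))
    return fD, fC, fM
```

The three outputs then play the role of the intra-layer representations fused by the first-level AG-CRF.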
End-to-End Network Optimization. The parameters of the model consist of the front-end CNN
parameters, Wc , the parameters to produce the richer decomposition from each layer l, Wl , the
parameters of the AG-CRFs of the first level of the hierarchy, $\{W_l^I\}_{l=1}^{L}$, and the parameters of
the AG-CRFs of the second level of the hierarchy, $W^{II}$. $L$ is the number of intermediate layers
used from the front-end CNN. In order to jointly optimize all these parameters we adopt deep
supervision [38] and we add an optimization loss associated to each AG-CRF module. In addition,
since the contour detection problem is highly unbalanced, i.e. contour pixels are significantly less than
non-contour pixels, we employ the modified cross-entropy loss function of [38]. Given a training data
set $\mathcal{D} = \{(I_p, E_p)\}_{p=1}^{P}$ consisting of $P$ RGB-contour groundtruth pairs, the loss function $\ell$ writes:
$$\ell(\mathbf{W}) = \sum_p \Big( -\beta \sum_{e_p^k \in E_p^+} \log P\big(e_p^k = 1 \mid I_p; \mathbf{W}\big) \;-\; (1-\beta) \sum_{e_p^k \in E_p^-} \log P\big(e_p^k = 0 \mid I_p; \mathbf{W}\big) \Big), \qquad (10)$$
where $\beta = |E_p^-|/(|E_p^+| + |E_p^-|)$, $E_p^+$ is the set of contour pixels of image $p$ and $\mathbf{W}$ is the set of
all parameters. The optimization is performed via the back-propagation algorithm with standard
stochastic gradient descent.
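For illustration, here is a NumPy sketch of a class-balanced cross-entropy on a single image in the spirit of Eq. (10) and [38]; the helper name and the exact weighting convention are our own simplification:

```python
import numpy as np

def balanced_bce(p, e, eps=1e-12):
    """Class-balanced cross-entropy for one image.
    p: predicted contour probabilities, e: binary ground-truth contour map.
    beta up-weights the rare positive (contour) class."""
    pos, neg = (e == 1), (e == 0)
    beta = neg.sum() / e.size          # fraction of non-contour pixels
    return -(beta * np.log(p[pos] + eps).sum()
             + (1.0 - beta) * np.log(1.0 - p[neg] + eps).sum())
```

Because contour pixels are rare, beta is close to 1, so errors on the positive class dominate the loss.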
AMH-Net for contour detection. After training of the whole AMH-Net, the optimized network
parameters $\mathbf{W}$ are used for the contour detection task. Given a new test image $I$, the $L+1$ classifiers
produce a set of contour prediction maps $\{\hat{E}_l\}_{l=1}^{L+1} = \text{AMH-Net}(I; \mathbf{W})$. The $\hat{E}_l$ are obtained
from the AG-CRFs with elementary operations as detailed in the supplementary material. We
take inspiration from [38] to fuse the multiple scale predictions, thus obtaining an average prediction
$\hat{E} = \sum_l \hat{E}_l/(L+1)$.
4 Experiments
4.1 Experimental Setup
Datasets. To evaluate the proposed approach we employ two different benchmarks: the BSDS500
and the NYUDv2 datasets. The BSDS500 dataset is an extended dataset based on BSDS300 [1]. It
consists of 200 training, 100 validation and 200 testing images. The groundtruth pixel-level labels for
each sample are derived considering multiple annotators. Following [38, 41], we use all the training
and validation images for learning the proposed model and perform data augmentation as described
in [38]. The NYUDv2 [33] contains 1449 RGB-D images and it is split into three subsets, comprising
381 training, 414 validation and 654 testing images. Following [38] in our experiments we employ
images at full resolution (i.e. 560 × 425 pixels) both in the training and in the testing phases.
Evaluation Metrics. During the test phase standard non-maximum suppression (NMS) [9] is first
applied to produce thinned contour maps. We then evaluate the detection performance of our approach
according to different metrics, including the F-measure at Optimal Dataset Scale (ODS) and Optimal
Image Scale (OIS) and the Average Precision (AP). The maximum tolerance allowed for correct
matches of edge predictions to the ground truth is set to 0.0075 for the BSDS500 dataset, and to .011
for the NYUDv2 dataset as in previous works [9, 14, 38].
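To clarify the two F-measure variants: ODS fixes a single binarization threshold for the whole dataset, while OIS selects the best threshold per image. A toy sketch (the official benchmark additionally matches edge pixels within the distance tolerance and aggregates match counts across images, rather than averaging F-measures):

```python
import numpy as np

def f_measure(p, r, eps=1e-12):
    return 2.0 * p * r / (p + r + eps)

def ods_ois(prec, rec):
    """prec, rec: arrays of shape (num_images, num_thresholds)."""
    f = f_measure(prec, rec)
    ods = f.mean(axis=0).max()   # best single dataset-wide threshold
    ois = f.max(axis=1).mean()   # best threshold chosen per image
    return ods, ois
```

By construction OIS is never below ODS, since each image is allowed its own optimal operating point.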
Implementation Details. The proposed AMH-Net is implemented under the deep learning framework Caffe [18]. The implementation code is available on Github2. The training and testing phases
are carried out on an Nvidia Titan X GPU with 12GB memory. The ResNet50 network pretrained on
ImageNet [8] is used to initialize the front-end CNN of AMH-Net. Due to memory constraints, our
implementation only considers three scales, i.e. we generate multi-scale features from three different
layers of the front-end CNN (i.e. res3d, res4f, res5c). In our CRF model we consider dependencies
between all scales. Within the AG-CRFs, the kernel size for all convolutional operations is set to
3 × 3 with stride 1 and padding 1. To simplify the model optimization, the parameters $a^i_{s_r}$ are set
as 0.1 for all scales during training. We choose this value as it corresponds to the best performance
after cross-validation in the range [0, 1]. The initial learning rate is set to 1e-7 in all our experiments,
and decreases 10 times after every 10k iterations. The total number of iterations for BSDS500 and
NYUD v2 is 40k and 30k, respectively. The momentum and weight decay parameters are set to 0.9
and 0.0002, as in [38]. As the training images have different resolution, we need to set the batch size
to 1, and for the sake of smooth convergence we updated the parameters only every 10 iterations.
4.2 Experimental Results
In this section, we present the results of our evaluation, comparing our approach with several state
of the art methods. We further conduct an in-depth analysis of our method, to show the impact of
different components on the detection performance.
Comparison with state of the art methods. We first consider the BSDS500 dataset and compare
the performance of our approach with several traditional contour detection methods, including
Felz-Hut [11], MeanShift [7], Normalized Cuts [32], ISCRA [30], gPb-ucm [1], SketchTokens [22],
2 https://github.com/danxuhk/AttentionGatedMulti-ScaleFeatureLearning
Figure 3: Qualitative results on the BSDS500 (left) and the NYUDv2 (right) test samples. The 2nd
(4th) and 3rd (6th) columns are the ground-truth and estimated contour maps respectively.
Table 1: BSDS500 dataset: quantitative results.

Method                 ODS    OIS    AP
Human                  .800   .800   -
Felz-Hutt [11]         .610   .640   .560
Mean Shift [7]         .640   .680   .560
Normalized Cuts [32]   .641   .674   .447
ISCRA [30]             .724   .752   .783
gPb-ucm [1]            .726   .760   .727
Sketch Tokens [22]     .727   .746   .780
MCG [28]               .747   .779   .759
DeepEdge [2]           .753   .772   .807
DeepContour [31]       .756   .773   .797
LEP [46]               .757   .793   .828
HED [38]               .788   .808   .840
CEDN [41]              .788   .804   .834
COB [24]               .793   .820   .859
RCF [23] (not comp.)   .811   .830   -
AMH-Net (fusion)       .798   .829   .869

Table 2: NYUDv2 dataset: quantitative results.

Method                  ODS    OIS    AP
gPb-ucm [1]             .632   .661   .562
OEF [15]                .651   .667   -
Silberman et al. [33]   .658   .661   -
SemiContour [45]        .680   .700   .690
SE [10]                 .685   .699   .679
gPb+NG [13]             .687   .716   .629
SE+NG+ [14]             .710   .723   .738
HED (RGB) [38]          .720   .734   .734
HED (HHA) [38]          .682   .695   .702
HED (RGB+HHA) [38]      .746   .761   .786
RCF (RGB+HHA) [23]      .757   .771   -
AMH-Net (RGB)           .744   .758   .765
AMH-Net (HHA)           .716   .729   .734
AMH-Net (RGB+HHA)       .771   .786   .802
MCG [28], LEP [46], and more recent CNN-based methods, including DeepEdge [2], DeepContour [31], HED [38], CEDN [41], COB [24]. We also report results of the RCF method [23], although
they are not comparable because in [23] an extra dataset (Pascal Context) was used during RCF
training to improve the results on BSDS500. In this series of experiments we consider AMH-Net with
FLAG-CRFs. The results of this comparison are shown in Table 1 and Fig. 4a. AMH-Net obtains
an F-measure (ODS) of 0.798, thus outperforming all previous methods. The improvement over the
second and third best approaches, i.e. COB and HED, is 0.5% and 1.0%, respectively, which is not
trivial to achieve on this challenging dataset. Furthermore, when considering the OIS and AP metrics,
our approach is also better, with a clear performance gap.
To perform experiments on NYUDv2, following previous works [38] we consider three different
types of input representations, i.e. RGB, HHA [14] and RGB-HHA data. The results corresponding
to the use of both RGB and HHA data (i.e. RGB+HHA) are obtained by performing a weighted
average of the estimates obtained from two AMH-Net models trained separately on RGB and HHA
representations. As baselines we consider gPb-ucm [1], OEF [15], the method in [33], SemiContour [45], SE [10], gPb+NG [13], SE+NG+ [14], HED [38] and RCF [23]. In this case the results
are comparable to the RCF [23] since the experimental protocol is exactly the same. All of them
are reported in Table 2 and Fig. 4b. Again, our approach outperforms all previous methods. In
particular, the increased performance with respect to HED [38] and RCF [23] confirms the benefit of
the proposed multi-scale feature learning and fusion scheme. Examples of qualitative results on the
BSDS500 and the NYUDv2 datasets are shown in Fig. 3.
Ablation Study. To further demonstrate the effectiveness of the proposed model and analyze the
impact of the different components of AMH-Net on the contour detection task, we conduct an
[Figure 4 here: precision-recall curves (Recall on the x-axis, Precision on the y-axis) with per-method F-measure legends; panel (a) BSDS500, panel (b) NYUDv2.]
Figure 4: Precision-Recall Curves on the BSDS500 and NYUDv2 test sets.
ablation study considering the NYUDv2 dataset (RGB data). We tested the following models:
(i) AMH-Net (baseline), which removes the first-level hierarchy and directly concatenates the
feature maps for prediction, (ii) AMH-Net (w/o AG-CRFs), which employs the proposed multi-scale
hierarchical structure but discards the AG-CRFs, (iii) AMH-Net (w/ CRFs), obtained by replacing
our AG-CRFs with a multi-scale CRF model without attention gating, (iv) AMH-Net (w/o deep
supervision) obtained removing intermediate loss functions in AMH-Net and (v) AMH-Net with the
proposed two versions of the AG-CRFs model, i.e. PLAG-CRFs and FLAG-CRFs. The results of
our comparison are shown in Table 3, where we also consider as reference traditional multi-scale
deep learning models employing multi-scale representations, i.e. Hypercolumn [16] and HED [38].
These results clearly show the advantages of our contributions. The ODS F-measure of AMH-Net (w/o AG-CRFs) is 1.1% higher than AMH-Net (baseline), clearly demonstrating the effectiveness of the proposed hierarchical network and confirming our intuition that exploiting richer and more diverse multi-scale representations is beneficial. Table 3 also shows that our AG-CRFs play a fundamental role for accurate detection, as AMH-Net (w/ FLAG-CRFs) leads to an improvement of 1.9% over AMH-Net (w/o AG-CRFs) in terms of ODS. Finally, AMH-Net (w/ FLAG-CRFs) is 1.2% and 1.5% better than AMH-Net (w/ CRFs) in ODS and AP metrics respectively, confirming the effectiveness of embedding an attention mechanism in the multi-scale CRF model. AMH-Net (w/o deep supervision) decreases the overall performance of our method by 1.9% in ODS, showing the crucial importance of deep supervision for better optimization of the whole AMH-Net. Comparing the performance of the proposed two versions of the AG-CRF model, i.e. PLAG-CRFs and FLAG-CRFs, we can see that AMH-Net (FLAG-CRFs) slightly outperforms AMH-Net (PLAG-CRFs) in both ODS and OIS, while bringing a significant improvement (around 2%) in AP. Finally, considering HED [38] and Hypercolumn [16], it is clear that our AMH-Net (FLAG-CRFs) is significantly better than these methods. Importantly, our approach utilizes only three scales while for HED [38] and Hypercolumn [16] we consider five scales. We believe that our accuracy could be further boosted by involving more scales.

Table 3: Performance analysis on NYUDv2 RGB data.

Method                          ODS    OIS    AP
Hypercolumn [16]                .718   .729   .731
HED [38]                        .720   .734   .734
AMH-Net (baseline)              .711   .720   .724
AMH-Net (w/o AG-CRFs)           .722   .732   .739
AMH-Net (w/ CRFs)               .732   .742   .750
AMH-Net (w/o deep supervision)  .725   .738   .747
AMH-Net (w/ PLAG-CRFs)          .737   .749   .746
AMH-Net (w/ FLAG-CRFs)          .744   .758   .765
5 Conclusions
We presented a novel multi-scale convolutional neural network for contour detection. The proposed
model introduces two main components, i.e. a hierarchical architecture for generating richer
and complementary multi-scale feature representations, and an Attention-Gated CRF model for
robust feature refinement and fusion. The effectiveness of our approach is demonstrated through
extensive experiments on two publicly available datasets, and state-of-the-art detection performance is
achieved. The proposed approach addresses a general problem, i.e. how to generate rich multi-scale
representations and optimally fuse them. Therefore, we believe it may be also useful for other
pixel-level tasks.
References
[1] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation.
TPAMI, 33(5), 2011.
[2] G. Bertasius, J. Shi, and L. Torresani. Deepedge: A multi-scale bifurcated deep network for top-down
contour detection. In CVPR, 2015.
[3] J. Canny. A computational approach to edge detection. TPAMI, (6):679–698, 1986.
[4] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech
recognition. In NIPS, 2015.
[5] X. Chu, W. Ouyang, X. Wang, et al. Crf-cnn: Modeling structured information in human pose estimation.
In NIPS, 2016.
[6] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on
sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[7] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. TPAMI, 24(5),
2002.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009.
[9] P. Dollár and C. L. Zitnick. Structured forests for fast edge detection. In ICCV, 2013.
[10] P. Dollár and C. L. Zitnick. Fast edge detection using structured forests. TPAMI, 37(8):1558–1570, 2015.
[11] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 59(2), 2004.
[12] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber. Learning precise timing with lstm recurrent networks.
Journal of Machine Learning Research, 3(Aug):115–143, 2002.
[13] S. Gupta, P. Arbelaez, and J. Malik. Perceptual organization and recognition of indoor scenes from rgb-d
images. In CVPR, 2013.
[14] S. Gupta, R. Girshick, P. Arbeláez, and J. Malik. Learning rich features from rgb-d images for object
detection and segmentation. In ECCV, 2014.
[15] S. Hallman and C. C. Fowlkes. Oriented edge forests for boundary detection. In CVPR, 2015.
[16] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint
arXiv:1512.03385, 2015.
[18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. In ACM MM, 2014.
[19] I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. arXiv preprint
arXiv:1511.07386, 2015.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[21] G. Li and Y. Yu. Visual saliency based on multiscale deep features. In CVPR, 2015.
[22] J. J. Lim, C. L. Zitnick, and P. Dollár. Sketch tokens: A learned mid-level representation for contour and
object detection. In CVPR, 2013.
[23] Y. Liu, M.-M. Cheng, X. Hu, K. Wang, and X. Bai. Richer convolutional features for edge detection. arXiv
preprint arXiv:1612.02103, 2016.
[24] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Convolutional oriented boundaries. In ECCV,
2016.
[25] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local
brightness, color, and texture cues. TPAMI, 26(5):530–549, 2004.
[26] T. Minka and J. Winn. Gates. In NIPS, 2009.
[27] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In NIPS, pages 2204–2212,
2014.
[28] J. Pont-Tuset, P. Arbelaez, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping for
image segmentation and object proposal generation. TPAMI, 2016.
[29] X. Ren. Multi-scale improves boundary detection in natural images. In ECCV, 2008.
[30] Z. Ren and G. Shakhnarovich. Image segmentation by cascaded region agglomeration. In CVPR, 2013.
[31] W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang. Deepcontour: A deep convolutional feature learned by
positive-sharing loss for contour detection. In CVPR, 2015.
[32] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8), 2000.
[33] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd
images. In ECCV, 2012.
[34] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[35] Y. Tang. Gated boltzmann machine for recognition under occlusion. In NIPS Workshop on Transfer
Learning by Learning Rich Generative Models, 2010.
[36] J. Winn. Causality with gates. In AISTATS, 2012.
[37] T. Xiao, Y. Xu, K. Yang, J. Zhang, Y. Peng, and Z. Zhang. The application of two-level attention models in
deep convolutional neural network for fine-grained image classification. In CVPR, 2015.
[38] S. Xie and Z. Tu. Holistically-nested edge detection. In ICCV, 2015.
[39] D. Xu, E. Ricci, W. Ouyang, X. Wang, and N. Sebe. Multi-scale continuous crfs as sequential deep
networks for monocular depth estimation. CVPR, 2017.
[40] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show,
attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
[41] J. Yang, B. Price, S. Cohen, H. Lee, and M.-H. Yang. Object contour detection with a fully convolutional
encoder-decoder network. 2016.
[42] S. Yang and D. Ramanan. Multi-scale recognition with dag-cnns. In ICCV, 2015.
[43] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint
arXiv:1511.07122, 2015.
[44] X. Zeng, W. Ouyang, J. Yan, H. Li, T. Xiao, K. Wang, Y. Liu, Y. Zhou, B. Yang, Z. Wang, et al. Crafting
gbd-net for object detection. arXiv preprint arXiv:1610.02579, 2016.
[45] Z. Zhang, F. Xing, X. Shi, and L. Yang. Semicontour: A semi-supervised learning approach for contour
detection. In CVPR, 2016.
[46] Q. Zhao. Segmenting natural images with the least effort as humans. In BMVC, 2015.
On-the-fly Operation Batching
in Dynamic Computation Graphs
Graham Neubig*
Language Technologies Institute
Carnegie Mellon University
[email protected]
Yoav Goldberg*
Computer Science Department
Bar-Ilan University
[email protected]
Chris Dyer
DeepMind
[email protected]
Abstract
Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more
flexibility for implementing models that cope with data of varying dimensions and
structure, relative to toolkits that operate on statically declared computations (e.g.,
TensorFlow, CNTK, and Theano). However, existing toolkits, both static and
dynamic, require that the developer organize the computations into the batches
necessary for exploiting high-performance algorithms and hardware. This batching
task is generally difficult, but it becomes a major hurdle as architectures become
complex. In this paper, we present an algorithm, and its implementation in the
DyNet toolkit, for automatically batching operations. Developers simply write
minibatch computations as aggregations of single instance computations, and the
batching algorithm seamlessly executes them, on the fly, using computationally
efficient batched operations. On a variety of tasks, we obtain throughput similar to
that obtained with manual batches, as well as comparable speedups over single-instance learning on architectures that are impractical to batch manually.2
1 Introduction
Modern CPUs and GPUs evaluate batches of arithmetic operations significantly faster than the
sequential evaluation of the same operations. For example, performing elementwise operations takes
nearly the same amount of time on the GPU whether operating on tens or on thousands of elements,
and multiplying a few hundred different vectors by the same matrix is significantly slower than
executing a single (equivalent) matrix?matrix product using an optimized GEMM implementation on
either a GPU or a CPU. Thus, careful grouping of operations into batches that can execute efficiently
in parallel is crucial for making the most of available hardware resources.
Today, developers who write code to train neural networks are responsible for crafting most of this
batch handling by hand. In some cases this is easy: when inputs and outputs are naturally represented
as fixed sized tensors (e.g., images of a fixed size such as those in the MNIST and CIFAR datasets, or
regression problems on fixed sized vector inputs), and the computations required to process each
instance are instance-invariant and expressible as standard operations on tensors (e.g., a series of
matrix multiplications, convolutions, and elementwise nonlinearities), a suitably flexible tensor library
* Authors contributed equally.
2 The proposed algorithm is implemented in DyNet (http://dynet.io/), and can be activated by using the
"--dynet-autobatch 1" command line flag.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 graphic: left, per-instance computation graphs whose losses are aggregated into a single loss L; right, the equivalent batched graph operating on padded input batches with masks.]
Figure 1: Two computation graphs for computing the loss on a minibatch of three training instances
consisting of a sequence of input vectors paired with a fixed sized output vector. On the left is a
"conceptual" computation graph which shows the operations associated with computing the losses
individually for each sequence and then aggregating them. The same computation is executed by
the right-hand ("batched") computation graph: it aggregates the inputs in order to make better
use of modern processors. This comes with a price in complexity: the variable length of the
sequences requires padding and masking operations. Our aim is for the user to specify the conceptual
computation on the left, and let the framework take care of its efficient execution.
that provides efficient implementations of higher-order generalizations of low-order operations makes
manual batching straightforward. For example, by adding a leading or trailing dimension to the
tensors representing inputs and outputs, multiple instances can be straightforwardly represented in a
single data structure. In other words: in this scenario, the developer conceives of and writes code for
the computation on an individual instance, packs several instances into a tensor as a ?minibatch?, and
the library handles executing these efficiently in parallel.
Unfortunately, this idealized scenario breaks when working with more complex architectures. Deep
learning is increasingly being applied to problems whose inputs, outputs and intermediate representations do not fit easily into fixed sized tensors. For example, images vary in size and sequences in
length; data may be structured as trees [29] or graphs [4, 17, 27], or the model may select its own
computation conditional on the input [16, 28, 33]. In all these cases, while the desired computation
is easy enough to write for a single instance, organizing the computational operations so that they
make optimally efficient use of the hardware is nontrivial. Indeed, many papers that operate on data
structures more complicated than sequences have avoided batching entirely [8, 15, 25]. In fact, until
last year [7, 20], all published work on recursive (i.e., tree-structured) neural networks appears to
have used single instance training.
The premise of this work is that operation batching should not be the responsibility of the user,
but instead should be a service provided by the framework. The user should only be responsible
for specifying a large enough computation so that batching is possible (i.e., summing the losses of
several instances, such as one sees in the left side of Figure 1), and the framework should take care of
the lower-level details of operation batching, much like optimizing compilers or JIT optimizers in
interpreted languages do.3
We take a large step towards this goal by introducing an efficient algorithm, and a corresponding
implementation, for automatic batching in dynamically declared computation graphs.4 Our method
relies on separating the graph construction from its execution, using operator overloading and lazy
evaluation (§2). Once this separation is in place, we propose a fast batching heuristic that can be
performed in real time, for each training instance (or minibatch), between the graph construction
and its execution (§3). We extend the DyNet toolkit [21] with this capability. From the end-user's
perspective, the result is a simple mechanism for exploiting efficient data-parallel algorithms in
networks that would be cumbersome to batch by hand. The user simply defines the computation
independently for each instance in the batch (using standard Python or C++ language constructs),
and the framework takes care of the rest. Experiments show that our algorithm compares favorably
to manually batched code, that significant speed improvements are possible on architectures with
no straightforward manual batching design, and that we obtain better performance than TensorFlow
Fold [19], an alternative framework built to simulate dynamic graph definition and automatic batching
on top of TensorFlow (§4).
3 This is in contrast to other existing options for automatic batching such as TensorFlow Fold, which require
the user to learn an additional domain-specific language to turn computation into a format conducive to automatic
batching [19].
4 Computation graphs (often represented in a form called a Wengert list) are the data structures used to structure
the evaluation of expressions and use reverse-mode automatic differentiation to compute their derivatives [3].
Broadly, learning frameworks use two strategies to construct these: static and dynamic. In static toolkits (e.g.,
Theano [6], TensorFlow [1]) the computation graph is defined once and compiled, and then examples are fed into
the same graph. In contrast, dynamic toolkits (e.g., DyNet [21], Chainer [32], PyTorch [http://pytorch.org])
construct the computation graph for each training instance (or minibatch) as the forward computation is executed.
While dynamic declaration means that each minibatch can have its own computational architecture, the user is
still responsible for batching operations themselves.
2 Batching: Conception vs. Efficient Implementation
To illustrate the challenges with batching, consider the problem of predicting a real-valued vector
conditional on a sequence of input vectors (this example is chosen for its simplicity; experiments are
conducted on more standard tasks). We assume that an input sequence of vectors is read sequentially
by an RNN, and then the final state is used to make a prediction; the training loss is the Euclidean
distance between the prediction and target. We compare two algorithms for computing this code: a
naïve, but developer-friendly one (whose computation graph is shown in the left part of Figure 1),
which reflects how one conceives of what a batch loss computation is; and a computationally efficient,
but more conceptually complex, version that batches up the computations so they are executed in
parallel across the sequences (the right part of Figure 1).
Naïve (developer-friendly) batched implementation The left part of Figure 1 shows the computations that must be executed to compute the losses associated with three (b = 3) training instances,
implemented naïvely. Pseudo-code for constructing the graph for each of the RNNs on the left using
a dynamic declaration framework is as follows:
function RNN-REGRESSION-LOSS(x_{1:n}, y; (W, U, b, c) = θ)
    h_0 = 0                                  ▷ Initial state of the RNN; h_t ∈ R^d.
    for t ∈ 1, 2, ..., n do
        h_t = tanh(W [h_{t-1}; x_t] + b)
    ŷ = U h_n + c
    L = ||ŷ - y||_2^2
    return L
Note that the code does not compute any value, but constructs a symbolic graph describing the
computation. This can then be integrated into a batched training procedure:
function TRAIN-BATCH-NAIVE(T = {(x^{(i)}_{1:n^{(i)}}, y^{(i)})}_{i=1}^{b}; θ)
    NEW-GRAPH()
    for i ∈ 1, 2, ..., b do                                  ▷ Naïvely loop over elements of batch.
        L^{(i)} = RNN-REGRESSION-LOSS(x^{(i)}_{1:n^{(i)}}, y^{(i)}; θ)   ▷ Single instance loss.
    L = Σ_i L^{(i)}                                          ▷ Aggregate losses for all elements in batch.
    FORWARD(L)
    ∂L/∂θ = BACKWARD(L)
    θ = θ - η ∂L/∂θ
This code is simple to understand, uses basic flow control present in any programming language and
simple mathematical operations. Unfortunately, executing it will generally be quite inefficient, since
in the resulting computation graph each operation is performed sequentially without exploiting the
fact that similar operations are being performed across the training instances.
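As a concrete, purely illustrative rendering of what the naïve graph encodes, the forward computation can be sketched in plain Python with made-up toy dimensions. This sketch computes only loss values (no symbolic graph, no gradients), and the helper name and dimensions are our own choices, not part of the paper:

```python
import math
import random

def rnn_regression_loss(xs, y, W, U, b, c):
    """Forward pass of the single-instance loss: h_t = tanh(W[h_{t-1}; x_t] + b),
    yhat = U h_n + c, L = ||yhat - y||_2^2. Vectors/matrices are plain lists."""
    h = [0.0] * len(b)                                   # h_0 = 0
    for x in xs:
        z = h + x                                        # concatenation [h; x]
        h = [math.tanh(sum(Wi[j] * z[j] for j in range(len(z))) + bi)
             for Wi, bi in zip(W, b)]
    yhat = [sum(Ui[j] * h[j] for j in range(len(h))) + ci
            for Ui, ci in zip(U, c)]
    return sum((p - t) ** 2 for p, t in zip(yhat, y))

# Naive batching: loop over variable-length instances and sum their losses.
random.seed(0)
d, dx, dy = 3, 2, 2                                      # toy dimensions
W = [[random.uniform(-1, 1) for _ in range(d + dx)] for _ in range(d)]
U = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(dy)]
b, c = [0.0] * d, [0.0] * dy
batch = [([[1.0, 0.0]], [0.0, 0.0]),                     # length-1 sequence
         ([[0.5, 0.5], [1.0, -1.0]], [1.0, 1.0])]        # length-2 sequence
L = sum(rnn_regression_loss(xs, y, W, U, b, c) for xs, y in batch)
```

Note that nothing in this loop exploits the fact that both instances apply the same tanh and affine operations at each step, which is exactly the inefficiency the batched versions address.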
Efficient manually batched implementation To make good use of efficient data-parallel algorithms and hardware, it is necessary to batch up the operations so that the sequences are processed in
parallel. The standard way to achieve this is by aggregating the inputs and outputs, altering the code
as follows:
function RNN-REGRESSION-BATCH-LOSS(X_{1:nmax}, Y, n^{(1:b)}; (W, U, b, c) = θ)
    M = 0                                        ▷ Build loss mask; M ∈ R^{b×nmax}.
    for i ∈ 1, 2, ..., b do
        M[i, n^{(i)}] = 1                        ▷ Position where the final symbol in sequence i occurs.
    H_0 = 0                                      ▷ Initial states of the RNN (one per instance); H_t ∈ R^{d×b}.
    for t ∈ 1, 2, ..., nmax do
        H_t = tanh(W [H_{t-1}; X_t] + b)         ▷ Addition broadcasts b over columns.
        Ŷ_t = U H_t + c                          ▷ Addition broadcasts c over columns.
        L_t = ||(Ŷ_t - Y) ⊙ (1 m_t^T)||_F^2      ▷ Compute masked losses (m_t is the tth column of M).
    L = Σ_t L_t
    return L
function TRAIN-BATCH-MANUAL(T = {(x^{(i)}_{1:n^{(i)}}, y^{(i)})}_{i=1}^{b}; θ)
    nmax = max_i n^{(i)}
    for t ∈ 1, 2, ..., nmax do                   ▷ Build sequence of batch input matrices.
        X_t = 0 ∈ R^{d×b}
        for i ∈ 1, 2, ..., b do
            X_t[:, i] = x_t^{(i)} if t ≤ n^{(i)} otherwise 0   ▷ The ith column of X_t.
    Y = [y^{(1)} y^{(2)} ··· y^{(b)}]            ▷ Build batch of output targets.
    NEW-GRAPH()      ▷ Now that inputs are constructed, create graph, evaluate loss and gradient.
    L = RNN-REGRESSION-BATCH-LOSS(X_{1:nmax}, Y, n^{(1:b)}; θ)
    FORWARD(L)
    ∂L/∂θ = BACKWARD(L)
    θ = θ - η ∂L/∂θ
This code computes the same value as the naïve implementation and does so more efficiently, but
it is significantly more complicated. Because the sequences processed by RNNs will generally be
of different lengths (which is precisely why RNNs are useful!), it is necessary to pad the input
representation with dummy values, and also to mask out the resulting losses at the right times. While
these techniques are part of the inventory of skills that a good ML engineer has, they increase the
difficulty of implementation and probability that bugs will be present in the code.
Implementation comparison The naïve algorithm has two advantages over manual batching. First,
it is easy to implement: the way we conceive of a model is the way it is implemented, and errors
with padding, masking, and batching are avoided. Second, the naïve algorithm aggregates any single
instance loss, whereas manual batching efforts are generally problem specific. For these reasons, one
should strongly prefer the first algorithm; however, for efficiency reasons, batching matters. In the
next section we turn to the problem of how to efficiently execute naïve computation graphs so that
they can take advantage of efficient batched implementations of operations. This provides the best of
both worlds to developers: code is easy to write, but execution is fast.
3 An Algorithm for On-the-fly Batching
Manual batching, discussed in the previous section, mostly operates by aggregating input instances
and feeding them through a network. In RNNs, this means aggregating inputs that share a time
step. This often requires padding and masking, as input sizes may differ. It also restricts the kinds
of operations that can be batched. In contrast, our method identifies and aggregates computation
graph nodes that can be executed in a batched fashion for a given graph. This reduces the need
for workarounds such as padding and masking, allows for seamless efficient execution also in
architectures which are hard to conceptualize in the input-centric paradigm, and allows for the
identification of batching opportunities that may not be apparent from an input-centric view.
Our batching procedure operates in three steps (1) graph definition, (2) operation batching, and (3)
computation. Here, steps (1) and (3) are shared with standard execution of computation graphs, while
(2) corresponds to our proposed method.
3.1 Graph Definition
First, we define the graph that represents the computation that we want to perform. From the user?s
perspective, this is done by simply performing computation that they are interested in performing,
such as that defined in the RNN-REGRESSION-LOSS function from the previous example. While it is
common for dynamic graph frameworks to interleave the graph definition and its forward execution,
we separate these parts by using lazy evaluation: we only perform forward evaluation when a resulting
value is requested by the user through the calling of the FORWARD function. The graph can be further
extended after a call to FORWARD, and further calls will lazily evaluate the delta of the computation.
This allows the accumulation of large graph chunks before executing forward computations, providing
ample opportunities for operation batching.
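The separation between declaration and execution can be illustrated with a minimal, hypothetical sketch using scalar values only; real DyNet nodes hold tensors and support many more operations, and the class below is our own invention, not the library's API. Operator overloading records the graph, and values are only computed when a FORWARD-style evaluation is requested:

```python
class Node:
    """A symbolic scalar expression node. Building expressions only records
    the graph; nothing is evaluated until forward() is called."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

    def __add__(self, other):
        return Node("add", (self, other))

    def __mul__(self, other):
        return Node("mul", (self, other))

    def forward(self):
        # Lazy evaluation: compute (and cache) a value only when requested.
        # Cached values mean a later forward() on an extended graph only
        # evaluates the newly added portion (the "delta").
        if self.value is None:
            vals = [n.forward() for n in self.inputs]
            self.value = vals[0] + vals[1] if self.op == "add" else vals[0] * vals[1]
        return self.value

x = Node("input", value=2.0)
y = Node("input", value=3.0)
z = x * y + x                 # graph construction only; z.value is still None
assert z.value is None
assert z.forward() == 8.0
w = z * y                     # extend the graph after a forward() call
assert w.forward() == 24.0    # only the new node is evaluated; z is cached
```

Accumulating a large graph before the first `forward()` call is what gives the batching step below a wide window of operations to group.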
3.2 Operation Batching
Next, given a computation graph, such as the one on the left side of Figure 1, our proposed algorithm
converts it into a graph where operations that can be executed together are batched together. This is
done in the two step process described below.
Computing compatibility groups We first partition the nodes into compatibility groups, where
nodes in the same group have the potential for batching. This is done by associating each node with
a signature such that nodes that share the same signature are guaranteed to be able to be executed
in a single operation if their inputs are ready. Signatures vary depending on the operation the node
represents. For example, in nodes representing element-wise operations, all nodes with the same
operation can be batched together, so the signature is simply the operation name (tanh, log, ...). In
nodes where dimensions or other information is also relevant to whether the operations can be batched,
this information is also included in the signature. For example, a node that picks a slice of the input
matrix will also be dependent on the matrix size and range to slice, so the signature will look something
like slice-400x500-100:200. In some other cases (e.g. a parameterized matrix multiply) we may
remember the specific node ID of one of the inputs (e.g. node123 representing the matrix multiply
parameters) while generalizing across other inputs (e.g. data or hidden state vectors on the right-hand
side), resulting in a signature that would look something like matmul-node123-400x1. A more
thorough discussion is given in Appendix A.
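A sketch of such a signature function follows; the dict-based node representation and field names are hypothetical stand-ins for DyNet's internal node objects, and the real implementation covers many more operation types:

```python
def signature(node):
    """Map a graph node to a string such that nodes with equal signatures can
    be executed as one batched operation once their inputs are ready."""
    op = node["op"]
    if op in ("tanh", "log", "exp"):           # elementwise: the op name suffices
        return op
    if op == "slice":                          # dims and range affect batchability
        r, c = node["dims"]
        lo, hi = node["range"]
        return "slice-{}x{}-{}:{}".format(r, c, lo, hi)
    if op == "matmul":                         # pin the shared parameter node,
        return "matmul-node{}-{}x1".format(    # generalize over the other input
            node["param_id"], node["out_rows"])
    return op                                  # fallback: batch by op name only

s = signature({"op": "slice", "dims": (400, 500), "range": (100, 200)})
assert s == "slice-400x500-100:200"
a = signature({"op": "matmul", "param_id": 123, "out_rows": 400})
b = signature({"op": "matmul", "param_id": 123, "out_rows": 400})
assert a == b == "matmul-node123-400x1"
```

Two `matmul` nodes sharing parameter node 123 receive the same signature and can therefore be merged into one matrix-matrix product, while multiplications by a different parameter matrix would not.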
Determining execution order A computation graph is essentially a job dependency graph where
each node depends on its input (and by proxy the input of other preceding nodes on the path to
its inputs). Our goal is to select an execution order in which (1) each node is executed after its
dependencies; and (2) nodes that have the same signature and do not depend on each other are
scheduled for execution on the same step (and will be executed in a single batched operation).
Finding an optimal execution order that maximizes the amount of batching in the general case is
NP hard [24]. We discuss two heuristic strategies for identifying execution orders that satisfy these
requirements.
Depth-based batching is used as a method for automatic batching in TensorFlow Fold [19]. This is
done by calculating the depth of each node in the original computation graph, defined as the maximum
length from a leaf node to the node itself, and batching together nodes that have an identical depth and
signature. By construction, nodes of the same depth are not dependent on each other, as all nodes will
have a higher depth than their input, and thus this batching strategy is guaranteed to satisfy condition
(1) above. However, this strategy will also miss some good batching opportunities. For example, the
loss function calculations in Figure 1 are of different depths due to the different-lengthed sequences,
and similar problems will occur in recurrent neural network language models, tree-structured neural
networks, and a myriad of other situations.
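Depth-based batching can be sketched as follows (the adjacency-list graph encoding is our own, hypothetical one). Note how the final tanh nodes of two different-length chains land at different depths and therefore cannot be batched together, which is exactly the missed opportunity described above:

```python
from collections import defaultdict

def depth_batches(inputs_of, sig):
    """Group nodes by (depth, signature); execute groups by increasing depth.
    depth(v) = length of the longest path from a leaf to v."""
    depth = {}
    def d(v):
        if v not in depth:
            depth[v] = 0 if not inputs_of[v] else 1 + max(d(u) for u in inputs_of[v])
        return depth[v]
    groups = defaultdict(list)
    for v in inputs_of:
        groups[(d(v), sig[v])].append(v)
    return [groups[k] for k in sorted(groups)]

# Two "RNN" chains of lengths 1 and 2 over the same tanh signature.
inputs_of = {"x1": [], "h1": ["x1"],
             "x2": [], "h2a": ["x2"], "h2b": ["h2a"]}
sig = {"x1": "in", "x2": "in", "h1": "tanh", "h2a": "tanh", "h2b": "tanh"}
batches = depth_batches(inputs_of, sig)
# h1 (end of the short chain) and h2b (end of the long chain) are split apart:
assert batches == [["x1", "x2"], ["h1", "h2a"], ["h2b"]]
```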
Agenda-based batching is a method we propose that does not depend solely on depth. The core of
this method is an agenda that tracks ?available? nodes that have no unresolved dependencies. For
each node, a count of its unresolved dependencies is maintained; this is initialized to be the number
of inputs to the node. The agenda is initialized by adding nodes that have no incoming inputs (and
thus no unresolved dependencies). At each iteration, we select a node from the agenda together with
all of the available nodes in the same signature, and group them into a single batch operation. These
nodes are then removed from the agenda, and the dependency counter of all of their successors are
decremented. Any new zero-dependency nodes are added to the agenda. This process is repeated
until all nodes have been processed.
How do we prioritize between multiple available nodes in the agenda? Intuitively, we want to avoid
prematurely executing nodes if there is a potential for more nodes of the same signature to be added
to the agenda at a later point, resulting in better batching. A good example of this from our running
example in Figure 1 is the loss-calculating nodes, which will be added to the agenda at different points
due to becoming calculable after different numbers of RNN time steps. To capture this intuition, we
introduce a heuristic method for prioritizing nodes based on the average depth of all nodes with their

signature, such that nodes with a lower average depth will be executed earlier. In general (with some
exceptions), this tends to prioritize nodes that occur in earlier parts of the graph, which will result
in the nodes in the later parts of the graph, such as these loss calculations, being executed later and
hopefully batched together.5
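The agenda-based procedure, including the average-depth prioritization heuristic, can be sketched as follows; the graph encoding is hypothetical and the real implementation adds the optimizations of Appendix B. Unlike depth-based batching, it manages to batch two loss nodes even though they sit at different depths:

```python
from collections import defaultdict

def agenda_batches(inputs_of, sig):
    """Agenda-based batching: keep an agenda of dependency-free nodes; at each
    step, execute all agenda nodes sharing the signature whose average depth
    (over all nodes with that signature) is lowest."""
    succs, ndeps, depth = defaultdict(list), {}, {}
    for v, ins in inputs_of.items():
        ndeps[v] = len(ins)
        for u in ins:
            succs[u].append(v)
    def d(v):
        if v not in depth:
            depth[v] = 0 if not inputs_of[v] else 1 + max(d(u) for u in inputs_of[v])
        return depth[v]
    by_sig = defaultdict(list)
    for v in inputs_of:
        by_sig[sig[v]].append(d(v))
    prio = {s: sum(ds) / len(ds) for s, ds in by_sig.items()}
    agenda = [v for v in inputs_of if ndeps[v] == 0]
    schedule = []
    while agenda:
        best = min({sig[v] for v in agenda}, key=prio.get)   # lowest avg depth
        batch = [v for v in agenda if sig[v] == best]
        schedule.append(batch)
        agenda = [v for v in agenda if sig[v] != best]
        for v in batch:                                      # release successors
            for w in succs[v]:
                ndeps[w] -= 1
                if ndeps[w] == 0:
                    agenda.append(w)
    return schedule

# Chains of lengths 1 and 2, each ending in a loss node (depths 2 and 3).
inputs_of = {"x1": [], "h1": ["x1"], "L1": ["h1"],
             "x2": [], "h2a": ["x2"], "h2b": ["h2a"], "L2": ["h2b"]}
sig = {"x1": "in", "x2": "in", "h1": "tanh", "h2a": "tanh",
       "h2b": "tanh", "L1": "loss", "L2": "loss"}
schedule = agenda_batches(inputs_of, sig)
assert ["L1", "L2"] in schedule   # both losses executed as one batch
```

Because the `loss` signature has a higher average depth than `tanh`, the scheduler delays executing `L1` until `L2` also becomes available, then runs both as a single batched operation.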
Finally, this non-trivial batching procedure must be executed quickly so that overhead due to batch
scheduling calculations doesn't cancel out the efficiency gains from operation batching. To ensure
this, we perform a number of optimizations in the implementation, which we detail in Appendix B.
3.3 Forward-backward Graph Execution and Update
Once we have determined an execution order (including batching decisions), we perform calculations
of the values themselves. In standard computation graphs, forward computation is done in topological
order to calculate the function itself, and backward calculation is done in reverse topological order to
calculate gradients. In our automatically batched evaluation, the calculation is largely similar with
two exceptions:
Single→batch node conversion First, it is necessary to convert single nodes into a batched node,
which also requires modification of the underlying operations, such as converting multiple matrix-vector
operations Wh_i to a single matrix-matrix operation WH. This is done internally in the library,
while the user-facing API maintains the original unbatched computation graph structure, making this
process invisible to the user.
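The equivalence this conversion relies on can be checked with a toy pure-Python sketch; in the library the batched side is a single optimized GEMM call rather than the column-by-column loop used here for clarity:

```python
def matvec(W, h):
    """y = W h for a list-of-rows matrix and a list vector."""
    return [sum(w * x for w, x in zip(row, h)) for row in W]

def matmat(W, H):
    """Y = W H, computed column by column; column i of Y equals W @ h_i."""
    cols = [list(c) for c in zip(*H)]            # columns of H
    out_cols = [matvec(W, c) for c in cols]
    return [list(r) for r in zip(*out_cols)]     # reassemble row-major Y

W = [[1.0, 2.0], [3.0, 4.0]]
h1, h2 = [1.0, 0.0], [0.0, 1.0]
H = [list(r) for r in zip(h1, h2)]               # pack the h_i as columns of H
Y = matmat(W, H)
assert [row[0] for row in Y] == matvec(W, h1)    # column 0 of WH is W h1
assert [row[1] for row in Y] == matvec(W, h2)    # column 1 of WH is W h2
```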
Ensuring contiguous memory To ensure that operations can be executed as a batch, the inputs to
the operations (e.g. the various vectors h_t^{(i)}) must be arranged in contiguous memory (e.g. a matrix
H_t). In some cases, it is necessary to perform a memory copy to arrange these inputs into contiguous
memory, but in other cases the inputs are already contiguous and in the correct order, and in these
cases we can omit the memory copy and use the inputs as-is.6
4 Experiments
In this section we describe our experiments, designed to answer three main questions: (1) in situations
where manual batching is easy, how close can the proposed method approach the efficiency of a
program that uses hand-crafted manual batching, and how do the depth-based and agenda-based
approaches compare (§4.1)? (2) in situations where manual batching is less easy, is the proposed
method capable of obtaining significant improvements in efficiency (§4.2)? (3) how does the proposed
method compare to TensorFlow Fold, an existing method for batching variably structured networks
within a static declaration framework (§4.3)?
4.1 Synthetic Experiments
Our first experiments stress-test our proposed algorithm in an ideal case for manual batching. Specifically, we train a model on a bi-directional LSTM sequence labeler [12, 23], on synthetic data where
every sequence to be labeled is the same length (40). Because of this, manual batching is easy because
we don't have to do any padding or adjustment for sentences of different lengths. The network takes
as input a size 200 embedding vector from a vocabulary of size 1000, has 2 layers of 256 hidden node
LSTMs in either direction, then predicts a label from one of 300 classes. The batch size is 64.7
Within this setting we test various batching settings: Without or with manual mini-batching where
we explicitly batch the word vector lookup, LSTM update, and loss calculation for each time step.
5 Even given this prioritization method it is still possible to have ties, in which case we break ties by calculating
"cheap" operations (e.g. tanh and other elementwise ops) before "heavy" ones (e.g. matrix multiplies).
6 The implication of this is that batched computation will take up to twice as much memory as unbatched
computation, but in practice the memory usage is much less than this. Like manually batched computation,
memory usage can be controlled by adjusting the batch size appropriately so it fits in memory.
7 Experiments were run on a single Tesla K80 GPU or Intel Xeon 2.30GHz E5-2686v4 CPU. To control for
variance in execution time, we perform three runs and report the fastest. We do not report accuracy numbers, as
the functions calculated and thus accuracies are the same regardless of batching strategy.
[Figure 2 graphic: stacked bars of per-sentence time (CPU and GPU, ms/sent) broken down into forward graph construction, forward computation, backward graph construction, backward computation, and parameter update, for each batching configuration (without/with manual batching × no auto / by depth / by agenda).]
Figure 2: Computation time for forward/backward graph construction or computation, as well
as parameter update for a BiLSTM tagger without or with manual batching, and without, with
depth-based, or with agenda-based automatic batching.
Without on-the-fly batching (NOAUTO), with depth-based autobatching (BYDEPTH), or with agenda-based
autobatching (BYAGENDA). We measure the speed of each method in ms/sent and also break
down the percentage of computation time spent in (1) forward graph creation/on-the-fly batching, (2)
forward computation, (3) backward graph creation, (4) backward computation, (5) parameter update.
The results can be found in Figure 2. First, comparing the first row with the second two, we can
see that the proposed on-the-fly batching strategy drastically reduces computation time per sentence,
with BYAGENDA reducing per-sentence computation time from 193ms to 16.9ms on CPU and
54.6ms to 5.03ms on GPU, resulting in an approximately 11-fold increase in sentences processed per
second (5.17→59.3 on CPU and 18.3→198 on GPU). BYAGENDA is faster than BYDEPTH by about
15-30%, demonstrating that our more sophisticated agenda-based strategy is indeed more effective at
batching together operations.
Next, compared to manual batching without automatic batching (the fourth row), we can see that fully
automatic batching with no manual batching is competitive, but slightly slower. The speed decrease
is attributed to the increased overhead for computation graph construction and batch scheduling.
However, even in this extremely idealized scenario where manual batching will be most competitive,
the difference is relatively small (1.27× on CPU and 1.76× on GPU) compared to the extreme
difference between the case of using no batching at all.
major advantages such as ease of implementation, it may be an attractive alternative even in situations
where manual batching is relatively easy.
Finally, if we compare the fourth and fifth/sixth rows, we can see that on GPU, even with manual
batching, automatic batching still provides gains in computational efficiency, processing sentences
up to 1.1 times faster than without automatic batching. The reason for this can be attributed to the
fact that our BiLSTM implementation performs manual batching across sentences, but not across
time steps within the sentence. In contrast, the auto-batching procedure was able to batch the word
embedding lookup and softmax operations across time-steps as well, reducing the number of GPU
calls and increasing speed. This was not the case for CPU, as there is less to be gained from batching
these less expensive operations.
4.2 Experiments on Difficult-to-batch Tasks
Next, we extend our experiments to cases that are increasingly more difficult to manually batch.
We use realistic dimension sizes for the corresponding tasks, and batches of size b = 64. Exact
dimensions and further details on training settings are in Appendix C.
BiLSTM: This is similar to the ideal case in the previous section, but trained on actual variable
length sequences.
BiLSTM w/char: This is the same as the BiLSTM tagger above, except that we use an additional
BiLSTM over characters to calculate the embeddings over rare words. These sorts of
Table 1: Sentences/second on various training tasks for increasingly challenging batching scenarios.

Task               |         CPU                  |         GPU
                   | NOAUTO  BYDEPTH  BYAGENDA    | NOAUTO  BYDEPTH  BYAGENDA
BiLSTM             |  16.8     139      156       |  56.2     337      367
BiLSTM w/ char     |  15.7     93.8     132       |  43.2     183      275
TreeLSTM           |  50.2     348      357       |  76.5     672      661
Transition-Parsing |  16.8     61.0     61.2      |  33.0     89.5     90.1
character-based embeddings have been shown to allow the model to generalize better [18],
but also makes batching operations more difficult, as we now have a variable-length
encoding step that may or may not occur for each of the words in the input.
Tree-structured LSTMs: This is the Tree-LSTM model of [31]. Here, each instance is a tree rather
than a sequence, and the network structure follows the tree structures. As discussed in the
introduction, this architecture is notoriously hard to manually batch.
Transition-based Dependency Parsing: The most challenging case we evaluate is that of a
transition-based system, such as a transition-based parser with LSTM-based feature
extraction [8, 9, 13] and exploration-based training [2, 5, 10]. Here, a sequence is encoded
using an LSTM (or a bi-LSTM), followed by a series of predictions. Each prediction based
on a subset of the encoded vectors, and the vectors that participate in each prediction, as
well as the loss, are determined by the outcomes of the previous predictions. Here, batching
is harder yet as the nature of the computation interleaves sampling from the model and
training, and requires calling FORWARD at each step, leaving the automatic batcher very
little room to play with. However, with only a small change to the computation, we can run
b different parsers ?in parallel?, and potentially share the computation across the different
systems in a given time-step. Concretely, we use a modified version of the BIST parser [14].
From the results in Table 1, we can see that in all cases automatic batching gives healthy improvements
in computation time: 3.6-9.2× on the CPU, and 2.7-8.6× on the GPU. Furthermore, the agenda-based
heuristic is generally more effective than the depth-based one.
4.3 Comparison to TensorFlow Fold
We compare the TensorFlow Fold reference implementation of the Stanford Sentiment Treebank
regression task [30], using the same TreeLSTM architecture [31]. Figure 3 shows how
many trees are processed per second by TF (excluding both evaluation of the dev set and static
graph construction/optimization) on GPU and CPU relative to the performance of the BYAGENDA
algorithm in DyNet (including graph construction time). The DyNet performance is
better across the board stratified by hardware type. Furthermore, DyNet has greater throughput
on CPU than TensorFlow Fold on GPU until batch sizes exceed 64. Additionally, we find
that with single instance training, DyNet's sequential evaluation processes 46.7 trees/second
on CPU, whereas autobatching processes 93.6 trees/second. This demonstrates that in complex
architectures like TreeLSTMs, there are opportunities to batch up operations inside a single training
instance, which are exploited by our batching algorithm. In addition, it should be noted that the DyNet
implementation has the advantage that it is much more straightforward, relying on simple Python data
structures and flow control to represent and traverse the trees, while the Fold implementation requires
implementing the traversal and composition logic in a domain-specific functional programming
language (described in Section 3 of Looks et al. [19]).

Figure 3: Comparison of runtime performance between TensorFlow Fold and DyNet with autobatching on TreeLSTMs (trees/sec).
5 Related Work
Optimization of static algorithms is widely studied, and plays an important role in numerical libraries
used in machine learning. Our work is rather different since the code/workload (as represented by
the computation graph) is dynamically specified and must be executed rapidly, which precludes
sophisticated static analysis. However, we review some of the important related work here.
Automatic graph optimization and selection of kernels for static computation graphs is used in a
variety of toolkits, including TensorFlow [1] and Theano [6]. Dynamic creation of optimally sized
minibatches (similar to our strategy, except the computation graph is assumed to be static) that make
good use of hardware resources has also been proposed for optimizing convolutional architectures
[11]. The static nature of the computation makes these tools closer to optimizing compilers rather
than to efficient interpreters, which are required to cope with the dynamic workloads encountered when
dealing with dynamically structured computations.
Related to this is the general technique of automatic vectorization, which is a mainstay of optimizing
compilers. Recent work has begun to explore vectorization in the context of interpreted code which
cannot be compiled [26]. Our autobatching variant of DyNet similarly provides vectorized
primitives that can be selected dynamically.
Further afield, the problem of scheduling with batching decisions has been widely studied in operations
research since at least the 1950s (for a recent survey, see [24]). Although the OR work deals
with similar problems (e.g., scheduling work on machines that can process a "family" of related items
with minimal marginal cost over a single item), the standard algorithms from this field (which are
often based on polynomial-time dynamic programs or approximations to NP-hard search problems)
are too computationally demanding to execute in the inner loop of a learning algorithm.
6 Conclusion
Deep learning research relies on empirical exploration of architectures. The rapid pace of innovation
we have seen in the last several years has been enabled largely by tools that have automated the
error-prone aspects of engineering, such as writing code that computes gradients. However, our
contention is that operation batching is increasingly becoming another aspect of model coding that is
error prone and amenable to automation.
Our solution is a framework that lets programmers express computations naturally and relies on a
smart yet lightweight interpreter to figure out how to execute the operations efficiently. Our hope is
that this will facilitate the creation of new classes of models that better cope with the complexities of
real-world data.
Acknowledgements: The work of YG is supported by the Israeli Science Foundation (grant number
1555/15) and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale
machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467,
2016.
[2] Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. Training with exploration
improves a greedy stack LSTM parser. In Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 2005–2010, November 2016.
[3] Michael Bartholomew-Briggs, Steven Brown, Bruce Christianson, and Laurence Dixon. Automatic differentiation of algorithms. Journal of Computational and Applied Mathematics,
124:171–190, 2000.
[4] Peter W. Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu.
Interaction networks for learning about objects, relations and physics. In Neural Information
Processing Systems (NIPS), 2016.
[5] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for
sequence prediction with recurrent neural networks. CoRR, abs/1506.03099, 2015.
[6] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume
Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU
math compiler in Python. In Proc. 9th Python in Science Conf, pages 1–7, 2010.
[7] Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning,
and Christopher Potts. A fast unified model for parsing and sentence understanding. In Annual
Conference of the Association for Computational Linguistics (ACL), pages 1466–1477, 2016.
[8] Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition-based dependency parsing with stack long short-term memory. In Annual Conference of the
Association for Computational Linguistics (ACL), pages 334–343, 2015.
[9] Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural
network grammars. In Conference of the North American Chapter of the Association for
Computational Linguistics (NAACL), pages 199–209, 2016.
[10] Yoav Goldberg and Joakim Nivre. Training deterministic parsers with non-deterministic oracles.
Transactions of the Association for Computational Linguistics, 1:403–414, 2013.
[11] Stefan Hadjis, Firas Abuzaid, Ce Zhang, and Christopher Ré. Caffe con troll: Shallow ideas
to speed up deep learning. In Proceedings of the Fourth Workshop on Data analytics at sCale
(DanaC 2015), 2015.
[12] Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional LSTM-CRF models for sequence tagging.
arXiv preprint arXiv:1508.01991, 2015.
[13] Eliyahu Kiperwasser and Yoav Goldberg. Easy-first dependency parsing with hierarchical tree
LSTMs. Transactions of the Association for Computational Linguistics, 4:445–461, 2016.
[14] Eliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using
bidirectional LSTM feature representations. Transactions of the Association for Computational
Linguistics, 4:313–327, 2016.
[15] Faisal Ladhak, Ankur Gandhe, Markus Dreyer, Lambert Matthias, Ariya Rastrow, and Björn
Hoffmeister. Latticernn: Recurrent neural networks over lattices. In Proc. INTERSPEECH,
2016.
[16] Chengtao Li, Daniel Tarlow, Alexander L. Gaunt, Marc Brockschmidt, and Nate Kushman.
Neural program lattices. In International Conference on Learning Representations (ICLR),
2017.
[17] Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. Semantic object
parsing with graph LSTM. In Proc. ECCV, 2016.
[18] Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis
Marujo, and Tiago Luis. Finding function in form: Compositional character models for open
vocabulary word representation. In Conference on Empirical Methods in Natural Language
Processing (EMNLP), pages 1520–1530, 2015.
[19] Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. Deep learning with
dynamic computation graphs. In International Conference on Learning Representations (ICLR),
2017.
[20] Gilles Louppe, Kyunghyun Cho, Cyril Becot, and Kyle Cranmer. QCD-aware recursive neural
networks for jet physics. arXiv:1702.00748, 2017.
[21] Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios
Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh,
Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro,
Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi
Saphra, Swabha Swayamdipta, and Pengcheng Yin. DyNet: The dynamic neural network
toolkit. arXiv preprint arXiv:1701.03980, 2017.
[22] Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. Learning
multilingual named entity recognition from Wikipedia. Artificial Intelligence, 194:151–175,
2012.
[23] Barbara Plank, Anders Søgaard, and Yoav Goldberg. Multilingual part-of-speech tagging with
bidirectional long short-term memory models and auxiliary loss. In Annual Conference of the
Association for Computational Linguistics (ACL), pages 412–418, 2016.
[24] Chris N. Potts and Mikhail Y. Kovalyov. Scheduling with batching: A review. European Journal
of Operational Research, 20(2):228–249, 2000.
[25] Scott Reed and Nando de Freitas. Neural programmer-interpreters. In International Conference
on Learning Representations (ICLR), 2016.
[26] Erven Rohou, Kevin Williams, and David Yuste. Vectorization technology to improve interpreter
performance. ACM Transactions on Architecture and Code Optimization, 9(4), 2013.
[27] Franco Scarselli, Marco Gori, Ah Chung Tsoi, and Gabriele Monfardini. The graph neural
network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[28] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts
layer. In International Conference on Learning Representations (ICLR), 2017.
[29] Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and
natural language with recursive neural networks. In International Conference on Machine
Learning (ICML), pages 129–136, 2011.
[30] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng,
and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment
treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP),
2013.
[31] Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations
from tree-structured long short-term memory networks. In Annual Conference of the Association
for Computational Linguistics (ACL), 2015.
[32] Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open
source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems
(LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing
Systems (NIPS), 2015.
[33] Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. Learning
to compose words into sentences with reinforcement learning. In International Conference on
Learning Representations (ICLR), 2017.
Nonlinear Acceleration of Stochastic Algorithms
Damien Scieur
INRIA, ENS,
PSL Research University,
Paris, France
[email protected]
Francis Bach
INRIA, ENS,
PSL Research University,
Paris, France
[email protected]
Alexandre d'Aspremont
CNRS, ENS,
PSL Research University,
Paris, France
[email protected]
Abstract
Extrapolation methods use the last few iterates of an optimization algorithm to
produce a better estimate of the optimum. They were shown to achieve optimal
convergence rates in a deterministic setting using simple gradient iterates. Here,
we study extrapolation methods in a stochastic setting, where the iterates are
produced by either a simple or an accelerated stochastic gradient algorithm. We
first derive convergence bounds for arbitrary, potentially biased perturbations, then
produce asymptotic bounds using the ratio between the variance of the noise and
the accuracy of the current point. Finally, we apply this acceleration technique
to stochastic algorithms such as SGD, SAGA, SVRG and Katyusha in different
settings, and show significant performance gains.
1 Introduction
We focus on the problem
$$\min_{x \in \mathbb{R}^d} f(x) \qquad (1)$$
where $f$ is an $L$-smooth and $\mu$-strongly convex function with respect to the Euclidean norm, i.e.,
$$\frac{\mu}{2}\,\|y-x\|^2 \;\le\; f(y) - f(x) - \nabla f(x)^T (y-x) \;\le\; \frac{L}{2}\,\|y-x\|^2.$$
We consider a stochastic first-order oracle, which gives a noisy estimate of the gradient of $f(x)$, with
$$\tilde{\nabla} f(x) = \nabla f(x) + \varepsilon, \qquad (2)$$
where $\varepsilon$ is a noise term with bounded variance. This is the case for example when $f$ is a sum of
strongly convex functions, and we only have access to the gradient of one randomly selected function.
Stochastic optimization (2) is typically challenging, as classical algorithms are not convergent (for
example, gradient descent or Nesterov's accelerated gradient). Even the averaged version of stochastic
gradient descent with constant step size does not converge to the solution of (1), but to another point
whose proximity to the real minimizer depends on the step size [Nedić and Bertsekas, 2001; Moulines
and Bach, 2011].
When f is a finite sum of N functions, then algorithms such as SAG [Schmidt et al., 2013], SAGA
[Defazio et al., 2014], SDCA [Shalev-Shwartz and Zhang, 2013] and SVRG [Johnson and Zhang,
2013] accelerate convergence using a variance reduction technique akin to control variates in
Monte Carlo methods. Their rate of convergence depends on $1 - \mu/L$ and thus does not exhibit an
accelerated rate on par with the deterministic setting (in $1 - \sqrt{\mu/L}$). Recently, a generic acceleration
algorithm called Catalyst [Lin et al., 2015], based on the proximal point method, improved this rate
of convergence, but its practical performance depends heavily on the input parameters. On the
other hand, recent papers, for example [Shalev-Shwartz and Zhang, 2014] (Accelerated SDCA) and
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Allen-Zhu, 2016] (Katyusha), propose algorithms with accelerated convergence rates, if the strong
convexity parameter is given.
When f is a quadratic function then averaged SGD converges, but the rate of decay of initial conditions
is very slow. Recently, some results have focused on accelerated versions of SGD for quadratic
optimization, showing that with a two-step recursion it is possible to enjoy both the optimal rate for
the bias and variance terms [Flammarion and Bach, 2015], given an estimate of the ratio between the
distance to the solution and the variance of $\varepsilon$.
A novel generic acceleration technique was recently proposed by Scieur et al. [2016] in the deterministic setting. This uses iterates from a slow algorithm to extrapolate estimates of the solution with
asymptotically optimal convergence rate. Moreover, this rate is reached without prior knowledge of
the strong convexity constant, whose online estimation is still a challenge (even in the deterministic
case [Fercoq and Qu, 2016]) but required if one wants to obtain optimal rates of convergence.
Convergence bounds are derived by Scieur et al. [2016], tracking the difference between the deterministic first-order oracle of (1) and iterates from a linearized model. The main contribution of this paper
is to extend the analysis to arbitrary perturbations, including stochastic ones, and to present numerical
results when this acceleration method is used to speed up stochastic optimization algorithms.
In Section 2 we recall the extrapolation algorithm, and quickly summarize its main convergence
bounds in Section 3. In Section 4, we consider a stochastic oracle and analyze its asymptotic
convergence in Section 5. Finally, in Section 6 we describe numerical experiments which confirm the
theoretical bounds and show the practical efficiency of this acceleration.
2 Regularized Nonlinear Acceleration
Consider the optimization problem
$$\min_{x \in \mathbb{R}^d} f(x)$$
where $f$ is an $L$-smooth and $\mu$-strongly convex function [Nesterov, 2013]. Applying the fixed-step
gradient method to this problem yields the following iterates
$$\hat{x}_{t+1} = \hat{x}_t - \frac{1}{L}\,\nabla f(\hat{x}_t). \qquad (3)$$
Let $x^*$ be the unique optimal point; this algorithm is proved to converge with
$$\|\hat{x}_t - x^*\| \;\le\; (1-\kappa)^t\, \|\hat{x}_0 - x^*\|, \qquad (4)$$
where $\|\cdot\|$ stands for the $\ell_2$ norm and $\kappa = \mu/L \in [0,1[$ is the (inverse of the) condition number of $f$
[Nesterov, 2013]. Using a two-step recurrence, the accelerated gradient descent by Nesterov [2013]
achieves the improved convergence rate
$$\|\hat{x}_t - x^*\| \;\le\; O\big((1-\sqrt{\kappa})^t\big)\, \|\hat{x}_0 - x^*\|. \qquad (5)$$
Indeed, (5) converges faster than (4), but the accelerated algorithm requires the knowledge of $\mu$ and $L$.
Extrapolation techniques however obtain a similar convergence rate, but do not need estimates of the
parameters $\mu$ and $L$. The idea is based on the comparison between the process followed by $\hat{x}_i$ and a
linearized model around the optimum (obtained by the first-order approximation of $\nabla f(x)$), written
$$x_{t+1} = x_t - \frac{1}{L}\Big(\underbrace{\nabla f(x^*)}_{=0} + \nabla^2 f(x^*)\,(x_t - x^*)\Big), \qquad x_0 = \hat{x}_0,$$
which can be rewritten as
$$x_{t+1} - x^* = \big(I - \nabla^2 f(x^*)/L\big)(x_t - x^*), \qquad x_0 = \hat{x}_0. \qquad (6)$$
A better estimate of the optimum in (6) can be obtained by forming a linear combination of the
iterates (see [Anderson, 1965; Cabay and Jackson, 1976; Mešina, 1977]), with
$$\Big\|\sum_{i=0}^t c_i x_i - x^*\Big\| \;\ll\; \|x_t - x^*\|,$$
for some specific $c_i$ (either data driven, or derived from Chebyshev polynomials). These procedures
were limited to quadratic functions only, i.e., when $\hat{x}_i = x_i$, but this was recently extended to generic
convex problems by Scieur et al. [2016]; we briefly recall these results below.
To simplify the notations, we write
$$\hat{x}_{t+1} = g(\hat{x}_t) \qquad (7)$$
to be one step of algorithm $g$. We have that $g$ is differentiable, Lipschitz-continuous with constant
$(1-\kappa) < 1$, $g(x^*) = x^*$ and $g'(x^*)$ is symmetric. For example, the gradient method (3) matches
exactly this definition with $g(x) = x - \nabla f(x)/L$. Running $k$ steps of (7) produces a sequence
$\{\hat{x}_0, \ldots, \hat{x}_k\}$, which we extrapolate using Algorithm 1 from Scieur et al. [2016].
Algorithm 1 Regularized Nonlinear Acceleration (RNA)
Input: Iterates $\hat{x}_0, \hat{x}_1, \ldots, \hat{x}_{k+1} \in \mathbb{R}^d$ produced by (7), and a regularization parameter $\lambda > 0$.
1: Compute $\tilde{R} = [\tilde{r}_0, \ldots, \tilde{r}_k]$, where $\tilde{r}_i = \hat{x}_{i+1} - \hat{x}_i$ is the $i$-th residue.
2: Solve
$$\tilde{c}^\lambda = \operatorname*{argmin}_{c^T \mathbf{1} = 1} \|\tilde{R}c\|^2 + \lambda\|c\|^2,$$
or equivalently solve $(\tilde{R}^T \tilde{R} + \lambda I)\,z = \mathbf{1}$ and then set $\tilde{c}^\lambda = z / \mathbf{1}^T z$.
Output: Approximation of $x^*$ computed as $\sum_{i=0}^{k} \tilde{c}^{\lambda}_i \hat{x}_i$.
For a good choice of $\lambda$, the output of Algorithm 1 is a much better estimate of the optimum than
$\hat{x}_{k+1}$ (or any other point of the sequence). Using a simple grid search on a few values of $\lambda$ is usually
sufficient to improve convergence (see [Scieur et al., 2016] for more details).
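Both steps of Algorithm 1 reduce to a small linear solve. The following NumPy sketch runs plain gradient descent on a quadratic and then extrapolates with RNA; the test function, step size and value of $\lambda$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def rna(X, lam):
    """Regularized Nonlinear Acceleration (Algorithm 1).
    X: array of shape (k+2, d) holding the iterates x_0, ..., x_{k+1}.
    Returns the extrapolated estimate sum_i c_i x_i with c^T 1 = 1."""
    R = np.diff(X, axis=0)                 # residues r_i = x_{i+1} - x_i, one per row
    RR = R @ R.T                           # Gram matrix of the residues (R~^T R~ in column notation)
    z = np.linalg.solve(RR + lam * np.eye(len(RR)), np.ones(len(RR)))
    c = z / z.sum()                        # normalize so the coefficients sum to one
    return c @ X[:-1]                      # linear combination of x_0, ..., x_k

# Gradient descent on a strongly convex quadratic f(x) = 0.5 * x^T A x, minimized at x* = 0.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = Q @ np.diag(np.linspace(0.01, 1.0, 20)) @ Q.T    # spectrum in [mu, L] = [0.01, 1]
x = rng.standard_normal(20)
iterates = [x]
for _ in range(10):
    x = x - A @ x                          # fixed step 1/L = 1
    iterates.append(x)
X = np.array(iterates)
x_rna = rna(X, lam=1e-10)
print(np.linalg.norm(X[-1]), np.linalg.norm(x_rna))  # compare distances to x* = 0
```

With a tiny $\lambda$ on these exactly-linear iterates, the extrapolation behaves like a data-driven polynomial acceleration and lands much closer to the optimum than the last gradient iterate.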
3 Convergence of Regularized Nonlinear Acceleration
We quickly summarize the argument behind the convergence of Algorithm 1. The theoretical bound
compares $\hat{x}_i$ to the iterates produced by the linearized model
$$x_{t+1} = x^* + \nabla g(x^*)\,(x_t - x^*), \qquad x_0 = \hat{x}_0. \qquad (8)$$
This sequence is useful to extend the convergence results to the nonlinear case, using sensitivity
analysis. We write $c^\lambda$ for the coefficients computed by Algorithm 1 from the "linearized" sequence
$\{x_0, \ldots, x_{k+1}\}$, and the error term can be decomposed into three parts,
$$\Big\|\sum_{i=0}^k \tilde{c}^{\lambda}_i \hat{x}_i - x^*\Big\| \;\le\; \underbrace{\Big\|\sum_{i=0}^k c^{\lambda}_i x_i - x^*\Big\|}_{\text{Acceleration}} + \underbrace{\Big\|\sum_{i=0}^k \big(\tilde{c}^{\lambda}_i - c^{\lambda}_i\big)(x_i - x^*)\Big\|}_{\text{Stability}} + \underbrace{\Big\|\sum_{i=0}^k \tilde{c}^{\lambda}_i\,(\hat{x}_i - x_i)\Big\|}_{\text{Nonlinearity}}. \qquad (9)$$
Scieur et al. [2016] show that convergence is guaranteed as long as the errors $(\hat{x}_i - x^*)$ and $(x_i - \hat{x}_i)$
converge to zero fast enough, which ensures a good rate of decay for the regularization parameter
$\lambda$, leading to an asymptotic rate equivalent to the accelerated rate in (5). In this section, we will use
results from Scieur et al. [2016] to bound each individual term, but in this paper we improve the final
convergence result.
The stability term (in $\tilde{c}^\lambda - c^\lambda$) is bounded using the perturbation matrix
$$P \,\triangleq\, R^T R - \tilde{R}^T \tilde{R}, \qquad (10)$$
where $R$ and $\tilde{R}$ are the matrices of residuals,
$$R \,\triangleq\, [r_0, \ldots, r_k], \qquad r_t = x_{t+1} - x_t, \qquad (11)$$
$$\tilde{R} \,\triangleq\, [\tilde{r}_0, \ldots, \tilde{r}_k], \qquad \tilde{r}_t = \hat{x}_{t+1} - \hat{x}_t. \qquad (12)$$
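To make definitions (10) through (12) concrete, the sketch below builds the residual matrices from a linearized sequence and a perturbed copy of it, then forms $P$; the linear model, dimensions and noise levels are arbitrary illustrative choices.

```python
import numpy as np

def residual_matrix(X):
    """Stack the residues r_t = x_{t+1} - x_t as columns, as in (11)-(12)."""
    return np.diff(X, axis=0).T                 # shape (d, k+1)

rng = np.random.default_rng(1)
G = 0.9 * np.eye(5)                             # linear model: x_{t+1} - x* = G (x_t - x*), x* = 0
x0 = rng.standard_normal(5)
X = np.array([np.linalg.matrix_power(G, t) @ x0 for t in range(8)])

norms = []
for eps in (1e-2, 1e-4):
    X_hat = X + eps * rng.standard_normal(X.shape)   # perturbed iterates x_hat
    R, R_hat = residual_matrix(X), residual_matrix(X_hat)
    P = R.T @ R - R_hat.T @ R_hat                    # perturbation matrix (10)
    norms.append(np.linalg.norm(P, 2))
print(norms)   # ||P|| shrinks together with the perturbation size
```

Since $P$ is a difference of two Gram matrices it is symmetric, and its norm goes to zero as the perturbed sequence approaches the linearized one, which is exactly what the stability analysis exploits.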
The proofs of the following propositions were obtained by Scieur et al. [2016].
Proposition 3.1 (Stability). Let $\Delta c^\lambda = \tilde{c}^\lambda - c^\lambda$ be the gap between the coefficients computed
by Algorithm 1 using the sequences $\{\hat{x}_i\}$ and $\{x_i\}$ with regularization parameter $\lambda$. Let $P = R^T R - \tilde{R}^T \tilde{R}$ be defined in (10), (11) and (12). Then
$$\|\Delta c^\lambda\| \;\le\; \frac{\|P\|}{\lambda}\,\|c^\lambda\|. \qquad (13)$$
This implies that the stability term is bounded by
$$\Big\|\sum_{i=0}^k \Delta c^{\lambda}_i\,(x_i - x^*)\Big\| \;\le\; \frac{\|P\|}{\lambda}\,\|c^\lambda\|\; O(\|x_0 - x^*\|). \qquad (14)$$
The term Nonlinearity is bounded by the norm of the coefficients $\tilde{c}^\lambda$ (controlled thanks to the
regularization parameter) times the norm of the noise matrix
$$E = [x_0 - \hat{x}_0,\; x_1 - \hat{x}_1,\; \ldots,\; x_k - \hat{x}_k]. \qquad (15)$$
Proposition 3.2 (Nonlinearity). Let $\tilde{c}^\lambda$ be computed by Algorithm 1 using the sequence
$\{\hat{x}_0, \ldots, \hat{x}_{k+1}\}$ with regularization parameter $\lambda$, and let $\tilde{R}$ be defined in (12). The norm of $\tilde{c}^\lambda$ is
bounded by
$$\|\tilde{c}^\lambda\| \;\le\; \frac{1}{\sqrt{k+1}}\,\sqrt{\frac{\|\tilde{R}\|^2 + \lambda}{\lambda}}. \qquad (16)$$
This bounds the nonlinearity term, because
$$\Big\|\sum_{i=0}^k \tilde{c}^{\lambda}_i\,(\hat{x}_i - x_i)\Big\| \;\le\; \sqrt{1 + \frac{\|\tilde{R}\|^2}{\lambda}}\;\frac{\|E\|}{\sqrt{k+1}}, \qquad (17)$$
where $E$ is defined in (15).
These two propositions show that the regularization in Algorithm 1 limits the impact of the noise: the
higher $\lambda$ is, the smaller these terms are. It remains to control the acceleration term. For small $\lambda$, this
term decreases as fast as the accelerated rate (5), as shown in the following proposition.
Proposition 3.3 (Acceleration). Let $\mathcal{P}_k$ be the subspace of real polynomials of degree at most $k$ and
let $S_\kappa(k, \alpha)$ be the solution of the regularized Chebyshev polynomial problem
$$S_\kappa(k, \alpha) \,\triangleq\, \min_{p \in \mathcal{P}_k}\; \max_{x \in [0,\,1-\kappa]}\; p^2(x) + \alpha\,\|p\|^2 \quad \text{s.t.} \quad p(1) = 1. \qquad (18)$$
Let $\bar{\alpha} = \frac{\lambda}{\|x_0 - x^*\|^2}$ be the normalized value of $\lambda$. The acceleration term is bounded by
$$\Big\|\sum_{i=0}^k c^{\lambda}_i x_i - x^*\Big\| \;\le\; \frac{1}{\kappa}\,\sqrt{S_\kappa(k, \bar{\alpha})\,\|x_0 - x^*\|^2 - \lambda\,\|c^\lambda\|^2}. \qquad (19)$$
We also get the following corollary, which will be useful for the asymptotic analysis of the rate of
convergence of Algorithm 1.
Corollary 3.4. If $\lambda \to 0$, the bound (19) becomes
$$\Big\|\sum_{i=0}^k c^{\lambda}_i x_i - x^*\Big\| \;\le\; \frac{1}{\kappa}\left(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\right)^{\!k} \|x_0 - x^*\|.$$
Proof. When $\lambda = 0$, (19) becomes $\frac{1}{\kappa}\sqrt{S_\kappa(k,0)}\,\|x_0 - x^*\|$. The exact value of $S_\kappa(k,0)$ is obtained
by using the coefficients of a rescaled Chebyshev polynomial, derived by Golub and Varga [1961] and
Scieur et al. [2016], and $\sqrt{S_\kappa(k,0)}$ is equal to $\big(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\big)^{k}$.
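To get a feel for the gap between the basic rate (4) and the accelerated rate of Corollary 3.4, one can simply evaluate both per-problem factors; the values of $\kappa$ and $k$ below are arbitrary illustrative choices.

```python
# Compare the gradient-descent rate factor (1 - kappa)^k from (4)
# with the Chebyshev factor ((1 - sqrt(kappa)) / (1 + sqrt(kappa)))^k
# appearing in Corollary 3.4, for a moderately ill-conditioned problem.
kappa, k = 0.01, 50
plain = (1 - kappa) ** k
accel = ((1 - kappa ** 0.5) / (1 + kappa ** 0.5)) ** k
print(plain, accel)   # the accelerated factor is smaller by several orders of magnitude
```

This is the usual square-root-of-condition-number effect: for $\kappa = 0.01$ the accelerated contraction per step is roughly $0.82$ versus $0.99$ for plain gradient descent.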
These last results controlling stability, nonlinearity and acceleration are proved by Scieur et al. [2016].
We now refine the final step of Scieur et al. [2016] to produce a global bound on the error that will
allow us to extend these results to the stochastic setting in the next sections.
Theorem 3.5. If Algorithm 1 is applied to the sequence $\hat{x}_i$ with regularization parameter $\lambda$, it
converges with rate
$$\Big\|\sum_{i=0}^k \tilde{c}^{\lambda}_i \hat{x}_i - x^*\Big\| \;\le\; \|x_0 - x^*\|\, S_\kappa^{\frac{1}{2}}(k, \bar{\alpha})\, \sqrt{\frac{1}{\kappa^2} + \frac{O(\|x_0 - x^*\|^2)\,\|P\|^2}{\lambda^3}} \;+\; \sqrt{1 + \frac{\|\tilde{R}\|^2}{\lambda}}\;\frac{\|E\|}{\sqrt{k+1}}. \qquad (20)$$
Proof. The proof is inspired by Scieur et al. [2016] and is straightforward. We can bound (9) using
(14) (Stability), (17) (Nonlinearity) and (19) (Acceleration). It remains to maximize over the value
of $\|c^\lambda\|$ using the result of Proposition A.2.
This last bound is not very explicit, in particular because of the regularized Chebyshev term $S_\kappa(k, \bar{\alpha})$.
The solution is well known when $\bar{\alpha} = 0$, since it corresponds exactly to the rescaled Chebyshev
polynomial [Golub and Varga, 1961], but as far as we know there is no known result about its
regularized version, which makes the "finite-step" version hard to analyze. However, an asymptotic
analysis simplifies it considerably. The next new proposition shows that when $x_0$ is close to $x^*$,
extrapolation converges as fast as in (5) in some cases.
Proposition 3.6. Assume $\|\tilde{R}\| = O(\|x_0 - x^*\|)$, $\|E\| = O(\|x_0 - x^*\|^2)$ and $\|P\| = O(\|x_0 - x^*\|^3)$.
If we choose $\lambda = O(\|x_0 - x^*\|^s)$ with $s \in [2, \frac{8}{3}]$, then the bound (20) becomes
$$\lim_{\|x_0 - x^*\| \to 0}\; \frac{\big\|\sum_{i=0}^k \tilde{c}^{\lambda}_i \hat{x}_i - x^*\big\|}{\|x_0 - x^*\|} \;\le\; \frac{1}{\kappa}\left(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\right)^{\!k}.$$
Proof. (Sketch) The proof is based on the fact that λ decreases slowly enough to ensure that the Stability and Nonlinearity terms vanish over time, but fast enough to have λ̄ = λ/‖x₀ − x*‖² → 0. Then it remains to bound S_κ(k, 0) with Corollary 3.4. The complete proof can be found in the Supplementary materials.
Note: The assumptions are satisfied if we apply the gradient method on a twice differentiable, smooth and strongly convex function with Lipschitz-continuous Hessian [Scieur et al., 2016].
The efficiency of Algorithm 1 is thus ensured by two conditions. First, we need to be able to bound ‖R̃‖, ‖P‖ and ‖E‖ by decreasing quantities. Second, we have to find a proper rate of decay for λ and λ̄ such that the stability and nonlinearity terms go to zero when the perturbations also go to zero. If these two conditions are met, then the accelerated rate in Proposition 3.6 holds.
4   Nonlinear and Noisy Updates
In (7) we defined g(x) to be nonlinear, which generates a sequence x̃ᵢ. We now consider noisy iterates
$$\tilde x_{t+1} = g(\tilde x_t) + \eta_{t+1}, \quad (21)$$
where ηₜ is a stochastic noise. To simplify notations, we write (21) as
$$\tilde x_{t+1} = x^* + G(\tilde x_t - x^*) + \varepsilon_{t+1}, \quad (22)$$
where εₜ is a stochastic noise (potentially correlated with the iterates xᵢ) with bounded mean νₜ, ‖νₜ‖ ≤ ν, and bounded covariance Σₜ ⪯ (σ²/d) I. We also assume 0 ⪯ G ⪯ (1 − κ)I and G is symmetric. For example, (22) can be linked to (21) if we set εₜ = ηₜ + O(‖x̃ₜ − x*‖²), which corresponds to the combination of the noise ηₜ₊₁ with the Taylor remainder of g(x) around x*, i.e.,
$$\tilde x_{t+1} = g(\tilde x_t) + \eta_{t+1} = \underbrace{g(x^*)}_{=\,x^*} + \underbrace{\nabla g(x^*)}_{=\,G}(\tilde x_t - x^*) + \underbrace{O(\|\tilde x_t - x^*\|^2) + \eta_{t+1}}_{=\,\varepsilon_{t+1}}.$$
The recursion (22) is also valid when we apply the stochastic gradient method with fixed step size h to the quadratic problem
$$\min_x \tfrac12 \|Ax - b\|^2.$$
This corresponds to (22) with G = I − hAᵀA and mean ν = 0. For the theoretical results, we will compare x̃ₜ with their noiseless counterpart to control convergence,
$$x_{t+1} = x^* + G(x_t - x^*), \qquad x_0 = \tilde x_0. \quad (23)$$
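To see the reduction concretely: a noisy gradient step on the least-squares objective can be rewritten exactly in the form (22), using only the normal equations AᵀA x* = Aᵀb. A small numerical check of this identity (problem sizes and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # satisfies A^T A x* = A^T b
h = 1.0 / np.linalg.norm(A.T @ A, 2)            # fixed step size h = 1/L
G = np.eye(d) - h * (A.T @ A)                   # symmetric, 0 <= G <= (1 - kappa) I

x = rng.standard_normal(d)
eta = rng.standard_normal(d)                    # one draw of the gradient noise
# one stochastic gradient step on F(x) = 0.5 * ||A x - b||^2
x_next = x - h * (A.T @ (A @ x - b) + eta)
# the same step written in the form (22), with eps = -h * eta (zero mean, nu = 0)
x_form22 = x_star + G @ (x - x_star) - h * eta
assert np.allclose(x_next, x_form22)
```

Since ε = −hη, the noise in form (22) has zero mean whenever the gradient noise η does.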
5   Convergence Analysis when Accelerating Stochastic Algorithms
We will control convergence in expectation. Bound (9) now becomes
$$\mathbb{E}\Big\|\sum_{i=0}^{k} \tilde c_i^{\lambda} \tilde x_i - x^*\Big\| \;\le\; \mathbb{E}\Big\|\sum_{i=0}^{k} c_i^{\lambda} x_i - x^*\Big\| + O(\|x_0 - x^*\|)\,\mathbb{E}\|\Delta c^{\lambda}\| + \mathbb{E}\big[\|\tilde c^{\lambda}\|\,\|E\|\big]. \quad (24)$$
We now need to enforce bounds (14), (17) and (19) in expectation. The proofs of the two next propositions are in the supplementary material. For simplicity, we will omit all constants in what follows.
Proposition 5.1. Consider the sequences xᵢ and x̃ᵢ generated by (21) and (23). Then,
$$\mathbb{E}[\|\tilde R\|] \le O(\|x_0 - x^*\|) + O(\nu + \sigma), \quad (25)$$
$$\mathbb{E}[\|E\|] \le O(\nu + \sigma), \quad (26)$$
$$\mathbb{E}[\|P\|] \le O\big((\nu + \sigma)\|x_0 - x^*\|\big) + O\big((\nu + \sigma)^2\big). \quad (27)$$
We define the following stochastic condition number
$$\tau \;\triangleq\; \frac{\nu + \sigma}{\|x_0 - x^*\|}.$$
Proposition 5.2 gives the result when injecting these bounds in (24).
Proposition 5.2. The accuracy of extrapolation Algorithm 1 applied to the sequence {x̃₀, ..., x̃ₖ} generated by (21) is bounded by
$$\frac{\mathbb{E}\big\|\sum_{i=0}^{k} \tilde c_i^{\lambda} \tilde x_i - x^*\big\|}{\|x_0 - x^*\|} \;\le\; \frac{1}{\kappa}\sqrt{S_\kappa^2(k, \bar\lambda) + \frac{\tau^2(1 + \tau^2)}{\bar\lambda^2}} \;+\; O\Big(\sqrt{\frac{\tau^2(1 + \tau)^2}{\bar\lambda^3}} + \tau\Big). \quad (28)$$
?
Consider a situation where ? is small, e.g. when using stochastic gradient descent with fixed step-size,
? and ? ensuring the
with x0 far from x? . The following proposition details the dependence between ?
upper convergence bound remains stable when ? goes to zero.
? = ?(? s ) with s ?]0, 2 [, we have the accelerated rate
Proposition 5.3. When ? ? 0, if ?
3
? k
Pk
??
kx0 ? x? k.
(29)
E k i=0 c??i x
?i ? x? k ? ?1 1?
1+ ?
Moreover, if λ → ∞, we recover the averaged gradient,
$$\mathbb{E}\Big\|\sum_{i=0}^{k} \tilde c_i^{\lambda} \tilde x_i - x^*\Big\| = \mathbb{E}\Big\|\frac{1}{k+1}\sum_{i=0}^{k} \tilde x_i - x^*\Big\|.$$
Proof. Let λ̄ = Θ(τ^s). Using (28) we have
$$\mathbb{E}\Big\|\sum_{i=0}^{k} \tilde c_i^{\lambda} \tilde x_i - x^*\Big\| \;\le\; \frac{\|x_0 - x^*\|}{\kappa}\, S_\kappa(k, \tau^s)\sqrt{1 + O\big(\tau^{2-3s}(1+\tau)^2\big)} \;+\; \|x_0 - x^*\|\, O\Big(\sqrt{\tau^2 + \tau^{2-3s}(1 + \tau^2)}\Big).$$
Because s ∈ ]0, 2/3[ means 2 − 3s > 0, we have lim_{τ→0} τ^{2−3s} = 0. The limit when τ → 0 is thus exactly (29). If λ → ∞, we also have
$$\lim_{\lambda\to\infty} \tilde c^{\lambda} = \lim_{\lambda\to\infty}\ \operatorname*{argmin}_{c:\,\mathbf{1}^T c = 1} \|\tilde R c\|^2 + \lambda\|c\|^2 = \operatorname*{argmin}_{c:\,\mathbf{1}^T c = 1} \|c\|^2 = \frac{\mathbf{1}}{k+1},$$
which yields the desired result.
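The λ → ∞ limit in the proof can be checked numerically: for very large λ the regularized coefficients collapse to the uniform weights 1/(k+1), i.e. plain iterate averaging. A small sketch (the residual matrix below is an arbitrary placeholder, not a real R̃):

```python
import numpy as np

def rna_coeffs(R, lam):
    """Coefficients c = argmin_{1^T c = 1} ||R c||^2 + lam ||c||^2 (closed form)."""
    m = R.shape[1]
    z = np.linalg.solve(R.T @ R + lam * np.eye(m), np.ones(m))
    return z / z.sum()

rng = np.random.default_rng(0)
k = 6
R = rng.standard_normal((10, k + 1))     # stand-in for a residual matrix R tilde
c_inf = rna_coeffs(R, 1e12)              # lambda -> infinity: uniform averaging
assert np.allclose(c_inf, np.ones(k + 1) / (k + 1), atol=1e-6)
```

The deviation from the uniform vector is of order ‖R̃ᵀR̃‖/λ, so it vanishes as λ grows.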
Proposition 5.3 shows that Algorithm 1 is thus asymptotically optimal provided λ is well chosen, because it recovers the accelerated rate for smooth and strongly convex functions when the perturbations go to zero. Moreover, we recover Proposition 3.6 when εₜ is the Taylor remainder, i.e. with σ = O(‖x₀ − x*‖²) and ν = 0, which matches the deterministic results.
Algorithm 1 is particularly efficient when combined with a restart scheme [Scieur et al., 2016]. From a theoretical point of view, the acceleration peak arises for small values of k. Empirically, the improvement is usually more important at the beginning, i.e. when k is small. Finally, the algorithmic complexity is O(k²d), which is linear in the problem dimension when k remains bounded.
The benefits of extrapolation are limited in a regime where the noise dominates. However, when τ is relatively small we can expect a significant speedup. This condition is satisfied in many cases, for example at the initial phase of stochastic gradient descent or when optimizing a sum of functions with variance reduction techniques, such as SAGA or SVRG.
6   Numerical Experiments
6.1   Stochastic gradient descent
We want to solve the least-squares problem
$$\min_{x\in\mathbb{R}^d} F(x) = \tfrac12\|Ax - b\|^2,$$
where AᵀA satisfies μI ⪯ AᵀA ⪯ LI. To solve this problem, we have access to the stochastic first-order oracle
$$\tilde\nabla F(x) = \nabla F(x) + \eta,$$
where η is a zero-mean noise of covariance matrix Σ ⪯ (σ²/d) I.
We will compare several methods.
• SGD. Fixed step-size, x_{t+1} = x_t − (1/L) ∇̃F(x_t).
• Averaged SGD. Iterate x_k is the mean of the k first iterations of SGD.
• AccSGD. The optimal two-step algorithm in Flammarion and Bach [2015], with optimal parameters (this implies ‖x₀ − x*‖ and σ are known exactly).
• RNA+SGD. The regularized nonlinear acceleration Algorithm 1 applied to a sequence of k iterates of SGD, with k = 10 and λ = ‖R̃ᵀR̃‖ · 10⁻⁶.
By Proposition 5.2, we know that RNA+SGD will not converge to arbitrary precision because the
noise is additive with a non-vanishing variance. However, Proposition 5.3 predicts an improvement
of the convergence at the beginning of the process. We illustrate this behavior in Figure 1. We
clearly see that at the beginning, the performance of RNA+SGD is comparable to that of the optimal
accelerated algorithm. However, because of the restart strategy, in the regime where the level of
noise becomes more important the acceleration becomes less effective and finally the convergence
stalls, as for SGD. Of course, for practical purposes, the first regime is the most important because it
effectively minimizes the generalization error [Défossez and Bach, 2015; Jain et al., 2016].
6.2   Finite sums of functions
We focus on the composite problem
$$\min_{x\in\mathbb{R}^d} F(x) = \sum_{i=1}^{N} \tfrac{1}{N} f_i(x) + \tfrac{\mu}{2}\|x\|^2,$$
where the fᵢ are convex and L-smooth functions and μ is the regularization parameter. We will use classical methods for
minimizing F (x) such as SGD (with fixed step size), SAGA [Defazio et al., 2014], SVRG [Johnson
and Zhang, 2013], and also the accelerated algorithm Katyusha [Allen-Zhu, 2016]. We will compare
their performance with and without the (potential) acceleration provided by Algorithm 1 with restart
after k data passes. The parameter λ is found by a grid search of size k, the size of the input sequence,
but it adds only one data pass at each extrapolation. Actually, the grid search can be faster if we
approximate F (x) with fewer samples, but we choose to present Algorithm 1 in its simplest version.
We set k = 10 for all the experiments.
In order to balance the complexity of the extrapolation algorithm and the optimization method we wait
several data queries before adding the current point (the ?snapshot?) of the method to the sequence.
Indeed, the extrapolation algorithm has a complexity of O(k 2 d) + O(N ) (computing the coefficients
c̃λ and the grid search over λ). If we wait at least O(N) updates, then the extrapolation method is of
the same order of complexity as the optimization algorithm.
• SGD. We add the current point after N data queries (i.e. one epoch), and k snapshots of SGD cost kN data queries.
7
PSfrag replacements
SGD
Ave. SGD
Acc. SGD
RNA + SGD
10 4
PSfrag replacements
SGD
Ave. SGD
Acc. SGD
RNA + SGD
f (x) ? f (x? )
PSfrag replacements
10 2
PSfrag replacements
10 0
SGD
Ave. SGD
Acc. SGD
RNA + SGD
10
10
0
?
f (x) ? f (x ) 10 4
10 2
10 0
10 2
10 1
10 2
10 4
10 0
Iteration
Iteration
10 2
10 4
Iteration
replacements
Left: ? = 10, ? = 10?2 . Center: ? PSfrag
= 1000,
? = 10?2 . Right: ? = 1000, ? = 10?6 .
PSfrag replacements
SGD
PSfrag replacements
SGD
Ave. SGD
Acc. SGD
RNA + SGD
10 2
f (x) ? f (x? )
f (x) ? f (x? )
10 3
f (x) ? f (x? )
-2
10 0
SGD
Ave. SGD
Acc. SGD
RNA + SGD
SGD
Ave. SGD
Acc. SGD
RNA + SGD
10 2
10
Ave. SGD
Acc. SGD
RNA + SGD
2
10 3
10 2
?
f (x) ? f (x )
10 0
f (x) ? f (x? )
10 1
10 0
10 0
10 -2
f (x) ? f (x? )
10 0
f (x) ? f (x? ) 10 4
Iteration
10 2
10 0
10 2
Iteration
10 4
10 0
10 2
10 4
Iteration
Left: ? = 10, ? = 1/d. Center: ? = 100, ? = 1/d. Right: ? = 1000, ? = 1/d.
Figure 1: Comparison of performance between SGD, averaged SGD, Accelerated SGD [Flammarion
and Bach, 2015] and RNA+SGD. We tested the performance on a matrix AT A of size d = 500, with
(top) random eigenvalues between ? and 1 and (bottom) decaying eigenvalues from 1 to 1/d. We
start at kx0 ? x? k = 104 , where x0 and x? are generated randomly.
• SAGA. We compute the gradient table exactly, then we add a new point after N queries, and k snapshots of SAGA cost (k + 1)N queries. Since we optimize a sum of quadratic or logistic losses, we used the version of SAGA which stores O(N) scalars.
• SVRG. We compute the gradient exactly, then perform N queries (the inner-loop of SVRG), and k snapshots of SVRG cost 2kN queries.
• Katyusha. We compute the gradient exactly, then perform 4N gradient calls (the inner-loop of Katyusha), and k snapshots of Katyusha cost 3kN queries.
We compare these various methods for solving least-squares regression and logistic regression on several datasets (Table 1), with several condition numbers: well (μ = 100/N), moderately (μ = 1/N) and badly (μ = 1/100N) conditioned. In this section, we present the numerical results on Sid (the Sido0 dataset, where N = 12678 and d = 4932) with bad conditioning; see Figure 2. The other experiments are highlighted in the supplementary material.
In Figure 2, we clearly see that both SGD and RNA+SGD do not converge. This is mainly due to
the fact that we do not average the points. In any case, except for quadratic problems, the averaged
version of SGD does not converge to the minimum of F with arbitrary precision.
We also notice that Algorithm 1 is unable to accelerate Katyusha. This issue was already raised by Scieur et al. [2016]: when the algorithm has a momentum term (like Nesterov's method), the underlying dynamical system is harder to extrapolate, in particular because the matrix present in the linearized version of such systems is not symmetric.
Because the iterates of SAGA and SVRG have low variance, their accelerated versions converge faster to the optimum, and their performance is then comparable to Katyusha. In our experiments, Katyusha was faster than RNA+SAGA only once, when solving a least-squares problem on Sido0
[Figure 2 appears here: four panels of error f(x) − f(x*) versus epoch (left column) and versus time in seconds (right column), for SAGA, SGD, SVRG, Katyusha and their accelerated variants RNA+SAGA, RNA+SGD, RNA+SVRG, RNA+Kat.]
Figure 2: Optimization of quadratic loss (Top) and logistic loss (Bottom) with several algorithms, using the Sid dataset with bad conditioning. The experiments are done in Matlab. Left: Error vs epoch number. Right: Error vs time.
with a bad condition number. Recall however that the acceleration Algorithm 1 does not require the
specification of the strong convexity parameter, unlike Katyusha.
Acknowledgments
The authors would like to acknowledge support from a starting grant from the European Research Council (ERC project SIPA), from the European Union's Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement number 607290 SpaRTaN, as well as support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche and a gift from Société Générale Cross Asset Quantitative Research.
References
Allen-Zhu, Z. [2016], 'Katyusha: The first direct acceleration of stochastic gradient methods', arXiv preprint arXiv:1603.05953.
Anderson, D. G. [1965], 'Iterative procedures for nonlinear integral equations', Journal of the ACM (JACM) 12(4), 547-560.
Cabay, S. and Jackson, L. [1976], 'A polynomial extrapolation method for finding limits and antilimits of vector sequences', SIAM Journal on Numerical Analysis 13(5), 734-752.
Defazio, A., Bach, F. and Lacoste-Julien, S. [2014], Saga: A fast incremental gradient method with support for non-strongly convex composite objectives, in 'Advances in Neural Information Processing Systems', pp. 1646-1654.
Défossez, A. and Bach, F. [2015], Averaged least-mean-squares: Bias-variance trade-offs and optimal sampling distributions, in 'Artificial Intelligence and Statistics', pp. 205-213.
Fercoq, O. and Qu, Z. [2016], 'Restarting accelerated gradient methods with a rough strong convexity estimate', arXiv preprint arXiv:1609.07358.
Flammarion, N. and Bach, F. [2015], From averaging to acceleration, there is only a step-size, in 'Conference on Learning Theory', pp. 658-695.
Golub, G. H. and Varga, R. S. [1961], 'Chebyshev semi-iterative methods, successive overrelaxation iterative methods, and second order Richardson iterative methods', Numerische Mathematik 3(1), 147-156.
Jain, P., Kakade, S. M., Kidambi, R., Netrapalli, P. and Sidford, A. [2016], 'Parallelizing stochastic approximation through mini-batching and tail-averaging', arXiv preprint arXiv:1610.03774.
Johnson, R. and Zhang, T. [2013], Accelerating stochastic gradient descent using predictive variance reduction, in 'Advances in Neural Information Processing Systems', pp. 315-323.
Lin, H., Mairal, J. and Harchaoui, Z. [2015], A universal catalyst for first-order optimization, in 'Advances in Neural Information Processing Systems', pp. 3384-3392.
Mešina, M. [1977], 'Convergence acceleration for the iterative solution of the equations X = AX + f', Computer Methods in Applied Mechanics and Engineering 10(2), 165-173.
Moulines, E. and Bach, F. R. [2011], Non-asymptotic analysis of stochastic approximation algorithms for machine learning, in 'Advances in Neural Information Processing Systems', pp. 451-459.
Nedić, A. and Bertsekas, D. [2001], Convergence rate of incremental subgradient algorithms, in 'Stochastic optimization: algorithms and applications', Springer, pp. 223-264.
Nesterov, Y. [2013], Introductory lectures on convex optimization: A basic course, Vol. 87, Springer Science & Business Media.
Schmidt, M., Le Roux, N. and Bach, F. [2013], 'Minimizing finite sums with the stochastic average gradient', Mathematical Programming pp. 1-30.
Scieur, D., d'Aspremont, A. and Bach, F. [2016], Regularized nonlinear acceleration, in 'Advances in Neural Information Processing Systems', pp. 712-720.
Shalev-Shwartz, S. and Zhang, T. [2013], 'Stochastic dual coordinate ascent methods for regularized loss minimization', Journal of Machine Learning Research 14(Feb), 567-599.
Shalev-Shwartz, S. and Zhang, T. [2014], Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, in 'ICML', pp. 64-72.
Prevention
Flavio P. Calmon
Harvard University
[email protected]
Dennis Wei
IBM Research AI
[email protected]
Karthikeyan Natesan Ramamurthy
IBM Research AI
[email protected]
Bhanukiran Vinzamuri
IBM Research AI
[email protected]
Kush R. Varshney
IBM Research AI
[email protected]
Abstract
Non-discrimination is a recognized objective in algorithmic decision making. In
this paper, we introduce a novel probabilistic formulation of data pre-processing
for reducing discrimination. We propose a convex optimization for learning a data
transformation with three goals: controlling discrimination, limiting distortion
in individual data samples, and preserving utility. We characterize the impact
of limited sample size in accomplishing this objective. Two instances of the
proposed optimization are applied to datasets, including one on real-world criminal
recidivism. Results show that discrimination can be greatly reduced at a small cost
in classification accuracy.
1
Introduction
Discrimination is the prejudicial treatment of an individual based on membership in a legally protected
group such as a race or gender. Direct discrimination occurs when protected attributes are used
explicitly in making decisions, also known as disparate treatment. More pervasive nowadays is
indirect discrimination, in which protected attributes are not used but reliance on variables correlated
with them leads to significantly different outcomes for different groups. The latter phenomenon is
termed disparate impact. Indirect discrimination may be intentional, as in the historical practice of
"redlining" in the U.S. in which home mortgages were denied in zip codes populated primarily by
minorities. However, the doctrine of disparate impact applies regardless of actual intent.
Supervised learning algorithms, increasingly used for decision making in applications of consequence,
may at first be presumed to be fair and devoid of inherent bias, but in fact, inherit any bias or discrimination present in the data on which they are trained [Calders and Žliobaitė, 2013]. Furthermore,
simply removing protected variables from the data is not enough since it does nothing to address
indirect discrimination and may in fact conceal it. The need for more sophisticated tools has made
discrimination discovery and prevention an important research area [Pedreschi et al., 2008].
Algorithmic discrimination prevention involves modifying one or more of the following to ensure
that decisions made by supervised learning methods are less biased: (a) the training data, (b) the
learning algorithm, and (c) the ensuing decisions themselves. These are respectively classified as
pre-processing [Hajian, 2013], in-processing [Fish et al., 2016, Zafar et al., 2016, Kamishima et al.,
2011] and post-processing approaches [Hardt et al., 2016]. In this paper, we focus on pre-processing
since it is the most flexible in terms of the data science pipeline: it is independent of the modeling
algorithm and can be integrated with data release and publishing mechanisms.
Researchers have also studied several notions of discrimination and fairness. Disparate impact is
addressed by the principles of statistical parity and group fairness [Feldman et al., 2015], which seek
similar outcomes for all groups. In contrast, individual fairness [Dwork et al., 2012] mandates that
similar individuals be treated similarly irrespective of group membership. For classifiers and other
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
predictive models, equal error rates for different groups are a desirable property [Hardt et al., 2016],
as is calibration or lack of predictive bias in the predictions [Zhang and Neill, 2016]. The tension
between the last two notions is described by Kleinberg et al. [2017] and Chouldechova [2016]; the
work of Friedler et al. [2016] is in a similar vein. Corbett-Davies et al. [2017] discuss the trade-offs
in satisfying prevailing notions of algorithmic fairness from a public safety standpoint. Since the
present work pertains to pre-processing and not modeling, balanced error rates and predictive bias are
less relevant criteria. Instead we focus primarily on achieving group fairness while also accounting
for individual fairness through a distortion constraint.
Existing pre-processing approaches include sampling or re-weighting the data to neutralize discriminatory effects [Kamiran and Calders, 2012], changing the individual data records [Hajian and
Domingo-Ferrer, 2013], and using t-closeness [Li et al., 2007] for discrimination control [Ruggieri,
2014]. A common theme is the importance of balancing discrimination control against utility of the
processed data. However, this prior work neither presents general and principled optimization frameworks for trading off these two criteria, nor allows connections to be made to the broader statistical
learning and information theory literature via probabilistic descriptions. Another shortcoming is that
individual distortion or fairness is not made explicit.
In this work, we (i) introduce a probabilistic framework for discrimination-preventing pre-processing in supervised learning, (ii) formulate an optimization problem for producing pre-processing transformations that trade off discrimination control, data utility, and individual distortion, (iii) characterize theoretical properties of the optimization approach (e.g. convexity, robustness to limited samples), and (iv) benchmark the ensuing pre-processing transformations on real-world datasets.
[Figure 1 appears here: original data {(Dᵢ, Xᵢ, Yᵢ)} is fed to a transformation p_{X̂,Ŷ|X,Y,D}, producing transformed data {(Dᵢ, X̂ᵢ, Ŷᵢ)} on which a predictive model p(Ŷ|X̂, D) is learned/applied; the transformation is designed for utility p_{X,Y} ≈ p_{X̂,Ŷ}, limited individual distortion (xᵢ, yᵢ) → (x̂ᵢ, ŷᵢ), and discrimination control between Ŷ and the discriminatory variable {Dᵢ}.]
Figure 1: The proposed pipeline for predictive learning with discrimination prevention. Learn mode applies with training data and apply mode with novel test data. Note that test data also requires transformation before predictions can be obtained.
Our aim in part is
to work toward a more unified view of existing
pre-processing concepts and methods, which may help to suggest refinements. While discrimination
and utility are defined at the level of probability distributions, distortion is controlled on a per-sample
basis, thereby limiting the effect of the transformation on individuals and ensuring a degree of
individual fairness. Figure 1 illustrates the supervised learning pipeline that includes our proposed
discrimination-preventing pre-processing.
i
i
i
X,Y
i
i
? Y?
X,
i
i
i
i
i
i
i
The work of Zemel et al. [2013] is closest to ours in also presenting a framework with three criteria
related to discrimination control (group fairness), individual fairness, and utility. However, the
criteria are manifested less directly than in our proposal. Discrimination control is posed in terms of
intermediate features rather than outcomes, individual distortion does not take outcomes into account
(being an ℓ2-norm between original and transformed features), and utility is specific to a particular
classifier. Our formulation more naturally and generally encodes these fairness and utility desiderata.
Given the novelty of our formulation, we devote more effort than usual to discussing its motivations
and potential variations. We state conditions under which the proposed optimization problem is
convex. The optimization assumes as input an estimate of the distribution of the data which, in
practice, can be imprecise due to limited sample size. Accordingly, we characterize the possible
degradation in discrimination and utility guarantees at test time in terms of the training sample
size. To demonstrate our framework, we apply specific instances of it to a prison recidivism dataset
[ProPublica, 2017] and the UCI Adult dataset [Lichman, 2013]. We show that discrimination,
distortion, and utility loss can be controlled simultaneously with real data. We also show that the preprocessed data reduces discrimination when training standard classifiers, particularly when compared
to the original data with and without removing protected variables. In the Supplementary Material
(SM), we describe in more detail the resulting transformations and the demographic patterns that they
reveal.
2
General Formulation
We are given a dataset consisting of n i.i.d. samples {(Dᵢ, Xᵢ, Yᵢ)}ᵢ₌₁ⁿ from a joint distribution p_{D,X,Y} with domain D × X × Y. Here D denotes one or more protected (discriminatory) variables such as gender and race, X denotes other non-protected variables used for decision making, and Y is an outcome random variable. We use the term "discriminatory" interchangeably with "protected," and not in the usual statistical sense. For instance, Yᵢ could represent a loan approval decision for individual i based on demographic information Dᵢ and credit score Xᵢ. We focus in this paper on discrete (or discretized) and finite domains D and X and binary outcomes, i.e. Y = {0, 1}. There is no restriction on the dimensions of D and X.
Our goal is to determine a randomized mapping p_{X̂,Ŷ|X,Y,D} that (i) transforms the given dataset into a new dataset {(Dᵢ, X̂ᵢ, Ŷᵢ)}ᵢ₌₁ⁿ which may be used to train a model, and (ii) similarly transforms data to which the model is applied, i.e. test data. Each (X̂ᵢ, Ŷᵢ) is drawn independently from the same domain X × Y as X, Y by applying p_{X̂,Ŷ|X,Y,D} to the corresponding triplet (Dᵢ, Xᵢ, Yᵢ). Since Dᵢ is retained as-is, we do not include it in the mapping to be determined. Motivation for retaining D is discussed later in Section 3. For test samples, Yᵢ is not available at the input while Ŷᵢ may not be needed at the output. In this case, a reduced mapping p_{X̂|X,D} is used as given later in (9).
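For intuition, applying such a randomized mapping over finite domains amounts to sampling (X̂ᵢ, Ŷᵢ) from a conditional probability table indexed by (Dᵢ, Xᵢ, Yᵢ), while Dᵢ passes through unchanged. A toy sketch (the domain sizes and the random table below are illustrative placeholders, not a learned transformation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_d, n_x, n_y = 2, 3, 2                 # toy finite domains for D, X, Y
# p_map[d, x, y] is a distribution over the n_x * n_y possible (xhat, yhat) pairs
p_map = rng.dirichlet(np.ones(n_x * n_y), size=(n_d, n_x, n_y))

def transform(d, x, y):
    """Sample (xhat, yhat) from p(Xhat, Yhat | X = x, Y = y, D = d); D is kept as-is."""
    j = rng.choice(n_x * n_y, p=p_map[d, x, y])
    return j // n_y, j % n_y            # decode the flat index into (xhat, yhat)

xhat, yhat = transform(1, 2, 0)
```

At test time only the reduced mapping p(X̂ | X, D) would be needed, i.e. an analogous table with Y absent from the input and Ŷ absent from the output.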
It is assumed that p_{D,X,Y} is known along with its marginals and conditionals. This assumption is often satisfied using the empirical distribution of {(Dᵢ, Xᵢ, Yᵢ)}ᵢ₌₁ⁿ. In Section 3, we state a result ensuring that discrimination and utility loss continue to be controlled if the distribution used to determine p_{X̂,Ŷ|X,Y,D} differs from the distribution of test samples.
We propose that the mapping p_{X̂,Ŷ|X,Y,D} satisfy the three following properties.
I. Discrimination Control. The first objective is to limit the dependence of the transformed outcome Ŷ on the protected variables D. We propose two alternative formulations. The first requires the conditional distribution p_{Ŷ|D} to be close to a target distribution p_{Y_T} for all values of D,
$$J\big(p_{\hat Y|D}(y|d),\, p_{Y_T}(y)\big) \le \epsilon_{y,d} \quad \forall\, d \in \mathcal{D},\ y \in \{0, 1\}, \quad (1)$$
where J(·, ·) denotes some distance function. In the second formulation, we constrain the conditional probability p_{Ŷ|D} to be similar for any two values of D:
$$J\big(p_{\hat Y|D}(y|d_1),\, p_{\hat Y|D}(y|d_2)\big) \le \epsilon_{y,d_1,d_2} \quad \forall\, d_1, d_2 \in \mathcal{D},\ y \in \{0, 1\}. \quad (2)$$
Note that the number of such constraints is O(|D|²) as opposed to O(|D|) constraints in (1). The choice of p_{Y_T} in (1), and of J and ε in (1) and (2), should be informed by societal aspects, consultations with domain experts and stakeholders, and legal considerations such as the "80% rule" [EEOC, 1979]. For this work, we choose J to be the following probability ratio measure:

J(p, q) = |p/q − 1|.   (3)

This metric is motivated by the "80% rule." The combination of (3) and (1) generalizes the extended lift criterion proposed in the literature [Pedreschi et al., 2012], while the combination of (3) and (2) generalizes selective and contrastive lift. The latter combination (2), (3) is used in the numerical results in Section 4. We note that the selection of a "fair" target distribution p_{Y_T} in (1) is not straightforward; see Žliobaitė et al. [2011] for one such proposal. Despite its practical motivation, we alert the reader that (3) may be unnecessarily restrictive when q is low.
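As a concrete illustration, the ratio measure (3) and the pairwise constraint (2) can be checked in a few lines of Python. This is a sketch under the assumption that the conditional probabilities p_{Ŷ|D}(1|d) have already been estimated; the group names and values are hypothetical.

```python
from itertools import permutations

def J(p, q):
    # Ratio measure (3): |p/q - 1|, motivated by the "80% rule".
    return abs(p / q - 1.0)

def pairwise_discrimination(p1_given_d):
    # Constraint (2) for y = 1: largest J over ordered pairs of groups
    # (J is asymmetric, so both orders matter). For binary Y the y = 0
    # probabilities would be checked the same way.
    return max(J(p1_given_d[d1], p1_given_d[d2])
               for d1, d2 in permutations(p1_given_d, 2))

# Hypothetical conditional probabilities Pr(Yhat = 1 | D = d):
groups = {"group_a": 0.40, "group_b": 0.50}
print(pairwise_discrimination(groups))  # max(|.4/.5 - 1|, |.5/.4 - 1|)
```

A constraint of the form (2) then simply requires this maximum to be at most ε for every outcome value.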
In (1) and (2), discrimination control is imposed jointly with respect to all protected variables, e.g.
all combinations of gender and race if D consists of those two variables. An alternative is to take
the protected variables one at a time, and impose univariate discrimination control. In this work, we
opt for the more stringent joint discrimination control, although legal formulations tend to be of the
univariate type.
Formulations (1) and (2) control discrimination at the level of the overall population in the dataset. It is also possible to control discrimination within segments of the population by conditioning on additional variables B, where B is a subset of X and X is a collection of features. Constraint (1) would then generalize to J(p_{Ŷ|D,B}(y|d, b), p_{Y_T|B}(y|b)) ≤ ε_{y,d,b} for all d ∈ D, y ∈ {0, 1}, and b ∈ B. Similar conditioning or "context" for discrimination has been explored before in Hajian and Domingo-Ferrer [2013] in the setting of association rule mining. For example, B could represent the fraction of a pool of applicants that applied to a certain department, which enables the metric to avoid statistical traps such as Simpson's paradox [Pearl, 2014]. One may wish to control for such
variables in determining the presence of discrimination, while ensuring that population segments
created by conditioning are large enough to derive statistically valid inferences. Moreover, we note
that there may exist inaccessible latent variables that drive discrimination, and the metrics used here
are inherently limited by the available data. Recent definitions of fairness that seek to mitigate
this issue include [Johnson et al., 2016] and [Kusner et al., 2017]. We defer further investigation of
causality and conditional discrimination to future work.
II. Distortion Control. The mapping p_{X̂,Ŷ|X,Y,D} should satisfy distortion constraints with respect to the domain X × Y. These constraints restrict the mapping to reduce or avoid altogether certain large changes (e.g. a very low credit score being mapped to a very high credit score). Given a distortion metric δ : (X × Y)² → R₊, we constrain the conditional expectation of the distortion as

E[δ((x, y), (X̂, Ŷ)) | D = d, X = x, Y = y] ≤ c_{d,x,y}  ∀ (d, x, y) ∈ D × X × Y.   (4)
We assume that δ((x, y), (x, y)) = 0 for all (x, y) ∈ X × Y. Constraint (4) is formulated with pointwise conditioning on (D, X, Y) = (d, x, y) in order to promote individual fairness. It ensures that distortion is controlled for every combination of (d, x, y), i.e. every individual in the original dataset, and more importantly, every individual to which a model is later applied. By way of contrast, an average-case measure in which an expectation is also taken over D, X, Y may result in high distortion for certain (d, x, y), likely those with low probability. Equation (4) also allows the level of control c_{d,x,y} to depend on (d, x, y) if desired. We also note that (4) is a property of the mapping p_{X̂,Ŷ|D,X,Y}, and does not depend on the assumed distribution p_{D,X,Y}.
The expectation over X̂, Ŷ in (4) encompasses several cases depending on the choices of the metric δ and thresholds c_{d,x,y}. If c_{d,x,y} = 0, then no mappings with nonzero distortion are allowed for individuals with original values (d, x, y). If c_{d,x,y} > 0, then certain mappings may still be disallowed by assigning them infinite distortion. Mappings with finite distortion are permissible subject to the budget c_{d,x,y}. Lastly, if δ is binary-valued (perhaps achieved by thresholding a multi-valued distortion function), it can be seen as classifying mappings into desirable (δ = 0) and undesirable ones (δ = 1). Here, (4) reduces to a bound on the conditional probability of an undesirable mapping, i.e.,

Pr(δ((x, y), (X̂, Ŷ)) = 1 | D = d, X = x, Y = y) ≤ c_{d,x,y}.   (5)
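A binary-valued distortion of the kind used in (5) can be sketched in a few lines of Python. The rules and category encodings below are illustrative assumptions of ours, not the functions used in the experiments of Section 4.

```python
def delta_binary(x, y, x_new, y_new):
    # Returns 1 for an undesirable mapping, 0 for a desirable one.
    # Undesirable (by assumption): the outcome is flipped from favorable (1)
    # to unfavorable (0), or any ordinal feature moves by more than one
    # category. x and x_new are tuples of ordinal category indices.
    if y == 1 and y_new == 0:
        return 1
    if any(abs(a - b) > 1 for a, b in zip(x, x_new)):
        return 1
    return 0

print(delta_binary((2, 0), 1, (3, 0), 1))  # adjacent jump only -> 0
print(delta_binary((2, 0), 1, (2, 2), 1))  # two-category jump -> 1
```

Plugged into (5), the constraint bounds the conditional probability that `delta_binary` evaluates to 1 under the randomized mapping.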
III. Utility Preservation. In addition to constraints on individual distortions, we also require that the distribution of (X̂, Ŷ) be statistically close to the distribution of (X, Y). This is to ensure that a model learned from the transformed dataset (when averaged over the protected variables D) is not too different from one learned from the original dataset, e.g. a bank's existing policy for approving loans. For a given dissimilarity measure Δ between probability distributions (e.g. KL-divergence), we require that Δ(p_{X̂,Ŷ}, p_{X,Y}) be small.
Optimization Formulation. Putting together the considerations from the three previous subsections, we arrive at the optimization problem below for determining a randomized transformation p_{X̂,Ŷ|X,Y,D} mapping each sample (D_i, X_i, Y_i) to (X̂_i, Ŷ_i):

min_{p_{X̂,Ŷ|X,Y,D}}  Δ(p_{X̂,Ŷ}, p_{X,Y})

s.t.  J(p_{Ŷ|D}(y|d), p_{Y_T}(y)) ≤ ε_{y,d} and
      E[δ((x, y), (X̂, Ŷ)) | D = d, X = x, Y = y] ≤ c_{d,x,y}  ∀ (d, x, y) ∈ D × X × Y,
      p_{X̂,Ŷ|X,Y,D} is a valid distribution.   (6)
We choose to minimize the utility loss Δ subject to constraints on individual distortion (4) and discrimination (we use (1) for concreteness, but (2) can be used instead), since it is more natural to place bounds on the latter two.
The distortion constraints (4) are an essential component of the problem formulation (6). Without (4) and assuming that p_{Y_T} = p_Y, it is possible to achieve perfect utility and non-discrimination simply by sampling (X̂_i, Ŷ_i) from the original distribution p_{X,Y} independently of any inputs, i.e. p_{X̂,Ŷ|X,Y,D}(x̂, ŷ|x, y, d) = p_{X̂,Ŷ}(x̂, ŷ) = p_{X,Y}(x̂, ŷ). Then Δ(p_{X̂,Ŷ}, p_{X,Y}) = 0, and p_{Ŷ|D}(y|d) = p_{Ŷ}(y) = p_Y(y) = p_{Y_T}(y) for all d ∈ D. Clearly, this solution is objectionable from the viewpoint of individual fairness, especially for individuals to whom a subsequent model is applied, since it amounts to discarding an individual's data and replacing it with a random sample from the population p_{X,Y}. Constraint (4) seeks to prevent such gross deviations from occurring. The distortion constraints may, however, render the optimization infeasible, as illustrated in the SM.
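This degenerate solution is easy to verify numerically. With a hypothetical uniform joint distribution, independent resampling from p_{X,Y} achieves zero total variation (and trivially zero discrimination), yet it assigns an individual with Y = 1 an expected outcome distortion of p_Y(0):

```python
# Joint p(d, x, y) on binary D, X, Y (hypothetical uniform numbers).
D, X, Y = (0, 1), (0, 1), (0, 1)
p = {(d, x, y): 0.125 for d in D for x in X for y in Y}
p_xy = {(x, y): sum(p[(d, x, y)] for d in D) for x in X for y in Y}

# Degenerate mapping: ignore (d, x, y) and resample (xh, yh) ~ p_xy.
# The induced joint over (xh, yh) is exactly p_xy, so utility loss is zero:
tv = 0.5 * sum(abs(p_xy[k] - p_xy[k]) for k in p_xy)

# ...but for an individual with Y = 1, the expected outcome distortion
# (delta = 1 whenever the outcome changes) is Pr(Yh = 0) = p_Y(0):
exp_dist = sum(q for (x, y), q in p_xy.items() if y == 0)
print(tv, exp_dist)  # 0.0 0.5
```

A pointwise constraint such as (4) with c_{d,x,y} below 0.5 would rule this mapping out.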
3 Theoretical Properties
I. Convexity. We show conditions under which (6) is a convex or quasiconvex optimization problem, and can thus be solved to optimality. The proof is presented in the SM.

Proposition 1. Problem (6) is a (quasi)convex optimization if Δ(·, ·) is (quasi)convex and J(·, ·) is quasiconvex in their respective first arguments (with the second arguments fixed). If discrimination constraint (2) is used in place of (1), then the condition on J is that it be jointly quasiconvex in both arguments.
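As a quick sanity check (ours, not stated in the paper), the ratio measure (3) satisfies the condition of Proposition 1 because its sublevel sets in the first argument are intervals, hence convex:

```latex
\{\, p : J(p, q) \le t \,\} = \{\, p : |p/q - 1| \le t \,\}
  = \bigl[\, q(1 - t),\; q(1 + t) \,\bigr] \quad \text{for } q > 0,\ t \ge 0,
```

so J(·, q) is quasiconvex in p, and with Δ chosen as the (convex) total variation distance, problem (6) is convex.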
II. Generalizability of Discrimination Control. We now discuss the generalizability of discrimination guarantees (1) and (2) to unseen individuals, i.e. those to whom a model is applied. Recall
from Section 2 that the proposed transformation retains the protected variables D. We first consider
the case where models trained on the transformed data to predict Y? are allowed to depend on D.
While such models may qualify as disparate treatment, the intent and effect is to better mitigate
disparate impact resulting from the model. In this respect our proposal shares the same spirit with
"fair" affirmative action in Dwork et al. [2012] (fairer on account of distortion constraint (4)).
Assuming that predictive models for Ŷ can depend on D, let Ỹ be the output of such a model based on D and X̂. To remove the separate issue of model accuracy, suppose for simplicity that the model provides a good approximation to the conditional distribution of Ŷ, i.e. p_{Ỹ|X̂,D}(ỹ|x̂, d) ≈ p_{Ŷ|X̂,D}(ỹ|x̂, d). Then for individuals in a protected group D = d, the conditional distribution of Ỹ is given by

p_{Ỹ|D}(ỹ|d) = Σ_{x̂} p_{Ỹ|X̂,D}(ỹ|x̂, d) p_{X̂|D}(x̂|d) ≈ Σ_{x̂} p_{Ŷ|X̂,D}(ỹ|x̂, d) p_{X̂|D}(x̂|d) = p_{Ŷ|D}(ỹ|d).   (7)

Hence the model output p_{Ỹ|D} can also be controlled by (1) or (2).
On the other hand, if D must be suppressed from the transformed data, perhaps to comply with legal requirements regarding its non-use, then a predictive model can depend only on X̂ and approximate p_{Ŷ|X̂}, i.e. p_{Ỹ|X̂,D}(ỹ|x̂, d) = p_{Ỹ|X̂}(ỹ|x̂) ≈ p_{Ŷ|X̂}(ỹ|x̂). In this case we have

p_{Ỹ|D}(ỹ|d) ≈ Σ_{x̂} p_{Ŷ|X̂}(ỹ|x̂) p_{X̂|D}(x̂|d),   (8)

which in general is not equal to p_{Ŷ|D}(ỹ|d) in (7). The quantity on the right-hand side of (8) is less straightforward to control. We address this question in the SM.
III. Training and Application Considerations. The proposed optimization framework has two modes of operation (Fig. 1): train and apply. In train mode, the optimization problem (6) is solved in order to determine a mapping p_{X̂,Ŷ|X,Y,D} for randomizing the training set. The randomized training set, in turn, is used to fit a classification model f_θ(X̂, D) that approximates p_{Ŷ|X̂,D}, where θ are the parameters of the model. At apply time, a new data point (X, D) is received and transformed into (X̂, D) through a randomized mapping p_{X̂|X,D}. The mapping p_{X̂|D,X} is given by marginalizing over Y, Ŷ:

p_{X̂|D,X}(x̂|d, x) = Σ_{y,ŷ} p_{X̂,Ŷ|X,Y,D}(x̂, ŷ|x, y, d) p_{Y|X,D}(y|x, d).   (9)

Assuming that the variable D is not suppressed, and that the marginals are known, then the utility and discrimination guarantees set during train time still hold during apply time, as discussed above.
However, the distortion control will inevitably change, since the mapping has been marginalized over Y. More specifically, the bound on the expected distortion for each sample becomes

E[ E[δ((x, Y), (X̂, Ŷ)) | D = d, X = x, Y] | D = d, X = x ] ≤ Σ_{y∈Y} p_{Y|X,D}(y|x, d) c_{x,y,d} ≜ c_{x,d}.   (10)

If the distortion control values c_{x,y,d} are independent of y, then the upper bound on distortion set during training time still holds during apply time. Otherwise, (10) provides a bound on individual distortion at apply time. The same guarantee holds for the case when D is suppressed.
IV. Robustness to Mismatched Prior Distribution Estimation. We may also consider the case where the distribution p_{D,X,Y} used to determine the transformation differs from the distribution q_{D,X,Y} of test samples. This occurs, for example, when p_{D,X,Y} is the empirical distribution computed from n i.i.d. samples from an unknown distribution q_{D,X,Y}. In this situation, discrimination control and utility are still guaranteed for samples drawn from q_{D,X,Y} that are transformed using p_{Ŷ,X̂|X,Y,D}, where the latter is obtained by solving (6) with p_{D,X,Y}. In particular, denoting by q_{Ŷ|D} and q_{X̂,Ŷ} the corresponding distributions for Ŷ, X̂ and D when q_{D,X,Y} is transformed using p_{Ŷ,X̂|X,Y,D}, we have J(p_{Ŷ|D}(y|d), p_{Y_T}(y)) ≈ J(q_{Ŷ|D}(y|d), p_{Y_T}(y)) and Δ(p_{X,Y}, p_{X̂,Ŷ}) ≈ Δ(q_{X,Y}, q_{X̂,Ŷ}) for n sufficiently large (the distortion control constraints (4) only depend on p_{Ŷ,X̂|X,Y,D}). The next proposition provides an estimate of the rate of this convergence in terms of n, assuming p_{Y,D}(y, d) is fixed and bounded away from zero. Its proof can be found in the SM.
Proposition 2. Let p_{D,X,Y} be the empirical distribution obtained from n i.i.d. samples that is used to determine the mapping p_{Ŷ,X̂|X,Y,D}, and q_{D,X,Y} be the true distribution of the data, with support size m ≜ |X × Y × D|. In addition, denote by q_{D,X̂,Ŷ} the joint distribution after applying p_{Ŷ,X̂|X,Y,D} to samples from q_{D,X,Y}. If for all y ∈ Y, d ∈ D we have p_{Y,D}(y, d) > 0, J(p_{Ŷ|D}(y|d), p_{Y_T}(y)) ≤ ε, where J is given in (3), and

Δ(p_{X,Y}, p_{X̂,Ŷ}) = Σ_{x,y} |p_{X,Y}(x, y) − p_{X̂,Ŷ}(x, y)| ≤ μ,

then, with probability 1 − β,

max{ J(q_{Ŷ|D}(y|d), p_{Y_T}(y)) − ε, Δ(q_{X,Y}, q_{X̂,Ŷ}) − μ } ≲ √( (m/n) log(1 + n/m) + (1/n) log(1/β) ).   (11)
Proposition 2 guarantees that, as long as n is sufficiently large, the utility and discrimination control guarantees will approximately hold when p_{X̂,Ŷ|Y,X,D} is applied to fresh samples drawn from q_{D,X,Y}. In particular, the utility and discrimination guarantees will converge to the ones used as parameters in the optimization at a rate that is at least √((1/n) log n). The distortion control guarantees (4) are a property of the mapping p_{X̂,Ŷ|Y,X,D}, and do not depend on the distribution of the data. The convergence rate is tied to the support size, and for large m a dimensionality reduction step may be required to assuage generalization issues. The same upper bound on convergence rate holds for discrimination constraints of the form (2).
4 Experimental Results
This section provides a numerical demonstration of running the data processing pipeline in Fig. 1. Our focus here is on the discrimination-accuracy trade-off obtained when the pre-processed data is used to train standard prediction algorithms. The SM presents additional results on the trade-off between discrimination control and utility Δ as well as an analysis of the optimized data transformations.
We apply the pipeline to ProPublica's COMPAS recidivism data [ProPublica, 2017] and the UCI Adult dataset [Lichman, 2013]. From the COMPAS dataset (7214 instances), we select severity of charge, number of prior crimes, and age category to be the decision variables (X). The outcome variable (Y) is a binary indicator of whether the individual recidivated (re-offended), and race is set to be the protected variable (D). The encoding of categorical variables is described in the SM. For the Adult dataset (32561 instances), the features were categorized as protected variables (D):
gender (male, female); decision variables (X): age (quantized to decades) and education (quantized
to years); and response variable (Y ): income (binary).
Our proposed approach is benchmarked against two baselines, leaving the dataset as-is and suppressing the protected variable D during training and testing. We also compare against the learning
fair representations (LFR) algorithm from Zemel et al. [2013]. As discussed in the introduction,
LFR has fundamental differences from the proposed framework. In particular, LFR only considers
binary-valued D, and consequently, we restrict D to be binary in the experiments presented here.
However, our method is not restricted to D being binary or univariate. Illustrations of our method on
non-binary D are provided in the SM.
The details of applying our method to the datasets are as follows. For each train/test split, we approximate p_{D,X,Y} using the empirical distribution of (D, X, Y) in the training set and solve (6) using a standard convex solver [Diamond and Boyd, 2016]. For both datasets the utility metric Δ is the total variation distance, i.e.

Δ(p_{X,Y}, p_{X̂,Ŷ}) = (1/2) Σ_{x,y} |p_{X,Y}(x, y) − p_{X̂,Ŷ}(x, y)|,

the discrimination constraint is the combination of (2) and (3), and two levels of discrimination control are used, ε ∈ {0.05, 0.1}. The distortion function δ is chosen differently for the two datasets as described below, based on the differing semantics of the variables in the two applications. The specific values were chosen for demonstration purposes to be reasonable to our judgment and can easily be tuned according to the desires of a practitioner. We emphasize that the distortion values were not selected to optimize the results presented here. All experiments run in minutes on a standard laptop.
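The total variation utility metric Δ above is easy to compute from two empirical joint distributions. A sketch with hypothetical samples:

```python
from collections import Counter

def total_variation(samples_a, samples_b):
    # Delta(p_a, p_b) = 0.5 * sum over the union of supports of
    # |p_a(x, y) - p_b(x, y)|, with p_a, p_b the empirical distributions.
    pa, pb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    return 0.5 * sum(abs(pa[k] / na - pb[k] / nb) for k in set(pa) | set(pb))

# Hypothetical (x, y) samples before and after a transformation:
a = [("low", 0)] * 6 + [("high", 1)] * 4
b = [("low", 0)] * 5 + [("high", 1)] * 5
print(round(total_variation(a, b), 3))  # 0.5 * (0.1 + 0.1) = 0.1
```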
Distortion function for COMPAS: We use the expected distortion constraint in (4) with c_{d,x,y} = 0.4, 0.3 for d being respectively African-American and Caucasian. The distortion function δ has the following behavior. Jumps of more than one category in age and prior counts are heavily discouraged by a high distortion penalty (10⁴) for such transformations. We impose the same penalty on increases in recidivism (change of Y from 0 to 1). Both these choices are made in the interest of individual fairness. Furthermore, for every jump to an adjacent category for age and prior counts, a penalty of 1 is assessed, and a similar jump in charge degree incurs a penalty of 2. Reduction in recidivism (1 to 0) has a penalty of 2. The total distortion for each individual is the sum of squares of distortions for each attribute of X.
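Our reading of this distortion function can be sketched as follows. The category encodings and the exact composition of penalties (in particular, how the outcome penalty combines with the squared attribute penalties) are our interpretation of the description above; the precise specification is in the SM.

```python
BIG = 10**4  # effectively forbids a transformation

def compas_distortion(x, x_new, y, y_new):
    # x = (charge_degree, priors_category, age_category), ordinal indices.
    charge, priors, age = x
    charge_n, priors_n, age_n = x_new
    # Jumps of more than one category in age or prior counts, or an
    # increase in recidivism (y: 0 -> 1), are heavily penalized.
    if abs(age_n - age) > 1 or abs(priors_n - priors) > 1 or (y == 0 and y_new == 1):
        return BIG
    per_attr = [
        1 if age_n != age else 0,        # adjacent age jump: penalty 1
        1 if priors_n != priors else 0,  # adjacent priors jump: penalty 1
        2 if charge_n != charge else 0,  # charge-degree jump: penalty 2
    ]
    y_pen = 2 if (y == 1 and y_new == 0) else 0  # reduced recidivism
    # Sum of squares of per-attribute distortions; the outcome penalty is
    # added linearly here, which is an assumption on our part.
    return sum(v * v for v in per_attr) + y_pen

print(compas_distortion((0, 2, 1), (1, 2, 1), 1, 1))  # charge jump: 4
print(compas_distortion((0, 2, 1), (0, 0, 1), 1, 1))  # two-category priors jump: 10000
```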
Distortion function for Adult: We use three conditional probability constraints of the form in (5). In
constraint i, the distortion function returns 1 in case (i) and 0 otherwise: (1) if income is decreased,
age is not changed and education is increased by at most 1 year, (2) if age is changed by a decade
and education is increased by at most 1 year regardless of the change of income, (3) if age is
changed by more than a decade or education is lowered by any amount or increased by more than 1
year. The corresponding probability bounds c_{d,x,y} are 0.1, 0.05, 0 (no dependence on d, x, y). As a
consequence, and in the same broad spirit as for COMPAS, decreases in income, small changes in
age, and small increases in education (events (1), (2)) are permitted with small probabilities, while
larger changes in age and education (event (3)) are not allowed at all.
Once the optimized randomized mapping p_{X̂,Ŷ|D,X,Y} is determined, we apply it to the training set to obtain a new perturbed training set, which is then used to fit two classifiers: logistic regression (LR) and random forest (RF). For the test set, we first compute the test-time mapping p_{X̂|D,X} in (9) using p_{X̂,Ŷ|D,X,Y} and p_{D,X,Y} estimated from the training set. We then independently randomize each test sample (d_i, x_i) using p_{X̂|D,X}, preserving the protected variable D, i.e. (d_i, x_i) → (d_i, x̂_i). Each trained classifier f is applied to the transformed test samples, obtaining an estimate ỹ_i = f(d_i, x̂_i) which is evaluated against y_i. These estimates induce an empirical posterior distribution given by p_{Ỹ|D}(1|d) = (1/n_d) Σ_{{(x̂_i,d_i)}: d_i = d} f(d_i, x̂_i), where n_d is the number of samples with d_i = d.
For the two baselines, the above procedure is repeated without data transformation except for dropping
D throughout for the second baseline (D is still used to compute the discrimination of the resulting
classifier). Due to the lack of available code, we implemented LFR ourselves in Python and solved
the associated optimization problem using the SciPy package. The parameters for LFR were set as
recommended in Zemel et al. [2013]: Az = 50 (group fairness), Ax = 0.01 (individual fairness), and
Ay = 1 (prediction accuracy). The results did not significantly change within a reasonable variation
of these three parameters.
[Figure 2 plots: four discrimination-AUC panels comparing the untransformed classifier, the classifier with D dropped, LFR, and our approach with ε = 0.05 and ε = 0.1.]
Figure 2: Discrimination-AUC plots for two different classifiers. Top row is for COMPAS dataset, and bottom
row for UCI Adult dataset. First column is logistic regression (LR), and second column is random forests (RF).
Results. We report the trade-off between two metrics: (i) the empirical discrimination of the classifier on the test set, given by max_{d,d'∈D} J(p_{Ỹ|D}(1|d), p_{Ỹ|D}(1|d')), and (ii) the empirical accuracy, measured by the Area under ROC (AUC) of ỹ_i = f(d_i, x̂_i) compared to y_i, using 5-fold cross validation.
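Given per-group binary predictions on the test set, the empirical posterior and this discrimination metric reduce to a few lines (variable names are ours; a group whose posterior is zero would need separate handling to avoid division by zero):

```python
from itertools import permutations

def empirical_discrimination(preds_by_group):
    # preds_by_group maps each protected value d to the list of binary
    # predictions f(d_i, xh_i) for test samples with d_i = d.
    # Empirical posterior p(Ytilde = 1 | d), then max pairwise ratio measure.
    post = {d: sum(p) / len(p) for d, p in preds_by_group.items()}
    return max(abs(post[d1] / post[d2] - 1.0)
               for d1, d2 in permutations(post, 2))

preds = {"a": [1, 0, 1, 1], "b": [1, 0, 0, 1]}  # hypothetical predictions
print(empirical_discrimination(preds))  # posteriors 0.75 vs 0.5 -> 0.5
```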
Fig. 2 presents the operating points achieved by each procedure in the discrimination-accuracy space
as measured by these metrics. For the COMPAS dataset, there is significant discrimination in the
original dataset, which is reflected by both LR and RF when the data is not transformed. Dropping the
D variable reduces discrimination with a negligible impact on classification. However discrimination
is far from removed since the features X are correlated with D, i.e. there is indirect discrimination.
LFR with the recommended parameters is successful in further reducing discrimination while still
achieving high prediction performance for the task.
Our proposed optimized pre-processing approach successfully decreases the empirical discrimination
close to the target values (x-axis). Deviations are expected due to the approximation of Ŷ, the output of the transformation, by Ỹ, the output of each classifier, and also due to the randomized nature of
the method. The decreased discrimination comes at an accuracy cost, which is greater in this case
than for LFR. A possible explanation is that LFR is free to search across different representations
whereas our method is restricted by the chosen distortion metric and having to preserve the domain of
the original variables. For example, for COMPAS we heavily penalize increases in recidivism from 0
to 1 as well as large changes in prior counts and age. When combined with the other constraints in
the optimization, this may alter the joint distribution after perturbation and by extension the classifier
output. Increased accuracy could be obtained by relaxing the distortion constraint, as long as this
is acceptable to the practitioner. We highlight again that our distortion metric was not chosen to
explicitly optimize performance on this task, and should be guided by the practitioner. Nevertheless,
we do successfully obtain a controlled reduction of discrimination while avoiding unwanted deviations
in the randomized mapping.
For the Adult dataset, dropping the protected variable does significantly reduce discrimination, in
contrast with COMPAS. Our method further reduces discrimination towards the target values. The
loss of prediction performance is again due to satisfying the distortion and discrimination constraints.
On the other hand, LFR with the recommended parameters provides only a small reduction in
discrimination. We note that this does not contradict the results in Zemel et al. [2013], since here we
have adopted a multiplicative discrimination metric (3) whereas Zemel et al. [2013] used an additive
metric. Moreover, we reduced the Adult dataset to 31 binary features which is different from Zemel
et al. [2013] where they additionally considered the test dataset for Adult (12661 instances) also and
created 103 binary features. By varying the LFR parameters, it is possible to attain low empirical
discrimination but with a large loss in prediction performance (below the plotted range). Thus, we
do not claim that our method outperforms LFR since different operating points can be achieved by
adjusting parameters in either approach. In our approach, however, individual fairness is explicitly maintained through the design of the distortion metric and discrimination is controlled directly by a single parameter ε, whereas the relationship is less clear with LFR.
5 Conclusions
We proposed a flexible, data-driven optimization framework for probabilistically transforming data in
order to reduce algorithmic discrimination, and applied it to two datasets. When used to train standard
classifiers, the transformed dataset led to a fairer classification when compared to the original dataset.
The reduction in discrimination comes at an accuracy penalty due to the restrictions imposed on the
randomized mapping. Moreover, our method is competitive with others in the literature, with the
added benefit of enabling an explicit control of individual fairness and the possibility of multivariate,
non-binary protected variables. The flexibility of the approach allows numerous extensions using
different measures and constraints for utility preservation, discrimination, and individual distortion
control. Investigating such extensions, developing theoretical characterizations based on the proposed
framework, and quantifying the impact of the transformations on additional supervised learning tasks
will be pursued in future work.
References
T. Calders and I. Žliobaitė. Why unbiased computational processes can lead to discriminative decision procedures. In Discrimination and Privacy in the Information Society, pages 43–57. Springer, 2013.

A. Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint arXiv:1610.07524, 2016.

S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. arXiv preprint arXiv:1701.08230, 2017.

S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.

C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226. ACM, 2012.

T. U. EEOC. Uniform guidelines on employee selection procedures. https://www.eeoc.gov/policy/docs/qanda_clarify_procedures.html, Mar. 1979.

M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and removing disparate impact. In Proc. ACM SIGKDD Int. Conf. Knowl. Disc. Data Min., pages 259–268, 2015.

B. Fish, J. Kun, and Á. D. Lelkes. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the SIAM International Conference on Data Mining, pages 144–152. SIAM, 2016.

S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian. On the (im)possibility of fairness. arXiv preprint arXiv:1609.07236, 2016.

S. Hajian. Simultaneous Discrimination Prevention and Privacy Protection in Data Publishing and Mining. PhD thesis, Universitat Rovira i Virgili, 2013. Available online: https://arxiv.org/abs/1306.6805.

S. Hajian and J. Domingo-Ferrer. A methodology for direct and indirect discrimination prevention in data mining. IEEE Trans. Knowl. Data Eng., 25(7):1445–1459, 2013.

M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Adv. Neur. Inf. Process. Syst. 29, pages 3315–3323, 2016.

K. D. Johnson, D. P. Foster, and R. A. Stine. Impartial predictive modeling: Ensuring fairness in arbitrary models. arXiv preprint arXiv:1608.00528, 2016.

F. Kamiran and T. Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2012.

T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware learning through regularization approach. In Data Mining Workshops (ICDMW), IEEE 11th International Conference on, pages 643–650. IEEE, 2011.

J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores. In Proc. Innov. Theoret. Comp. Sci., 2017.

M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva. Counterfactual fairness. arXiv preprint arXiv:1703.06856, 2017.

N. Li, T. Li, and S. Venkatasubramanian. t-closeness: Privacy beyond k-anonymity and l-diversity. In IEEE 23rd International Conference on Data Engineering, pages 106–115. IEEE, 2007.

M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

J. Pearl. Comment: Understanding Simpson's paradox. The American Statistician, 68(1):8–13, 2014.

D. Pedreschi, S. Ruggieri, and F. Turini. Discrimination-aware data mining. In Proc. ACM SIGKDD Int. Conf. Knowl. Disc. Data Min., pages 560–568. ACM, 2008.

D. Pedreschi, S. Ruggieri, and F. Turini. A study of top-k measures for discrimination discovery. In Proc. ACM Symp. Applied Comput., pages 126–131, 2012.

ProPublica. COMPAS Recidivism Risk Score Data and Analysis. https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis, 2017.

S. Ruggieri. Using t-closeness anonymity to control for non-discrimination. Trans. Data Privacy, 7(2):99–129, 2014.

M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. arXiv preprint arXiv:1610.08452, 2016.

R. Zemel, Y. L. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In Proc. Int. Conf. Mach. Learn., pages 325–333, 2013.

Z. Zhang and D. B. Neill. Identifying significant predictive bias in classifiers. In Proceedings of the NIPS Workshop on Interpretable Machine Learning in Complex Systems, 2016. Available online: https://arxiv.org/abs/1611.08292.

I. Žliobaitė, F. Kamiran, and T. Calders. Handling conditional discrimination. In Proc. IEEE Int. Conf. Data Mining, pages 992–1001, 2011.
restrict:2 reduce:3 regarding:1 kush:1 motivated:1 whether:1 utility:20 url:1 effort:1 penalty:6 render:1 vinzamuri:2 action:1 generally:1 clear:1 transforms:2 amount:2 kamiran:3 processed:2 category:3 reduced:3 http:5 exist:1 fish:2 estimated:1 per:1 discrete:1 disallowed:1 dropping:7 group:10 putting:1 reliance:1 threshold:1 nevertheless:1 achieving:2 drawn:3 loftus:1 changing:1 preprocessed:1 neither:1 approving:1 prevent:1 concreteness:1 fraction:1 year:4 sum:1 run:1 package:1 sakuma:1 arrive:1 place:2 reader:1 reasonable:2 throughout:1 wu:1 swersky:1 home:1 doc:1 decision:11 acceptable:1 bound:7 guaranteed:1 neill:2 n1d:1 fold:1 constraint:24 constrain:2 encodes:1 certifying:1 kleinberg:2 aspect:1 argument:3 min:3 optimality:1 px:45 recidivism:9 department:1 developing:1 according:1 neur:1 combination:6 across:1 increasingly:1 suppressed:3 kusner:2 making:5 restricted:2 pr:1 pipeline:5 taken:1 legal:3 equation:1 calder:5 discus:2 turn:1 mechanism:1 count:3 needed:1 instrument:1 demographic:2 adopted:1 available:5 generalizes:2 operation:1 apply:11 away:1 alternative:2 robustness:2 altogether:1 hajian:5 original:10 assumes:1 denotes:3 include:3 ensure:2 conceal:1 publishing:2 running:1 marginalized:1 top:2 opportunity:1 restrictive:1 especially:1 society:1 objective:3 question:1 quantity:1 occurs:2 added:1 randomize:1 dependence:2 usual:2 devote:1 discouraged:1 distance:2 separate:1 mapped:1 sci:1 denied:1 ensuing:2 whom:2 considers:1 bhanu:1 toward:1 fresh:1 minority:1 assuming:4 code:2 retained:1 pointwise:1 illustration:1 ratio:1 demonstration:2 relationship:1 innovation:1 kun:1 disparate:11 intent:2 design:1 guideline:1 policy:2 unknown:1 diamond:2 upper:2 datasets:6 sm:8 benchmark:1 finite:2 enabling:1 inevitably:1 situation:1 extended:1 severity:1 paradox:2 perturbation:1 arbitrary:1 criminal:1 required:1 kl:1 lfr:16 optimized:4 connection:1 crime:1 learned:2 pearl:2 nip:2 trans:2 address:2 adult:8 beyond:2 below:3 pattern:1 encompasses:1 rf:11 
including:1 max:1 explanation:1 event:2 treated:1 natural:1 indicator:1 valera:1 numerous:1 axis:1 irrespective:1 created:2 categorical:1 prior:6 literature:3 discovery:2 comply:1 python:2 understanding:1 determining:2 marginalizing:1 embedded:1 loss:5 highlight:1 huq:1 srebro:1 age:10 validation:1 awareness:1 degree:2 principle:1 thresholding:1 bank:1 viewpoint:1 classifying:1 share:1 balancing:2 ibm:8 row:2 cd:10 changed:3 parity:1 last:1 free:1 infeasible:1 dis:1 bias:6 side:1 mismatched:1 benefit:1 dimension:1 world:1 valid:2 preventing:2 made:5 refinement:1 collection:1 jump:3 preprocessing:1 historical:1 far:1 income:4 foster:1 qx:5 icdmw:1 turini:2 approximate:2 emphasize:1 contradict:1 varshney:1 ml:1 investigating:1 assumed:2 pierson:1 xi:10 corbett:2 discriminative:1 search:1 latent:1 triplet:1 protected:20 decade:3 why:1 additionally:1 learn:3 nature:1 ca:1 inherently:1 correlated:2 obtaining:1 forest:2 complex:1 zafar:2 domain:6 inherit:1 did:1 motivation:3 karthikeyan:1 nothing:1 fair:7 allowed:3 repeated:1 categorized:1 causality:1 fig:3 roc:1 theoret:1 quasiconvex:3 theme:1 explicit:2 wish:1 comput:1 tied:1 weighting:1 late:1 erties:1 removing:3 minute:1 specific:3 discarding:1 explored:1 closeness:3 trap:1 essential:1 workshop:2 importance:1 phd:1 dissimilarity:1 illustrates:1 budget:1 occurring:1 cx:3 led:1 simply:2 univariate:3 likely:1 desire:1 applies:2 chouldechova:2 gender:4 springer:1 kamishima:2 acm:5 prop:1 conditional:9 goal:2 formulated:1 consequently:1 quantifying:1 towards:1 price:1 change:8 loan:2 determined:2 infinite:1 reducing:2 specifically:1 except:1 degradation:1 total:2 experimental:1 select:1 support:2 pfor:1 latter:4 assessed:1 pertains:1 phenomenon:1 d1:3 avoiding:1 handling:1 |
YASS: Yet Another Spike Sorter
JinHyung Lee1 , David Carlson2 , Hooshmand Shokri1 , Weichi Yao1 , Georges Goetz3 , Espen Hagen4 ,
Eleanor Batty1 , EJ Chichilnisky3 , Gaute Einevoll5 , and Liam Paninski1
1 Columbia University, 2 Duke University, 3 Stanford University, 4 University of Oslo, 5 Norwegian University of Life Sciences
Abstract
Spike sorting is a critical first step in extracting neural signals from large-scale
electrophysiological data. This manuscript describes an efficient, reliable pipeline
for spike sorting on dense multi-electrode arrays (MEAs), where neural signals
appear across many electrodes and spike sorting currently represents a major
computational bottleneck. We present several new techniques that make dense MEA
spike sorting more robust and scalable. Our pipeline is based on an efficient multistage "triage-then-cluster-then-pursuit" approach that initially extracts only clean,
high-quality waveforms from the electrophysiological time series by temporarily
skipping noisy or "collided" events (representing two neurons firing synchronously).
This is accomplished by developing a neural network detection method followed
by efficient outlier triaging. The clean waveforms are then used to infer the set
of neural spike waveform templates through nonparametric Bayesian clustering.
Our clustering approach adapts a "coreset" approach for data reduction and uses
efficient inference methods in a Dirichlet process mixture model framework to
dramatically improve the scalability and reliability of the entire pipeline. The
"triaged" waveforms are then finally recovered with matching-pursuit deconvolution
techniques. The proposed methods improve on the state-of-the-art in terms of
accuracy and stability on both real and biophysically-realistic simulated MEA data.
Furthermore, the proposed pipeline is efficient, learning templates and clustering
faster than real-time for a ~500-electrode dataset, largely on a single CPU core.
1 Introduction
The analysis of large-scale multineuronal spike train data is crucial for current and future neuroscience
research. These analyses are predicated on the existence of reliable and reproducible methods that
feasibly scale to the increasing rate of data acquisition. A standard approach for collecting these data
is to use dense multi-electrode array (MEA) recordings followed by ?spike sorting? algorithms to
turn the obtained raw electrical signals into spike trains.
A crucial consideration going forward is the ability to scale to massive datasets: MEAs currently scale up to the order of 10^4 electrodes, but efforts are underway to increase this number to 10^6 electrodes^1.
At this scale any manual processing of the obtained data is infeasible. Therefore, automatic spike
sorting for dense MEAs has enjoyed significant recent attention [15, 9, 51, 24, 36, 20, 33, 12]. Despite
these efforts, spike sorting remains the major computational bottleneck in the scientific pipeline when
using dense MEAs, due both to the high computational cost of the algorithms and the human time
spent on manual postprocessing.
To accelerate progress on this critical scientific problem, our proposed methodology is guided by
several main principles. First, robustness is critical, since hand-tuning and post-processing is not
^1 DARPA Neural Engineering System Design program BAA-16-09
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Algorithm 1 Pseudocode for the complete proposed pipeline.
Input: time-series of electrophysiological data V ∈ R^{T×C}, locations ∈ R^3
[waveforms, timestamps] ← Detection(V)  % (Section 2.2)
% "Triage" noisy waveforms and collisions (Section 2.4):
[cleanWaveforms, cleanTimestamps] ← Triage(waveforms, timestamps)
% Build a set of representative waveforms and summary statistics (Section 2.5)
[representativeWaveforms, sufficientStatistics] ← coresetConstruction(cleanWaveforms)
% DP-GMM clustering via divide-and-conquer (Sections 2.6 and 2.7)
[{representativeWaveforms_i, sufficientStatistics_i}_{i=1,...}] ← splitIntoSpatialGroups(representativeWaveforms, sufficientStatistics, locations)
for i = 1, ... do  % Run efficient inference for the DP-GMM
    [clusterAssignments_i] ← SplitMergeDPMM(representativeWaveforms_i, sufficientStatistics_i)
end for
% Merge spatial neighborhoods and similar templates
[allClusterAssignments, templates] ← mergeTemplates({clusterAssignments_i}_{i=1,...}, {representativeWaveforms_i}_{i=1,...}, locations)
% Pursuit stage to recover collision and noisy waveforms
[finalTimestamps, finalClusterAssignments] ← deconvolution(templates)
return [finalTimestamps, finalClusterAssignments]
feasible at scale. Second, scalability is key. To feasibly process the oncoming data deluge, we use
efficient data summarizations wherever possible and focus computational power on the "hard cases,"
using cheap fast methods to handle easy cases. Next, the pipeline should be modular. Each stage in
the pipeline has many potential feasible solutions, and the pipeline is improved by rapidly iterating
and updating each stage as methodology develops further. Finally, prior information is leveraged
as much as possible; we share information across neurons, electrodes, and experiments in order to
extract information from the MEA datastream as efficiently as possible.
We will first outline the methodology that forms the core of our pipeline in Section 2.1, and then we
demonstrate the improvements in performance on simulated data and a 512-electrode recording in
Section 3. Further supporting results appear in the appendix.
2 Methods
2.1 Overview
The inputs to the pipeline are the band-pass filtered voltage recordings from all C electrodes and
their spatial layout, and the end result of the pipeline is the set of K (where K is determined by
the algorithm) binary neural spike trains, where a "1" in the time series reflects a neural action
potential from the kth neuron at the corresponding time point. The voltage signals are spatially
whitened prior to processing and are modeled as the superposition of action potentials and background
Gaussian noise [12]. Spatial whitening is performed by removing potential spikes using amplitude
thresholding and estimating the whitening filter under a Gaussianity assumption. Succinctly, the
pipeline is a multistage procedure as follows: (i) detecting waveforms and extracting features, (ii)
screening outliers and collided waveforms, (iii) clustering, and (iv) inferring missed and collided
spikes. Pseudocode for the flow of the pipeline can be found in Algorithm 1. A brief overview is
below, followed by additional details.
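As a concrete illustration of the spatial whitening step described above, here is a minimal sketch: potential spikes are masked out by amplitude thresholding and a whitening filter is estimated from the remaining (approximately Gaussian) noise. The function name, the 4-MAD threshold, and the symmetric (ZCA-style) form of the filter are illustrative assumptions, not the pipeline's exact choices.

```python
import numpy as np

def spatial_whitening_filter(V, threshold=4.0):
    """Estimate a spatial whitening filter from band-passed voltages V (T x C).

    Samples exceeding `threshold` robust standard deviations on any channel
    are treated as potential spikes and excluded, so the covariance reflects
    the background noise only (the Gaussianity assumption in the text).
    Returns a symmetric filter W; apply it as V @ W.
    """
    # Robust per-channel noise scale via the median absolute deviation.
    mad = np.median(np.abs(V - np.median(V, axis=0)), axis=0) / 0.6745
    quiet = np.all(np.abs(V) < threshold * mad, axis=1)  # noise-only samples
    cov = np.cov(V[quiet].T)
    # ZCA whitening: W = E D^{-1/2} E^T, so cov(V @ W) ~ identity.
    d, E = np.linalg.eigh(cov)
    return E @ np.diag(1.0 / np.sqrt(d + 1e-10)) @ E.T
```

After whitening, the noise covariance across channels is approximately the identity, which is what justifies the Gaussian background-noise model used downstream.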
Our overall strategy can be considered a hybrid of a matching pursuit approach (similar to that
employed by [36]) and a classical clustering approach, generalized and adapted to the large dense
MEA setting. Our guiding philosophy is that it is essential to properly handle "collisions" between
simultaneous spikes [37, 12], since collisions distort the extracted feature space and hinder clustering.
A typical approach to this issue utilizes matching pursuit methods (or other sparse deconvolution
strategies), but these methods are relatively computationally expensive compared to clustering
primitives. This led us to a "triage-then-cluster-then-pursuit" approach: we "triage" collided or overly
noisy waveforms, putting them aside during the feature extraction and clustering stages, and later
recover these spikes during a final "pursuit" or deconvolution stage. The triaging begins during
the detection stage in Section 2.2, where we develop a neural network based detection method that
significantly improves sensitivity and selectivity. For example, on a simulated 30 electrode dataset
with low SNR, the new approach reduces false positives and collisions by 90% for the same rate of
true positives. Furthermore, the neural network is significantly better at the alignment of signals,
which improves the feature space and signal-to-noise power. The detected waveforms then are
projected to a feature space and restricted to a local spatial subset of electrodes as in [24] in Section
2.3. Next, in Section 2.4 an outlier detection method further ?triages? the detected waveforms and
reduces false positives and collisions by an additional 70% while removing only a small percentage
of real detections. All of these steps are achievable in nearly linear time. Simulations demonstrate
that this large reduction in false positives and collisions dramatically improves accuracy and stability.
Following the above steps, the remaining waveforms are partitioned into distinct neurons via clustering. Our clustering framework is based on the Dirichlet Process Gaussian Mixture Model (DP-GMM)
approach [48, 9], and we modify existing inference techniques to improve scalability and performance.
Succinctly, each neuron is represented by a distinct Gaussian distribution in the feature space. Directly
calculating the clustering on all of the channels and all of the waveforms is computationally infeasible.
Instead, the inference first utilizes the spatial locality via masking [24] from Section 2.3. Second, the
inference procedure operates on a coreset of representative points [13] and the resulting approximate
sufficient statistics are used in lieu of the full dataset (Section 2.5). Remarkably, we can reduce a
dataset with 100k points to a coreset of ' 10k points with trivial accuracy loss. Next, split and merge
methods are adapted to efficiently explore the clustering space [21, 24] in Section 2.6. Using these
modern scalable inference techniques is crucial for robustness because they empirically find much
more sensible and accurate optima and permit Bayesian characterization of posterior uncertainty.
For very large arrays, instead of operating on all channels simultaneously, each distinct spatial
neighborhood is processed by a separate clustering algorithm that may be run in parallel. This
parallelization is crucial for processing very large arrays because it allows greater utilization of
computer resources (or multiple machines). It also helps improve the efficacy of the split-merge
inference by limiting the search space. This divide-and-conquer approach and the post-processing
to stitch the results together is discussed in Section 2.7. The computational time required for the
clustering algorithm scales nearly linearly with the number of electrodes C and the experiment time.
After the clustering stage is completed, the means of clusters are used as templates and collided and
missed spikes are inferred using the deconvolution (or "pursuit" [37]) algorithm from Kilosort [36],
which recovers the final set of binary spike trains. We limit this computationally expensive approach
only to sections of the data that are not well handled by the rest of the pipeline, and use the faster
clustering approach to fill in the well-explained (i.e. easy) sections.
We note finally that when memory is limited compared to the size of the dataset, the preprocessing,
spike detection, and final deconvolution steps are performed on temporal minibatches of data; the
other stages operate on significantly reduced data representations, so memory management issues
typically do not arise here. See Section B.4 for further details on memory management.
2.2 Detection
The detection stage extracts temporal and spatial windows around action potentials from the noisy
raw electrophysiological signal V to use as inputs in the following clustering stage. The number
of clean waveform detections (true positives) should be maximized for a given level of detected
collision and noise events (false positives). Because collisions corrupt feature spaces [37, 12] and
will simply be recovered during pursuit stage, they are not included as true positives at this stage. In
contrast to the plethora of prior work on hand-designed detection rules (detailed in Section C.1), we
use a data-driven approach with neural networks to dramatically improve both detection efficacy and
alignment quality. The neural network is trained to return only clean waveforms on real data, not
collisions, so it de facto performs a preliminary triage prior to the main triage stage in Section 2.4.
The crux of the data-driven approach is the availability of prior training data. We are targeting the
typical case that an experimental lab performs repeated experiments using the same recording setup
from day to day. In this setting hand-curated or otherwise validated prior sorts are saved, resulting
in abundant training data for a given experimental preparation. In the supplemental material, we
discuss the construction of a training set (including data augmentation approaches) in Section C.2, the
architecture and training of the network in Section C.3, the detection using the network in Section C.4,
empirical performance in Section C.5, and scalability in Section C.5. This strategy is effective when
this training data exists; however, many research groups are currently using single electrodes and do
not have dense MEA training data. Thus it is worth emphasizing that here we train the detector only
on a single electrode. We have also experimented with training and evaluating on multiple electrodes
with good success; however, these results are more specialized and are not shown here.
A key result is that our neural network dramatically improves both the temporal and spatial alignment
of detected waveforms. This improved alignment improves the fidelity of the feature space and the
signal-to-noise power, and the result of the improved feature space can clearly be seen by comparing
the detected waveform features from one standard detection approach (SpikeDetekt [24]) in Figure
1 (left) to the detected waveform features from our neural network in Figure 1 (middle). Note that the
output of the neural net detection is remarkably more Gaussian compared to SpikeDetekt.
2.3 Feature Extraction and Mask Creation
Following detection we have a collection of N events defined as X_n ∈ R^{R×C} for n = 1, . . . , N,
each with an associated detection time tn . Recall that C is the total number of electrodes, and R is the
number of time samples, in our case chosen to correspond to 1.5ms. Next features are extracted by
using uncentered Principal Components Analysis (PCA) on each channel separately with P features
per channel. Each waveform X_n is transformed to the feature space Y_n. To handle duplicate spikes,
Y_n is kept only if c_n = argmax_{c ∈ N_{c_n}} ||y_n^c||, where N_{c_n} contains all electrodes in the local
neighborhood of electrode c_n. To address the increasing dimensionality, spikes are localized by using
the sparse masking vector {m_n} ∈ [0, 1]^C method of [24], where the mask should be set to 1 only
where the signal exists. The sparse vector reduces the dimensionality and facilitates sparse updates to
improve computational efficiency. We give additional mathematical details in Supplemental Section
D. We have also experimented with an autoencoder framework to standardize the feature extraction
across channels and facilitate online inference. This approach performed similarly to PCA and is not
shown here, but will be addressed in depth in future work.
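To make this step concrete, here is a minimal sketch of per-channel uncentered PCA followed by an energy-based soft mask. The function names, the SVD route to the components, and the lo/hi ramp thresholds are illustrative assumptions; the actual masking rule is the one of [24].

```python
import numpy as np

def channel_pca_features(waveforms, P=3):
    """Project each channel's snippets onto that channel's top-P (uncentered)
    principal components. waveforms: (N, R, C) -> features (N, P, C)."""
    N, R, C = waveforms.shape
    feats = np.empty((N, P, C))
    bases = np.empty((C, R, P))
    for c in range(C):
        X = waveforms[:, :, c]                          # (N, R) snippets
        _, _, Vt = np.linalg.svd(X, full_matrices=False)  # uncentered PCA
        bases[c] = Vt[:P].T
        feats[:, :, c] = X @ bases[c]
    return feats, bases

def energy_mask(feats, noise_std, lo=2.0, hi=4.0):
    """Soft per-channel mask in [0, 1]: 0 below lo*noise_std of feature
    energy, 1 above hi*noise_std, with a linear ramp in between
    (an illustrative stand-in for the masking rule of [24])."""
    energy = np.linalg.norm(feats, axis=1)              # (N, C)
    m = (energy - lo * noise_std) / ((hi - lo) * noise_std)
    return np.clip(m, 0.0, 1.0)
```

The mask zeroes out channels where a spike carries no signal, which is what makes the later clustering updates sparse and cheap.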
2.4 Collision Screening and Outlier Triaging
Many collisions and outliers remain even after our improved detection algorithm. Because these
events destabilize the clustering algorithms, the pipeline benefits from a "triage" stage to further
screen collisions and noise events. Note that triaging out a small fraction of true positives is a minor
concern at this stage because they will be recovered in the final deconvolution step.
We use a two-fold approach to perform this triaging. First, obvious collisions with nearly overlapping
spike times and spatial locations are removed. Second, k-Nearest Neighbors (k-NN) is used to
detect outliers in the masked feature space based on [27]. To develop a computationally efficient and
effective approach, waveforms are grouped based on their primary (highest-energy) channel, and then
k-NN is run for each channel. Empirically, these approximations do not suffer in efficacy compared
to using the full spatial area. When the dimensionality of P , the number of features per channel, is
low, a kd-tree can find neighbors in O(N log N ) average time. We demonstrate that this method is
effective for triaging false positives and collisions in Figure 1 (middle).
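A minimal sketch of the k-NN triage step, using a kd-tree and the distance to the k-th nearest neighbour as a local-density outlier score; the function name, k=10, and the 5% triage fraction are illustrative assumptions rather than the pipeline's tuned values.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_triage(features, k=10, triage_frac=0.05):
    """Flag the `triage_frac` fraction of spikes whose k-th nearest-neighbour
    distance is largest (low local density suggests a collision or outlier).
    `features` is (N, d), already restricted to one primary channel."""
    tree = cKDTree(features)
    # Query k+1 neighbours because each point's nearest neighbour is itself.
    dist, _ = tree.query(features, k=k + 1)
    score = dist[:, -1]                   # distance to the k-th real neighbour
    cutoff = np.quantile(score, 1.0 - triage_frac)
    return score <= cutoff                # boolean keep-mask
```

With low-dimensional features the kd-tree query runs in O(N log N) average time, matching the scaling claimed in the text.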
2.5 Coreset Construction
"Big data" improves density estimates for clustering, but the cost per iteration naively scales with the
amount of data. However, often data has some redundant features, and we can take advantage of
these redundancies to create more efficient summarizations of the data. Then running the clustering
algorithm on the summarized data should scale only with the number of summary points. By choosing
representative points (or a "coreset") carefully we can potentially describe huge datasets accurately
with a relatively small number of points [19, 13, 2].
There is a sizable literature on the construction of coresets for clustering problems; however, the
number of required representative points to satisfy the theoretical guarantees is infeasible in this
problem domain. Instead, we propose a simple approach to build coresets that empirically outperforms
existing approaches in our experiments by forcing adequate coverage of the complete dataset. We
demonstrate in Supplemental Figure S6 that this approach can cover clusters completely missed by
existing approaches, and show the chosen representative points on data in Figure 1 (right). This
algorithm is based on recursively performing k-means; we provide pseudocode and additional details
[Figure 1 image: three scatter plots in PC 1 vs. PC 2 feature space; panels show SpikeDetekt detections, neural-network detections split into NN-kept and NN-triaged points, and the coreset representatives.]
Figure 1: Illustration of Neural Network Detection, Triage, and Coreset from a primate retinal
ganglion cell recording. The first column shows spike waveforms from SpikeDetekt in their PCA
space. Due to poor alignment, clusters have a non-Gaussian shape with many outliers. The second
column shows spike waveforms from our proposed neural network detection in the PCA space. After
triaging outliers, the clusters have cleaner Gaussian shapes in the recomputed feature space. The last
column illustrates the coreset. The size of each coreset diamond represents its weight. For visibility,
only 10% of data are plotted.
in Supplemental Section E. The worst case time complexity is nearly linear with respect to the
number of representative points, the number of detected spikes, and the number of channels. The
algorithm ends by returning G representative points, their sufficient statistics, and masks.
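The recursive k-means construction can be sketched as follows. The split arity k, the radius criterion, and the minimum group size are illustrative parameters of ours, and a real coreset point would also carry its mask and second-moment statistics rather than just a count.

```python
import numpy as np

def _kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm, enough for the recursive splitting below."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

def build_coreset(X, k=3, radius=1.0, min_size=20, seed=0):
    """Recursive k-means coreset: split a group with k-means and recurse on
    any child whose max point-to-centre distance exceeds `radius`; otherwise
    emit the child's centre with its member count as summary statistics."""
    out = []
    def recurse(pts, depth=0):
        if len(pts) <= max(k, min_size) or depth > 10:
            out.append((pts.mean(0), len(pts)))
            return
        _, labels = _kmeans(pts, k, seed=seed)
        for j in range(k):
            child = pts[labels == j]
            if len(child) == 0:
                continue
            r = np.linalg.norm(child - child.mean(0), axis=1).max()
            if r > radius and len(child) > min_size:
                recurse(child, depth + 1)
            else:
                out.append((child.mean(0), len(child)))
    recurse(X)
    reps = np.array([c for c, _ in out])
    weights = np.array([w for _, w in out])
    return reps, weights
```

Because the leaves partition the data, the weighted representative points preserve first-moment statistics of the full dataset exactly, while the downstream clustering cost now scales with the number of representatives rather than with N.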
2.6 Efficient Inference for the Dirichlet Process Gaussian Mixture Model
For the clustering step we use a Dirichlet Process Gaussian Mixture Model (DP-GMM) formulation,
which has been previously used in spike sorting [48, 9], to adaptively choose the number of mixture
components (visible neurons). In contrast to these prior approaches, here we adopt a Variational
Bayesian split-merge approach to explore the clustering space [21] and to find a more robust and
higher-likelihood optimum. We address the high computational cost of this approach with several key
innovations. First, following [24], we fit a mixture model on the virtual masked data to exploit the
localized nature of the data. Second, following [9, 24], the covariance structure is approximated as a
block-diagonal to reduce the parameter space and computation. Finally, we adapted the methodology
to work with the representative points (coreset) rather than the raw data, resulting in a highly scalable
algorithm. A more complete description of this stage can be found in Supplemental Section F, with
pseudocode in Supplemental Algorithm S2.
In terms of computational costs, the dominant cost per iteration in the DPMM algorithm is the
calculation of data-to-cluster assignments, which in our framework will scale as O(G m̄ P^2 K), where
m̄ is the average number of channels maintained in the mask for each of the representative points,
G is the number of representative points, and P is the number of features per channel. This is in
stark contrast to a scaling of O(N C^2 P^2 K + P^3) without our above modifications. Both K and G
are expected to scale linearly with the number of electrodes and sublinearly with the length of the
recording. Without further modification, the time complexity in the above clustering algorithm would
depend on the square of the number of electrodes for each iteration; fortunately, this can be reduced
to a linear dependency based on a divide-and-conquer approach discussed below in Section 2.7.
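For intuition, the DP-GMM clustering step can be approximated with an off-the-shelf variational Dirichlet-process mixture. Note that this is only a stand-in sketch: it implements neither the masking, the coreset weighting, nor the split-merge moves described above, and the function name and truncation level are our own choices.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_gmm_cluster(features, max_components=10, seed=0):
    """Cluster spike features (N, d) with a truncated DP Gaussian mixture;
    the DP prior shrinks unused components so K is inferred from the data."""
    model = BayesianGaussianMixture(
        n_components=max_components,     # truncation level, not the final K
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",          # in the spirit of the block-diagonal
        max_iter=500,                    # approximation used in the text
        random_state=seed,
    )
    labels = model.fit_predict(features)
    return labels, len(np.unique(labels))
```

On well-separated clusters this recovers one component per neuron without fixing K in advance, which is the property the pipeline relies on when the number of visible neurons is unknown.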
[Figure 2 image: four panels. Left column "High Collision ViSAPy", right column "Low SNR ViSAPy"; top row stability panels (y-axis "% of x(%) Stable Clusters" vs. x-axis "Stability % Threshold", 100 down to 50), bottom row accuracy panels (y-axis "# of x(%) Accurate Clusters" vs. x-axis "True Positive % Threshold", 100 down to 50). Legend: YASS, KiloSort, Mountain, SpyKING.]
Figure 2: Simulation results on 30-channel ViSAPy datasets. Left panels show the result on
ViSAPy with high collision rate and Right panels show the result on ViSAPy with low SNR setting.
(Top) stability metric (following [5]) and percentage of total discovered clusters above a certain
stability measure. The noticeable gap between stability of YASS and the other methods results
from a combination of high number of stable clusters and lower number of total clusters. (Bottom)
These results show the number of clusters (out of a ground truth of 16 units) above a varying
quality threshold for each pipeline. For each level of accuracy, the number of clusters that pass that
threshold is calculated to demonstrate the relative quality of the competing algorithms on this dataset.
Empirically, our pipeline (YASS) outperforms other methods.
2.7 Divide and Conquer and Template Merging
Neural action potentials have a finite spatial extent [6]. Therefore, the spikes can be divided into
distinct groups based on the geometry of the MEA and the local position of each neuron, and each
group is then processed independently. Thus, each group can be processed in parallel, allowing
for high data throughput. This is crucial for exploiting parallel computer resources and limited
memory structures. Second, the split-and-merge approach in a DP-GMM is greatly hindered when
the numbers of clusters is very high [21]. The proposed divide and conquer approach addresses this
problem by greatly reducing the number of clusters within each subproblem, allowing the split and
merge algorithm to be targeted and effective.
To divide the data based on the spatial location of each spike, the primary channel cn is determined
for every point in the coreset based on the channel with maximum energy, and clustering is applied
on each channel. Because neurons may now end up on multiple channels, it is necessary to merge
templates from nearby channels as a post-clustering step. When the clustering is completed, the
mean of each cluster is taken as a template. Because waveforms are limited to their primary channel,
some neurons may have "overclustered" and have a distinct mixture component on distinct channels.
Also, overclustering can occur from model mismatch (non-Gaussianity). Therefore, it is necessary to
merge waveforms. Template merging is performed based on two criteria, the angle and the amplitude
of templates, using the best alignment on all temporal shifts between two templates. The pseudocode
to perform this merging is shown in Supplemental Algorithm S3. Additional details can be found in
Supplemental Section G.
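The two merging criteria (angle and amplitude, over the best temporal alignment) can be sketched as follows; the thresholds and the circular-shift alignment are illustrative assumptions, not the values in Supplemental Algorithm S3.

```python
import numpy as np

def should_merge(t1, t2, max_shift=5, angle_thresh=0.95, amp_ratio=0.7):
    """Decide whether two templates (R x C arrays, time by channel) belong to
    the same neuron: cosine similarity over the best temporal alignment must
    exceed `angle_thresh`, and peak amplitudes must be within `amp_ratio` of
    each other."""
    a1, a2 = np.abs(t1).max(), np.abs(t2).max()
    if min(a1, a2) / max(a1, a2) < amp_ratio:
        return False                      # amplitude criterion fails
    best = -1.0
    for s in range(-max_shift, max_shift + 1):
        t2s = np.roll(t2, s, axis=0)      # try every temporal shift
        cos = (t1 * t2s).sum() / (np.linalg.norm(t1) * np.linalg.norm(t2s))
        best = max(best, cos)
    return best >= angle_thresh           # angle criterion
```

Running this pairwise over templates on neighbouring channels and merging connected pairs undoes the overclustering introduced by the per-channel divide-and-conquer step.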
[Figure 3 image: left panel "Stability" (y-axis "% of x(%) Stable Clusters" vs. x-axis "Stability % Threshold", 100 down to 50), right panel "Accuracy" (y-axis "# of x(%) Accurate Clusters" vs. x-axis "True Positive % Threshold", 100 down to 50). Legend: YASS, Kilosort, Mountain, SpyKing.]
Figure 3: Performance comparison of spike sorting pipelines on primate retina data. (Left)
The same type of plot as in the top panels of Figure 2. (Right) The same type of plot as in the bottom
panels of Figure 2 compared to the ?gold standard? sort. YASS demonstrates both improved stability
and also per-cluster accuracy.
2.8 Recovering Triaged Waveforms and Collisions
After the previous steps, we apply matching pursuit [36] to recover triaged waveforms and collisions.
We detail the available choices for this stage in Supplemental Section I.
3 Performance Comparison
We evaluate performance to compare several algorithms (detailed in Section 3.1) to our proposed
methodology on both synthetic (Section 3.2) and real (Section 3.3) dense MEA recordings. For
each synthetic dataset we evaluate the ability to capture ground truth in addition to the per-cluster
stability metrics. For the ground truth, inferred clusters are matched with ground truth clusters via the
Hungarian algorithm, and then the per-cluster accuracy is calculated as the number of assignments
shared between the inferred cluster and the ground truth cluster over the total number of waveforms
in the inferred cluster. For the per-cluster stability metric, we use the method from Section 3.3 of [5]
with the rate scaling parameter of the Poisson processes set to 0.25. This method evaluates how robust
individual clusters are to perturbations of the dataset. In addition, we provide runtime information to
empirically evaluate the computational scaling of each approach. The CPU runtime was calculated
on a single core of a six-core i7 machine with 32GB of RAM. GPU runtime is given from a Nvidia
Titan X within the same machine.
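The per-cluster accuracy computation described above can be sketched directly with the Hungarian algorithm from SciPy; the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def per_cluster_accuracy(inferred, truth):
    """Match inferred clusters to ground-truth clusters with the Hungarian
    algorithm on the overlap matrix, then score each matched inferred cluster
    as (shared spikes) / (inferred cluster size), as described in the text."""
    ks, gs = np.unique(inferred), np.unique(truth)
    overlap = np.array([[np.sum((inferred == k) & (truth == g)) for g in gs]
                        for k in ks])
    row, col = linear_sum_assignment(-overlap)   # negate to maximize overlap
    return {ks[r]: overlap[r, c] / np.sum(inferred == ks[r])
            for r, c in zip(row, col)}
```

The negation turns SciPy's cost minimization into overlap maximization, so each inferred cluster is paired with the ground-truth unit it shares the most spikes with.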
3.1 Competing Algorithms
We compare our proposed pipeline to three recently proposed approaches for dense MEA spike
sorting: KiloSort [36], Spyking Circus [51], and MountainSort [31]. Kilosort, Spyking Cricus,
and MountainSort were downloaded on January 30, 2017, May 26th, 2017, and June 7th, 2017,
respectively. We dub our algorithm Yet Another Spike Sorter (YASS). We discuss additional details
on the relationships between these approaches and our pipeline in Supplemental Section I. All results
are shown with no manual post-processing.
3.2 Synthetic Datasets
First, we used the biophysics-based spike activity generator ViSAPy [18] to generate multiple 30-channel datasets with different noise levels and collision rates. The detection network was trained
on the ground truth from a low signal-to-noise level recording. Then, the trained neural network is
applied to all signal-to-noise levels. The neural network dramatically outperforms existing detection
methodologies on these datasets. For a given level of true positives, the number of false positives
can be reduced by an order of magnitude. The properties of the learned network are shown in
Supplemental Figures S4 and S5.
Performance is evaluated on the known ground truth. For each level of accuracy, the number of
clusters that pass that threshold is calculated to demonstrate the relative quality of the competing
Detection (GPU)  Data Ext.  Triage  Coreset  Clustering  Template Ext.  Total
1m7s             42s        11s     34s      3m12s       54s            6m40s
Table 1: Running times of the main processes on 512-channel primate retinal recording of
30 minutes duration. Results shown using a single CPU core, except for the detection step (2.2),
which was run on GPU. We found that full accuracy was achieved after processing just one-fifth
of this dataset, leading to significant speed gains. Data Extraction refers to waveform extraction
and performing PCA (2.3). Triage, Coreset, and Clustering refer to Sections 2.4, 2.5, and 2.6, respectively.
Template Extraction describes revisiting the recording to estimate templates and merging them (2.7).
Each step scales approximately linearly (Section B.3).
algorithms on this dataset. Empirically, our pipeline (YASS) outperforms other methods. This is
especially true in low SNR settings, as shown in Figure 2. The per-cluster stability metric is also
shown in Figure 2. The stability result demonstrates that YASS has significantly fewer low-quality
clusters than competing methods.
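The per-accuracy-level counting used in these comparisons amounts to a one-liner; as a hypothetical sketch:

```python
def clusters_above_threshold(accuracies, levels):
    # For each accuracy level, count clusters whose per-cluster accuracy
    # meets or exceeds that level (the quantity compared per algorithm).
    return [sum(1 for a in accuracies if a >= level) for level in levels]
```

A method with uniformly high per-cluster accuracies keeps this count high even at strict thresholds; a method with many low-quality clusters drops off quickly.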
3.3 Real Datasets
To examine real data, we focused on 30 minutes of extracellular recordings of the peripheral primate
retina, obtained ex-vivo using a high-density 512-channel recording array [30]. The half-hour
recording was taken while the retina was stimulated with spatiotemporal white noise. A "gold
standard" sort was constructed for this dataset by extensive hand validation of automated techniques,
as detailed in Supplemental Section H. Nonstationarity effects (time-evolution of waveform shapes)
were found to be minimal in this recording (data not shown).
We evaluate the performance of YASS and competing algorithms using 4 distinct sets of 49 spatially
contiguous electrodes. Note that the gold standard sort here uses the information from the full
512-electrode array, while we examine the more difficult problem of sorting the 49-electrode data;
we have less information about the cells near the edges of this 49-electrode subset, allowing us to
quantify the performance of the algorithms over a range of effective SNR levels. By comparing the
inferred results to the gold standard, the cluster-specific true positives are determined in addition to
the stability metric. The results are shown in Figure 3 for one of the four sets of electrodes, and the
remaining three sets are shown in Supplemental Section B.1. As in the simulated data, compared
to KiloSort, which had the second-best overall performance on this dataset, YASS has dramatically
fewer low-stability clusters.
Finally, we evaluate the time required for each step in the YASS pipeline (Table 1). Importantly, we
found that YASS is highly robust to data limitations: as shown in Supplemental Figure S3 and Section
B.3, using only a fraction of the 30 minute dataset has only a minor impact on performance. We
exploit this to speed up the pipeline. Remarkably, running primarily on a single CPU core (only
the detect step utilizes a GPU here), YASS achieves a several-fold speedup in template and cluster
estimation compared to the next fastest competitor², KiloSort, which was run in full GPU mode and
spent about 30 minutes on this dataset. We plan to further parallelize and GPU-ize the remaining
steps in our pipeline next, and expect to achieve significant further speedups.
4 Conclusion
YASS has demonstrated state-of-the-art performance in accuracy, stability, and computational efficiency; we believe the tools presented here will have a major practical and scientific impact in
large-scale neuroscience. In our future work, we plan to continue iteratively updating our modular
pipeline to better handle template drift, refractory violations, and dense collisions.
Lastly, YASS is available online at https://github.com/paninski-lab/yass
² Spyking Circus took over a day to process this dataset. Assuming linear scaling based on smaller-scale experiments, MountainSort is expected to take approximately 10 hours.
Acknowledgements
This work was partially supported by NSF grants IIS-1546296 and IIS-1430239, and DARPA Contract
No. N66001-17-C-4002.
References
[1] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In ACM-SIAM
Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2007.
[2] O. Bachem, M. Lucic, and A. Krause. Coresets for nonparametric estimation-the case of
dp-means. In ICML, 2015.
[3] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii. Scalable k-means++.
Proceedings of the VLDB Endowment, 2012.
[4] I. N. Bankman, K. O. Johnson, and W. Schneider. Optimal detection, classification, and
superposition resolution in neural waveform recordings. IEEE Trans. Biomed. Eng., 1993.
[5] A. H. Barnett, J. F. Magland, and L. F. Greengard. Validation of neural spike sorting algorithms
without ground-truth information. J. Neuro. Methods, 2016.
[6] G. Buzsáki. Large-scale recording of neuronal ensembles. Nature neuroscience, 2004.
[7] T. Campbell, J. Straub, J. W. F. III, and J. P. How. Streaming, Distributed Variational Inference
for Bayesian Nonparametrics. In NIPS, 2015.
[8] D. Carlson, V. Rao, J. Vogelstein, and L. Carin. Real-Time Inference for a Gamma Process
Model of Neural Spiking. NIPS, 2013.
[9] D. E. Carlson, J. T. Vogelstein, Q. Wu, W. Lian, M. Zhou, C. R. Stoetzner, D. Kipke, D. Weber,
D. B. Dunson, and L. Carin. Multichannel electrophysiological spike sorting via joint dictionary
learning and mixture modeling. IEEE TBME, 2014.
[10] B. Chen, D. E. Carlson, and L. Carin. On the analysis of multi-channel neural spike data. In
NIPS, 2011.
[11] D. M. Dacey, B. B. Peterson, F. R. Robinson, and P. D. Gamlin. Fireworks in the primate retina:
in vitro photodynamics reveals diverse lgn-projecting ganglion cell types. Neuron, 2003.
[12] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. A unified framework and method for
automatic neural spike identification. J. Neuro. Methods, 2014.
[13] D. Feldman, M. Faulkner, and A. Krause. Scalable training of mixture models via coresets. In
NIPS, 2011.
[14] J. Fournier, C. M. Mueller, M. Shein-Idelson, M. Hemberger, and G. Laurent. Consensus-based
sorting of neuronal spike waveforms. PloS one, 2016.
[15] F. Franke, M. Natora, C. Boucsein, M. H. J. Munk, and K. Obermayer. An online spike detection
and spike classification algorithm capable of instantaneous resolution of overlapping spikes. J.
Comp. Neuro. 2010.
[16] S. Gibson, J. W. Judy, and D. Marković. Spike Sorting: The first step in decoding the brain.
IEEE Signal Processing Magazine, 2012.
[17] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT Press, 2016.
[18] E. Hagen, T. V. Ness, A. Khosrowshahi, C. Sørensen, M. Fyhn, T. Hafting, F. Franke, and G. T.
Einevoll. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for
evaluation of spike-sorting algorithms. J. Neuro. Methods, 2015.
[19] S. Har-Peled and S. Mazumdar. On coresets for k-means and k-median clustering. In ACM
Theory of Computing. ACM, 2004.
[20] G. Hilgen, M. Sorbaro, S. Pirmoradian, J.-O. Muthmann, I. Kepiro, S. Ullo, C. J. Ramirez,
A. Maccione, L. Berdondini, V. Murino, et al. Unsupervised spike sorting for large scale, high
density multielectrode arrays. Cell Reports, 2017.
[21] M. C. Hughes and E. Sudderth. Memoized Online Variational Inference for Dirichlet Process
Mixture Models. In NIPS, 2013.
[22] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. JASA, 2001.
[23] J. J. Jun, C. Mitelut, C. Lai, S. Gratiy, C. Anastassiou, and T. D. Harris. Real-time spike sorting
platform for high-density extracellular probes with ground-truth validation and drift correction.
bioRxiv, 2017.
[24] S. N. Kadir, D. F. M. Goodman, and K. D. Harris. High-dimensional cluster analysis with the
masked EM algorithm. Neural computation, 2014.
[25] K. H. Kim and S. J. Kim. Neural spike sorting under nearly 0-db signal-to-noise ratio using
nonlinear energy operator and artificial neural-network classifier. IEEE TBME, 2000.
[26] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[27] E. M. Knox and R. T. Ng. Algorithms for mining distance-based outliers in large datasets. In
VLDB. Citeseer, 1998.
[28] K. C. Knudson, J. Yates, A. Huk, and J. W. Pillow. Inferring sparse representations of continuous
signals with continuous orthogonal matching pursuit. In NIPS, 2014.
[29] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural
action potentials. Network: Computation in Neural Systems, 1998.
[30] A. Litke, N. Bezayiff, E. Chichilnisky, W. Cunningham, W. Dabrowski, A. Grillo, M. Grivich,
P. Grybos, P. Hottowy, S. Kachiguine, et al. What does the eye tell the brain?: Development of
a system for the large-scale recording of retinal output activity. IEEE Trans. Nuclear Science,
2004.
[31] J. F. Magland and A. H. Barnett. Unimodal clustering using isotonic regression: Iso-split. arXiv
preprint arXiv:1508.04841, 2015.
[32] S. Mukhopadhyay and G. C. Ray. A new interpretation of nonlinear energy operator and its
efficacy in spike detection. IEEE TBME, 1998.
[33] J.-O. Muthmann, H. Amin, E. Sernagor, A. Maccione, D. Panas, L. Berdondini, U. S. Bhalla, and
M. H. Hennig. Spike detection for large neural populations using high density multielectrode
arrays. Frontiers in neuroinformatics, 2015.
[34] R. M. Neal. Markov chain sampling methods for dirichlet process mixture models. Journal of
computational and graphical statistics, 2000.
[35] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2002.
[36] M. Pachitariu, N. A. Steinmetz, S. N. Kadir, M. Carandini, and K. D. Harris. Fast and accurate
spike sorting of high-channel count probes with kilosort. In NIPS, 2016.
[37] J. W. Pillow, J. Shlens, E. J. Chichilnisky, and E. P. Simoncelli. A model-based spike sorting
algorithm for removing correlation artifacts in multi-neuron recordings. PloS one, 2013.
[38] R. Q. Quiroga, Z. Nadasdy, and Y. Ben-Shaul. Unsupervised spike detection and sorting with
wavelets and superparamagnetic clustering. Neural computation, 2004.
[39] H. G. Rey, C. Pedreira, and R. Q. Quiroga. Past, present and future of spike sorting techniques.
Brain research bulletin, 2015.
[40] A. Rodriguez and A. Laio. Clustering by fast search and find of density peaks. Science, 2014.
[41] E. M. Schmidt. Computer separation of multi-unit neuroelectric data: a review. J. Neuro.
Methods, 1984.
[42] R. Tarjan. Depth-first search and linear graph algorithms. SIAM journal on computing, 1972.
[43] P. T. Thorbergsson, M. Garwicz, J. Schouenborg, and A. J. Johansson. Statistical modelling
of spike libraries for simulation of extracellular recordings in the cerebellum. In IEEE EMBC.
IEEE, 2010.
[44] V. Ventura. Automatic Spike Sorting Using Tuning Information. Neural Computation, 2009.
[45] R. J. Vogelstein, K. Murari, P. H. Thakur, C. Diehl, S. Chakrabartty, and G. Cauwenberghs.
Spike sorting with support vector machines. In IEEE EMBS, volume 1. IEEE, 2004.
[46] L. Wang and D. B. Dunson. Fast bayesian inference in dirichlet process mixture models. J.
Comp. and Graphical Stat., 2011.
[47] A. B. Wiltschko, G. J. Gage, and J. D. Berke. Wavelet filtering before spike detection preserves
waveform shape and enhances single-unit discrimination. J. Neuro. Methods, 2008.
[48] F. Wood and M. J. Black. A nonparametric bayesian alternative to spike sorting. J. Neuro.
Methods, 2008.
[49] F. Wood, M. J. Black, C. Vargas-Irwin, M. Fellows, and J. P. Donoghue. On the variability of
manual spike sorting. IEEE TBME, 2004.
[50] X. Yang and S. A. Shamma. A totally automated system for the detection and classification of
neural spikes. IEEE Trans. Biomed. Eng., 1988.
[51] P. Yger, G. L. Spampinato, E. Esposito, B. Lefebvre, S. Deny, C. Gardella, M. Stimberg, F. Jetter,
G. Zeck, S. Picaud, et al. Fast and accurate spike sorting in vitro and in vivo for up to thousands
of electrodes. bioRxiv, 2016.
[52] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In NIPS, volume 17, 2004.
A Practice Strategy for Robot Learning
Control
Terence D. Sanger
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology, room E25-534
Cambridge, MA 02139
[email protected]
Abstract
"Trajectory Extension Learning" is a new technique for Learning
Control in Robots which assumes that there exists some parameter
of the desired trajectory that can be smoothly varied from a region
of easy solvability of the dynamics to a region of desired behavior
which may have more difficult dynamics. By gradually varying the
parameter, practice movements remain near the desired path while
a Neural Network learns to approximate the inverse dynamics. For
example, the average speed of motion might be varied, and the inverse dynamics can be "bootstrapped" from slow movements with
simpler dynamics to fast movements. This provides an example of
the more general concept of a "Practice Strategy" in which a sequence of intermediate tasks is used to simplify learning a complex
task. I show an example of the application of this idea to a real
2-joint direct drive robot arm.
1 INTRODUCTION
The most general definition of Adaptive Control is one which includes any controller
whose behavior changes in response to the controlled system's behavior. In practice,
this definition is usually restricted to modifying a small number of controller parameters in order to maintain system stability or global asymptotic stability of the
errors during execution of a single trajectory (Sastry and Bodson 1989, for review).
Learning Control represents a second level of operation, since it uses Adaptive Con335
336
Sanger
trol to modify parameters during repeated performance trials of a desired trajectory
so that future trials result in greater accuracy (Arimoto et al. 1984). In this paper
I present a third level called a "Practice Strategy", in which Learning Control is
applied to a sequence of intermediate trajectories leading ultimately to the true
desired trajectory. I claim that this can significantly increase learning speed and
make learning possible for systems which would otherwise become unstable.
1.1 LEARNING CONTROL
During repeated practice of a single desired trajectory, the actual trajectory followed
by the robot may be significantly different. Many Learning Control algorithms
modify the commands stored in a sequence memory to minimize this difference
(Atkeson 1989, for review). However, the performance errors are usually measured
in a sensory coordinate system, while command corrections must be made in the
motor coordinate system. If the relationship between these two coordinate systems is not known, then command corrections might be in the wrong direction and
inadvertently worsen performance. However, if the practice trajectory is close to
the desired trajectory, then the errors will be small and the relationship between
command and sensory errors can be approximated by the system Jacobian.
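A minimal sketch of this Jacobian-based correction for a scalar toy system follows; the plant, gain, and Jacobian estimate below are invented for illustration and are not taken from the paper.

```python
def correct_commands(u, y_desired, y_actual, inv_jacobian, gain=0.5):
    """One learning-control pass: map sensory errors back into command
    corrections through an estimate of the inverse system Jacobian."""
    return [ui + gain * inv_jacobian * (yd - ya)
            for ui, yd, ya in zip(u, y_desired, y_actual)]

# Toy scalar plant y = 2u, so the true inverse Jacobian is 0.5.
def plant(u):
    return [2.0 * ui for ui in u]

u = [0.0, 0.0]
y_des = [1.0, 2.0]
for _ in range(20):
    u = correct_commands(u, y_des, plant(u), inv_jacobian=0.5)
```

With the correct sign of the Jacobian estimate, each trial halves the command error; with the wrong sign, the same update would drive the error up, which is exactly the failure mode described above.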
An alternative to a stored command sequence is to use a Neural Network to learn an
approximation to the inverse dynamics in the region of interest (Sanner and Slotine
1992, Yabuta and Yamada 1991, Atkeson 1989). In this case, the commands and
results from the actual movement are used as training data for the network, and
smoothness properties are assumed such that the error on the desired trajectory
will decrease. However, a significant problem with this method is that if the actual
practice trajectory is far from the desired trajectory, then its inverse dynamics
information will be of little use in training the inverse dynamics for the desired
trajectory. In fact, the network may achieve perfect approximation on the actual
trajectory while still making significant errors on the desired trajectory. In this
case, learning will stop (since the training error is zero) leading to the phenomenon
of "learning lock-up" (An et al. 1988). So whether Learning Control uses a sequence
memory or a Neural Network, learning may proceed poorly if large errors are made
during the initial practice movements.
1.2 PRACTICE STRATEGIES
I define a "practice strategy" as a sequence of trajectories such that the first element
in the sequence is any previously learned trajectory, and the last element in the
sequence is the ultimate desired trajectory. A well designed practice strategy will
result in a sequence for which learning control of the trajectory for any particular step
is simplified if prior steps have already been learned. This will occur if learning of
prior trajectories reduces the initial performance error for subsequent trajectories,
so that a network will be less likely to experience learning lock-up.
One example of a practice strategy is a three-step sequence in which the intermediate step is a set of independently executable subtasks which partition the desired
trajectory into discrete pieces. Another example is a multi-step sequence in which
intermediate steps are a set of trajectories which are somehow related to the desired trajectory. In this paper I present a multi-step sequence which gradually
[Figure 1: Training signals for network learning.]
transforms some known trajectory into the desired trajectory by varying a single
parameter. This method has the advantage of not requiring detailed knowledge of
the task structure in order to break it up into meaningful subtasks, and conditions
for convergence can be stated explicitly. It has a close relationship to Continuation
Methods for solving differential equations, and can be considered to be a particular
application of the Banach Extension Theorem.
2 METHODS
As in (Sanger 1992), we need to specify 4 aspects of the use of a neural network
within a control system:
1. the networks' function in the control system,
2. the network learning algorithm which modifies the connection weights,
3. the training signals used for network learning, and
4. the practice strategy used to generate sample movements.
The network's function is to learn the inverse dynamics of an equilibrium-point controlled plant (Shadmehr 1990). The LMS-tree learning algorithm trains the network
(Sanger 1991b, Sanger 1991a). The training signals are determined from the actual practice data using either "Actual Trajectory Training" or "Desired Trajectory
Training", as defined below. And the practice strategy is "Trajectory Extension
Learning", in which a parameter of the movement is gradually modified during
training.
2.1 TRAINING SIGNALS
Figure 1 shows the general structure of the network and training signals. A desired trajectory y is fed into the network N to yield an estimated command û. This command is then applied to the plant P_α, where the subscript indicates that the plant is parameterized by the variable α. Although the true command u which achieves y is unknown, we do know that the estimated command û produces ŷ, so these signals are used for training by comparing the network response to ŷ, given by ũ = Nŷ, to the known value û and subtracting these to yield the training error δ_û.

Normally, network training would use this error signal to modify the network output for inputs near ŷ, and I refer to this as "Actual Trajectory Training". However, if ŷ is far from y then no change in response may occur at y, and this may lead even more quickly to learning lock-up. Therefore an alternative is to use the error δ_û to train the network output for inputs near y. I refer to this as "Desired Trajectory Training", and in the figure it is represented by the dotted arrow.
The following discussion will summarize the convergence conditions and theorems
presented in (Sanger 1992).
Define

    Ru ≜ (1 − NP(x))u = u − û

to be an operator which maps commands into command errors for states x on the desired trajectory. Similarly, let

    R̂u ≜ (1 − NP(x̂))u = u − ũ

map commands into command errors for states x̂ on the actual trajectory.
Convergence depends upon the following assumptions:
A1: The plant P is smooth and invertible with respect to both the state x and the input u, with Lipschitz constants k_x and k_u, and it has stable zero-dynamics.
A2: The network N is smooth with Lipschitz constant k_N.
A3: Network learning reduces the error in response to a pair (y, δ_y).
A4: The change in network output in response to training is smooth with Lipschitz constant k_L.
A5: There exists a smoothly controllable parameter α such that an inverse dynamics solution is available at α = α₀, and the desired performance occurs when α = α_d.
A6: The change in command required to produce a desired output after any change in α is bounded by the change in α multiplied by a constant k_α.
A7: The change in plant response for any fixed input is bounded by the change in α multiplied by a constant k_P.
Under assumptions A1-A3 we can prove convergence of Desired Trajectory Training:
Theorem 1:
If there exists a kR such that
‖Ru − Rū‖ < kR ‖u − ū‖,
then convergence holds for any learning rate 0 < γ ≤ 1. If kR < 1 and γ ≤ 1, then the
network output û approaches the correct command u.
A Practice Strategy for Robot Learning Control
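The role of the contraction constant can be illustrated numerically (a toy sketch, not the theorem's proof):

```python
# If each training pass shrinks the command error by a factor k < 1
# (standing in for the contraction constant k_R), the command estimate
# converges to the true command.
u_true, u_hat, k = 3.0, 0.0, 0.8
for _ in range(100):
    u_hat = u_true + k * (u_hat - u_true)   # error shrinks by k per pass
print(abs(u_hat - u_true))                   # essentially zero
```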
Under assumptions A1-A4, we can prove convergence of Actual Trajectory Training:
Theorem 2:
If there exists a kR̂ such that
‖R̂u − R̂ũ‖ < kR̂ ‖u − ũ‖,
then convergence holds for any learning rate 0 < γ ≤ 1.
2.2 TRAJECTORY EXTENSION LEARNING
Let a be some modifiable parameter of the plant such that for a = a0 there exists
a simple inverse dynamics solution, and we seek a solution when a = ad. For example, if the plant uses Equilibrium Point Control (Shadmehr 1990), then at low
speeds the inverse dynamics behave like a perfect servo controller yielding desired
trajectories without the need to solve the dynamics. We can continue to train a
learning controller as the average speed of movement (a) is gradually increased.
The inverse dynamics learned at one speed provide an approximation to the inverse
dynamics for a slightly faster speed, and thus the performance errors remain small
during practice. This leads to significantly faster learning rates and greater likelihood that the conditions for convergence at any given speed will be satisfied. Note
that unlike traditional learning schemes, the error does not decrease monotonically
with practice, but instead maintains a steady magnitude as the speed increases,
until the network is no longer able to approximate the inverse dynamics.
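The practice schedule described above can be sketched on a toy linear system (the plant gain 1 + a, the linear network, and the stage and iteration counts are illustrative assumptions of this sketch):

```python
# Trajectory Extension Learning on a toy plant P_a(u) = (1 + a)*u with
# linear network N(y) = w*y. The speed parameter a is raised gradually;
# the inverse learned at each stage seeds the next, so the error stays
# small throughout practice instead of decreasing monotonically.
w, y, lr = 1.0, 1.0, 0.1          # w = 1 is the exact inverse at a = 0
for a in [0.1 * i for i in range(1, 11)]:   # a: 0.1 -> 1.0
    for _ in range(300):                    # practice at this "speed"
        u_hat = w * y
        y_hat = (1.0 + a) * u_hat
        delta_u = w * y_hat - u_hat         # training error
        w -= lr * delta_u * y               # desired-trajectory update
print(w)   # approaches the final inverse gain 1/(1 + 1.0) = 0.5
```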
The following is a summary of a result from (Sanger 1992). Let a change from a1
to a2, and let P = Pa1 and P' = Pa2. Then under assumptions A1-A7 we can
prove convergence of Trajectory Extension Learning:
Theorem 3:
If there exists a kR such that, for a = a1, ‖Ru − Rũ‖ < kR ‖u − ũ‖,
then for a = a2
‖R'u' − R'ũ'‖ < kR ‖u' − ũ‖ + (2ka + kN kP) |a2 − a1|.
This shows that given the smoothness assumptions and a small enough change in
a, the error will continue to decrease.
Sanger
3 EXAMPLE
Figure 2 shows the result of 15 learning trials performed by a real direct-drive two-joint robot arm on a sampled desired trajectory. The initial trial required 11.5
seconds to execute, and the speed was gradually increased until the final trial required only 4.5 seconds. Simulated equilibrium point control was used (Bizzi et
al. 1984) with stiffness and damping coefficients of 15 nm/rad and 1.5 nm/rad/sec,
respectively. The grey line in figure 2 shows the equilibrium point control signal
which generated the actual movement represented by the solid line. The difference
between these two indicates the nontrivial nature of the dynamics calculations required to derive the control signal from the desired trajectory. Note that without
Trajectory Extension Learning, the network does not converge and the arm becomes
unstable. The neural network was an LMS tree (Sanger 1991b, Sanger 1991a) with
10 Gaussian basis functions for each of the 6 input dimensions, and a total of 15
subtrees were grown per joint (see (Sanger 1992) for further explanation).
4 CONCLUSION
Trajectory Extension Learning is one example of the way in which a practice strategy can be used to improve convergence for Learning Control. This or other types
of practice strategies might be able to increase the performance of many different
types of learning algorithms both within and outside the Control domain. Such
strategies may also provide a theoretical model for the practice strategies used by
humans to learn complex tasks, and the theoretical analysis and convergence conditions could potentially lead to a deeper understanding of human motor learning
and successful techniques for optimizing performance.
Acknowledgements
Thanks are due to Simon Giszter, Reza Shadmehr, Sandro Mussa-Ivaldi, Emilio
Bizzi, and many people at the NIPS conference for their comments and criticisms.
This report describes research done within the laboratory of Dr. Emilio Bizzi in the
department of Brain and Cognitive Sciences at MIT. The author was supported during this work by a National Defense Science and Engineering Graduate Fellowship,
and by NIH grants 5R37 AR26710 and 5ROINS09343 to Dr. Bizzi.
References
An C. H., Atkeson C. G., Hollerbach J. M., 1988, Model-Based Control of a Robot
Manipulator, MIT Press, Cambridge, MA.
Arimoto S., Kawamura S., Miyazaki F., 1984, Bettering operation of robots by
learning, Journal of Robotic Systems, 1(2):123-140.
Atkeson C. G., 1989, Learning arm kinematics and dynamics, Ann. Rev. Neurosci.,
12:157-183.
Bizzi E., Accornero N., Chapple W., Hogan N., 1984, Posture control and trajectory
formation during arm movement, J. Neurosci, 4:2738-2744.
Sanger T. D., 1991a, A tree-structured adaptive network for function approximation
in high dimensional spaces, IEEE Trans. Neural Networks, 2(2):285-293.
Sanger T. D., 1991b, A tree-structured algorithm for reducing computation in
networks with separable basis functions, Neural Computation, 3(1):67-78.
Sanger T. D., 1992, Neural network learning control of robot manipulators using gradually increasing task difficulty, submitted to IEEE Trans. Robotics and
Automation.
Sanner R. M., Slotine J.-J. E., 1992, Gaussian networks for direct adaptive control,
IEEE Trans. Neural Networks, in press. Also MIT NSL Report 910303, 910503,
March 1991 and Proc. American Control Conference, Boston pages 2153-2159, June
1991.
Sastry S., Bodson M., 1989, Adaptive Control: Stability, Convergence, and Robustness, Prentice Hall, New Jersey.
Shadmehr R., 1990, Learning virtual equilibrium trajectories for control of a robot
arm, Neural Computation, 2:436-446.
Yabuta T., Yamada T., 1991, Learning control using neural networks, Proc. IEEE
Int'l ConJ. on Robotics and Automation, Sacramento, pages 740-745.
Figure 2: Dotted line is the desired trajectory, solid line is the actual trajectory,
and the grey line is the equilibrium point control trajectory.
Independence clustering (without a matrix)
Daniil Ryabko
INRIA Lille,
40 avenue de Halley, Villeneuve d'Ascq, France
[email protected]
Abstract
The independence clustering problem is considered in the following formulation:
given a set S of random variables, it is required to find the finest partitioning
{U1 , . . . , Uk } of S into clusters such that the clusters U1 , . . . , Uk are mutually
independent. Since mutual independence is the target, pairwise similarity measurements are of no use, and thus traditional clustering algorithms are inapplicable. The
distribution of the random variables in S is, in general, unknown, but a sample is
available. Thus, the problem is cast in terms of time series. Two forms of sampling
are considered: i.i.d. and stationary time series, with the main emphasis being on
the latter, more general, case. A consistent, computationally tractable algorithm for
each of the settings is proposed, and a number of fascinating open directions for
further research are outlined.
1
Introduction
Many applications face the situation where a set S = {x1 , . . . , xN } of samples has to be divided into
clusters in such a way that inside each cluster the samples are dependent, but the clusters between
themselves are as independent as possible. Here each xi may itself be a sample or a time series
xi = X1i , . . . , Xni . For example, in financial applications, xi can be a series of recordings of prices of
a stock i over time. The goal is to find the segments of the market such that different segments evolve
independently, but within each segment the prices are mutually informative [15, 17]. In biological
applications, each xi may be a DNA sequence, or may represent gene expression data [28, 20], or, in
other applications, an fMRI record [4, 13].
The staple approach to this problem in applications is to construct a matrix of (pairwise) correlations
between the elements, and use traditional clustering methods, e.g., linkage-based methods or k means
and its variants, with this matrix [15, 17, 16]. If mutual information is used, it is used as a (pairwise)
proximity measure between individual inputs, e.g. [14].
We remark that pairwise independence is but a surrogate for (mutual) independence, and, in addition,
correlation is a surrogate for pairwise independence. There is, however, no need to resort to surrogates
unless forced to do so by statistical or computational hardness results. We therefore propose to
reformulate the problem from the first principles, and then show that it is indeed solvable both
statistically and computationally ? but calls for completely different algorithms. The formulation
proposed is as follows.
Given a set S = (x1 , . . . , xN ) of random variables, it is required to find the finest partitioning
{U1 , . . . , Uk } of S into clusters such that the clusters U1 , . . . , Uk are mutually independent.
To our knowledge, this problem in its full generality has not been addressed before. A similar
informal formulation appears in the work [1] that is devoted to optimizing a generalization of the
ICA objective. However, the actual problem considered only concerns the case of tree-structured
dependence, which allows for a solution based on pairwise measurements of mutual information.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Note that in the fully general case pairwise measurements are useless, as are, furthermore, bottom-up
(e.g., linkage-based) approaches. Thus, in particular, a proximity matrix cannot be used for the
analysis. Indeed, it is easy to construct examples in which any pair or any small group of elements
are independent, but are dependent when the same group is considered jointly with more elements.
For instance, consider a group of Bernoulli 1/2-distributed random variables x1 , . . . , xN +1 , where
x1 , . . . , xN are i.i.d. and xN+1 = Σ_{i=1}^{N} xi mod 2. Note that any N out of these N + 1 random
variables are i.i.d., but together the N + 1 are dependent. Add then two more groups like this, say,
y1 , . . . , yN +1 and z1 , . . . , zN +1 that have the exact same distribution, with the groups of x, y and z
mutually independent. Naturally, these are the three clusters we would want to recover. However, if
we try to cluster the union of the three, then any algorithm based on pairwise correlations will return
an essentially arbitrary result. What is more, if we try to find clusters that are pairwise independent,
then, for example, the clustering {(xi , yi , zi )i=1..N } of the input set into N + 1 clusters appears
correct, but, in fact, the resulting clusters are dependent. Of course, real-world data does not come
in the form of summed up Bernoulli variables, but this simple example shows that considering
independence of small subsets may be very misleading.
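The example above can be checked by exact enumeration; a small sketch with N = 2, so that x3 = x1 XOR x2:

```python
from itertools import product

# Three Bernoulli(1/2) bits with x3 = x1 XOR x2: every pair is independent,
# yet the three together are dependent (enumerated exactly, no sampling).
dist = {}
for x1, x2 in product((0, 1), repeat=2):
    outcome = (x1, x2, x1 ^ x2)
    dist[outcome] = dist.get(outcome, 0.0) + 0.25

def marginal(idx):
    m = {}
    for outcome, p in dist.items():
        key = tuple(outcome[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

# Pairwise: P(xi, xj) factorizes for every pair and every value.
for i, j in ((0, 1), (0, 2), (1, 2)):
    mij, mi, mj = marginal((i, j)), marginal((i,)), marginal((j,))
    assert all(abs(p - mi[(a,)] * mj[(b,)]) < 1e-12
               for (a, b), p in mij.items())

# Jointly: P(x1=0, x2=0, x3=1) = 0, while the product of marginals is 1/8.
assert (0, 0, 1) not in dist
```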
The considered problem is split into two parts considered separately: the computational and the
statistical part. This is done by first considering the problem assuming the joint distribution of
all the random variables is known, and is accessible via an oracle. Thus, the problem becomes
computational. A simple, computationally efficient algorithm is proposed for this case. We then
proceed to the time-series formulations: the distribution of (x1 , . . . , xN ) is unknown, but a sample
(X11 , . . . , X1N ), . . . , (Xn1 , . . . , XnN ) is provided, so that xi can be identified with the time series
X1i , . . . , Xni . The sample may be either independent and identically distributed (i.i.d.), or, in a more
general formulation, stationary. As one might expect, relying on the existing statistical machinery, the
case of known distributions can be directly extended to the case of i.i.d. samples. Thus, we show that
it is possible to replace the oracle access with statistical tests and estimators, and then use the same
algorithm as in the case of known distributions. The general case of stationary samples turns out
to be much more difficult, in particular because of a number of strong impossibility results. In fact,
it is challenging already to determine what is possible and what is not from the statistical point of
view. In this case, it is not possible to replicate the oracle access to the distribution, but only its weak
version that we call fickle oracle. We find that, in this case, it is only possible to have a consistent
algorithm for the case of known k. An algorithm that has this property is constructed. This algorithm
is computationally feasible when the number of clusters k is small, as its complexity is O(N^{2k}).
Besides, a measure of information divergence is proposed for time-series distributions that may be
of independent interest, since it can be estimated consistently without any assumptions at all on the
distributions or their densities (the latter may not exist).
The main results of this work are theoretical. The goal is to determine, as a first step, what is
possible and what is not from both statistical and computational points of view. The main emphasis
is placed on highly dependent time series, as warranted by the applications cited above, leaving
experimental investigations for future work. The contribution can be summarized as follows:
- a consistent, computationally feasible algorithm for known distributions, unknown number
of clusters, and an extension to the case of unknown distributions and i.i.d. samples;
- an algorithm that is consistent under stationary ergodic sampling with arbitrary, unknown
distributions, but with a known number k of clusters;
- an impossibility result for clustering stationary ergodic samples with k unknown;
- an information divergence measure for stationary ergodic time-series distributions along
with its estimator that is consistent without any extra assumptions;
In addition, an array of open problems and exciting directions for future work is proposed.
Related work. Apart from the work on independence clustering mentioned above, it is worth pointing
out the relation to some other problems. First, the proposed problem formulation can be viewed
as a Bayesian-network learning problem: given an unknown network, it is required to split it into
independent clusters. In general, learning a Bayesian network is NP-hard [5], even for rather restricted
classes of networks (e.g., [18]). Here the problem we consider is much less general, which is why it
admits a polynomial-time solution. A related clustering problem, proposed in [23] (see also [12]) is
clustering time series with respect to distribution. Here, it is required to put two time series samples
x1 , x2 into the same cluster if and only if their distribution is the same. Similar to the independence
clustering introduced here, this problem admits a consistent algorithm if the samples are i.i.d. (or
mixing) and the number of distributions (clusters) is unknown, and in the case of stationary ergodic
samples if and only if k is known.
2 Set-up and preliminaries
A set S := {x1 , . . . , xN } is given, where we will either assume that the joint distribution of xi is
known, or else that the distribution is unknown but a sample (X11 , . . . , Xn1 ), . . . , (X1N , . . . , XnN ) is
given. In the latter case, we identify each xi with the sequence (sample) X^i_1 , . . . , X^i_n , or X^i_{1..n} for
short, of length n. The lengths of the samples are the same only for the sake of notational convenience;
it is easy to generalize all algorithms to the case of different sample lengths ni , but the asymptotic
would then be with respect to n := min_{i=1..N} ni . It is assumed that X^i_j ∈ X := R are real-valued,
but extensions to more general cases are straightforward.
For random variables A, B, C we write (A ⊥ B) | C to say that A is conditionally independent of B
given C, and A ⊥ B ⊥ C to say that A, B and C are mutually independent.
The (unique up to a permutation) partitioning U := {U1 , . . . , Uk } of the set S is called the ground-truth
clustering if U1 , . . . , Uk are mutually independent (U1 ⊥ · · · ⊥ Uk ) and no refinement of U
has this property. A clustering algorithm is consistent if it outputs the ground-truth clustering, and
it is asymptotically consistent if w.p. 1 it outputs the ground-truth clustering from some n on.
For a discrete A-valued r.v. X its Shannon entropy is defined as H(X) := −Σ_{a∈A} P(X = a) log P(X = a),
letting 0 log 0 = 0. For a distribution with a density f its (differential) entropy is
defined as H(X) := −∫ f(x) log f(x). For two random variables X, Y their mutual information
I(X, Y) is defined as I(X, Y) = H(X) + H(Y) − H(X, Y). For discrete random variables, as well
as for continuous ones with a density, X ⊥ Y if and only if I(X, Y) = 0; see, e.g., [6]. Likewise,
I(X1 , . . . , Xm ) is defined as Σ_{i=1..m} H(Xi ) − H(X1 , . . . , Xm ).
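For finite-valued variables these definitions amount to a short computation; the following sketch (illustrative, not from the paper) verifies the two boundary cases:

```python
from math import log2

def entropy(p):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def mutual_information(joint):
    """I(X, Y) = H(X) + H(Y) - H(X, Y) for a joint {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), q in joint.items():
        px[x] = px.get(x, 0.0) + q
        py[y] = py.get(y, 0.0) + q
    return entropy(px) + entropy(py) - entropy(joint)

# Independent fair bits: I(X, Y) = 0.
ind = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(mutual_information(ind))        # 0.0
# Fully dependent bits (Y = X): I(X, Y) = H(X) = 1 bit.
dep = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(dep))        # 1.0
```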
For the sake of convenience, in the next two sections we make the assumption stated below. However,
we will show (Sections 5,6) that this assumption can be gotten rid of as well.
Assumption 1. All distributions in question have densities bounded away from zero on their support.
3 Known distributions
As with any statistical problem, it is instructive to start with the case where the (joint) distribution of
all the random variables in question is known. Finding out what can be done and how to do it in this
case helps us to set the goals for the (more realistic) case of unknown distributions.
Thus, in this section, x1 , . . . , xN are not time series, but random variables whose joint distribution is
known to the statistician. The access to this distribution is via an oracle; specifically, our oracle will
provide answers to the following questions about mutual information (where, for convenience, we
assume that the mutual information with the empty set is 0):
Oracle TEST. Given sets of random variables A, B, C, D ⊆ {x1 , . . . , xN }, answer whether
I(A, B) > I(C, D).
Remark 1 (Conditional independence oracle). Equivalently, one can consider an oracle that answers
conditional independence queries of the form (A ⊥ B) | C. The definition above is chosen for the sake
of continuity with the next section, and it also makes the algorithm below more intuitive. However, in
order to test conditional independence statistically one does not have to use mutual information, but
may resort to any other divergence measure instead.
The proposed algorithm (see the pseudocode listing below) works as follows. It attempts to split the
input set recursively into two independent clusters, until it is no longer possible. To split a set in
two, it starts with putting one element x from the input set S into a candidate cluster C := {x}, and
measures its mutual information I(C, R) with the rest of the set, R := S \ C. If I(C, R) is already 0
then we have split the set into two independent clusters and can stop. Otherwise, the algorithm then
takes the elements out of R one by one without replacement and each time looks whether I(C, R)
has decreased. Once such an element is found, it is moved from R to C and the process is restarted
from the beginning with C thus updated. Note that, if we have started with I(C, R) > 0, then taking
elements out of R without replacement we eventually should find one that decreases I(C, R), since
I(C, ∅) = 0 and I(C, R) cannot increase in the process.
Theorem 1. The algorithm CLIN outputs the correct clustering using at most 2kN^2 oracle calls.
Proof. We shall first show that the procedure for splitting a set into two indeed splits the input set into
two independent sets, if and only if such two sets exist. First, note that if I(C, S \ C) = 0 then C ⊥ R
and the function terminates. In the opposite case, when I(C, S \ C) > 0, by removing an element
from R := S \ C, I(C, R) can only decrease (indeed, h(C|R) ≤ h(C|R \ {x}) by the information
processing inequality). Eventually, when all elements are removed, I(C, R) = I(C, ∅) = 0, so
there must be an element x whose removal decreases I(C, R). When such an element x is found it is
moved to C. Note that, in this case, x is indeed not independent of C. However, it is possible that
removing an element x from R does not reduce I(C, R), yet x is not independent of C. This is why
the while loop is needed, that is, the whole
process has to be repeated until no elements can be moved to C. By the end of each for loop, we
have either found at least one element to move to C, or we have assured that C ⊥ S \ C and the
loop terminates. Since there are only finitely many elements in S \ C, the while loop eventually
terminates. Moreover, each of the two loops (while and for) terminates in at most N iterations.
Finally, notice that if (C1 , C2 ) ⊥ C3 and C1 ⊥ C2 then also C1 ⊥ C2 ⊥ C3 , which means that by
repeating the Split function recursively we find the correct clustering.
From the above, the bound on the number of oracle calls is easily obtained by direct calculation.
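The algorithm can be sketched in Python as follows. The oracle below is a hypothetical stand-in for the TEST oracle: it counts ground-truth clusters straddling A and B, which is zero iff A and B are independent and monotone under removals; these are the only properties the algorithm relies on here.

```python
def split(S, I):
    """Return (C, R) with C independent of R (R may be empty).
    Adapted from the Split routine of CLIN; I is the MI oracle."""
    S = list(S)
    C = {S[0]}
    while True:
        R = set(S) - C
        if I(C, R) == 0:                 # C is independent of the rest
            return C, R
        scan, moved = R, False
        while scan and not moved:
            x = sorted(scan)[0]          # fixed order, for determinism
            if I(C, scan) > I(C, scan - {x}):
                C.add(x)                 # removing x lowers I: x depends on C
                moved = True
            scan = scan - {x}            # either way, x leaves this scan

def clin(S, I):
    """Recursively split S into mutually independent clusters (k unknown)."""
    C, R = split(S, I)
    return [C] if not R else clin(sorted(C), I) + clin(sorted(R), I)

truth = [{0, 1}, {2, 3}, {4}]            # hypothetical dependence structure
def I(A, B):
    return sum(1 for g in truth if g & set(A) and g & set(B))

print(clin(range(5), I))                 # recovers the clusters {0,1}, {2,3}, {4}
```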
4 I.I.D. sampling
In this section we assume that the distribution of (x1 , . . . , xN ) is not known, but an i.i.d. sample
(X^1_1 , . . . , X^N_1 ), . . . , (X^1_n , . . . , X^N_n ) is provided. We identify xi with the (i.i.d.) time series
X^i_{1..n}. Formally, N X-valued processes is just a single X^N-valued process. The latter can be seen
as a matrix (X^i_j)_{i=1..N, j=1..∞}, where each row i is the sample xi = X^i_{1..n} and each column j
is what is observed at time j: X^1_j .. X^N_j.

Figure 1: CLIN: cluster with k unknown, given an oracle for MI
INPUT: The set S.
(C1 , C2 ) := Split(S)
if C2 ≠ ∅ then
Output: CLIN(C1), CLIN(C2)
else
Output: C1
end if
Function Split(Set S of samples)
Initialize: C := {x1}, R := S \ C;
while TEST(I(C; R) > 0) do
for each x ∈ R do
if TEST(I(C; R) > I(C; R \ {x})) then
move x from R to C
break the for loop
else
move x from R to M
end if
end for
M := {}, R := S \ C;
end while
Return (C, R)
END function

The case of i.i.d. samples is not much different from the case of a known distribution. What we need
is to replace the oracle test with (nonparametric) statistical tests. First, a test for independence is
needed to replace the oracle call TEST(I(C, R) > 0) in the while loop. Such tests are indeed available,
see, for example, [8]. Second, we need an estimator of mutual information I(X, Y), or, which is
sufficient, for entropies, but with a rate of convergence. If the rate of convergence is known to be
asymptotically bounded by, say, t(n), then, in order to construct an asymptotically consistent test,
we can take any t0(n) → 0 such that t(n) = o(t0(n)) and decide our inequality as follows: if
Î(C; R \ {x}) < Î(C; R) − t0(n) then say that I(C; R \ {x}) < I(C; R). The required rates of
convergence, which are of order √n under Assumption 1, can be found in [3].

Given the result of the previous section, it is clear that if the oracle is replaced by the tests described,
then CLIN is a.s. consistent. Thus, we have demonstrated the following.
Theorem 2. Under Assumption 1, there is an asymptotically consistent algorithm for independence clustering with i.i.d. sampling.
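The thresholding construction behind this test can be illustrated as follows (the numbers and the particular choice t0(n) = n^{-1/4} are illustrative assumptions of this sketch):

```python
def decide_decrease(I_hat_full, I_hat_reduced, n):
    """Decide whether I(C; R \\ {x}) < I(C; R) from plug-in estimates whose
    error is of order t(n) = O(1/sqrt(n)): use a slower-vanishing threshold
    t0(n) with t(n) = o(t0(n)). Illustrative sketch."""
    t0 = n ** -0.25          # any t0 with 1/sqrt(n) = o(t0(n)) would do
    return I_hat_reduced < I_hat_full - t0

# With n = 10**4 the threshold is t0 = 0.1: an estimated drop of 0.3 is
# declared significant, while a drop of 0.05 is within estimation noise.
print(decide_decrease(0.8, 0.5, 10**4))    # True
print(decide_decrease(0.8, 0.75, 10**4))   # False
```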
Remark 2 (Necessity of the assumption). The independence test of [8] does not need Assumption 1,
as it is distribution-free. Since the mutual information is defined in terms of densities, if we want
to completely get rid of Assumption 1, we would need to use some other measure of dependence
for the test. One such measure is defined in the next section already for the general case of process
distributions. However, the rates of convergence of its empirical estimates under i.i.d. sampling
remain to be studied.
Remark 3 (Estimators vs. tests). As noted in Remark 1, the tests we are using are, in fact, tests
for (conditional) independence: testing I(C; R) > I(C; R \ {x}) is testing for (C ⊥ {x}) | (R \
{x}). Conditional independence can be tested directly, without estimating I (see, for example, [27]),
potentially allowing for tighter performance guarantees under more general conditions.
5 Stationary sampling
We now enter the hard mode. The general case of stationary sampling presents numerous obstacles,
often in the form of theoretical impossibility results: there are (provably) no rates of convergence,
no independence test, and 0 mutual information rate does not guarantee independence. Besides,
some simple-looking questions regarding the existence of consistent tests, which indeed have simple
answers in the i.i.d. case, remain open in the stationary ergodic case. Despite all this, a computationally
feasible asymptotically consistent independence clustering algorithm can be obtained, although only
for the case of a known number of clusters. This parallels the situation of clustering according to the
distribution [23, 12].
In this section we assume that the distribution of (x1 , . . . , xN ) is not known, but a jointly stationary
ergodic sample (X^1_1 , . . . , X^N_1 ), . . . , (X^1_n , . . . , X^N_n ) is provided. Thus, xi is a stationary ergodic time
series X^i_{1..n}. Here is also where we drop Assumption 1; in particular, densities do not have to exist.
This new relaxed set of assumptions can be interpreted as using a weaker oracle, as explained in
Remark 5 below.
We start with preliminaries about stationary processes, followed by impossibility results, and concluding with an algorithm for the case of known k.
5.1 Preliminaries: stationary ergodic processes
Stationary, ergodicity, information rate. (Time-series) distributions, or processes, are measures
on the space (X^∞, F_{X^∞}), where F_{X^∞} is the Borel sigma-algebra of X^∞. Recall that N X-valued
processes is just a single X^N-valued process. So the distributions are on the space ((X^N)^∞, F_{(X^N)^∞});
this will be often left implicit. For a sequence x ∈ X^n and a set B ∈ B denote ν(x, B) the
frequency with which the sequence x falls in the set B. A process ρ is stationary if ρ(X_{1..|B|} = B) =
ρ(X_{t..t+|B|−1} = B) for any measurable B ∈ X^∗ and t ∈ N. We further abbreviate
ρ(B) := ρ(X_{1..|B|} = B). A stationary process ρ is called (stationary) ergodic if the frequency of
occurrence of each measurable B ∈ X^∗ in a sequence X1 , X2 , . . . generated by ρ tends to its a priori
(or limiting) probability a.s.: ρ(lim_{n→∞} ν(X_{1..n}, B) = ρ(B)) = 1. By virtue of the ergodic theorem,
this definition can be shown to be equivalent to the more standard definition of stationary ergodic
processes given in terms of shift-invariant sets [26]. Denote S and E the sets of all stationary and
stationary ergodic processes correspondingly. The ergodic decomposition theorem for stationary
processes (see, e.g., 7) states that any stationary process can be expressed as a mixture of stationary
ergodic processes. That is, a stationary process ? can be envisaged as first selecting a stationary
ergodic distribution according to some measure W? over the set of all such distributions, and then
using this ergodic distribution to generate the sequence. More
R formally, for any ? ? S there is a
measure W? on (S, FS ), such that W? (E) = 1, and ?(B) = dW? (?)?(B), for any B ? FX ? .
For a stationary time series x, its m-order entropy hm(x) is defined as E_{X_{1..m−1}} h(Xm | X_{1..m−1}) (so
the usual Shannon entropy is the entropy of order 0). By stationarity, the limit lim_{m→∞} hm exists
and equals lim_{m→∞} (1/m) h(X_{1..m}) (see, for example, [6] for more details). This limit is called the
entropy rate and is denoted h∞. For l stationary processes xi = (X^i_1 , . . . , X^i_n , . . . ), i = 1..l, the
m-order mutual information is defined as Im(x1 , . . . , xl ) := Σ_{i=1}^{l} hm(xi ) − hm(x1 , . . . , xl ) and the
mutual information rate is defined as the limit

I∞(x1 , . . . , xl ) := lim_{m→∞} Im(x1 , . . . , xl ).    (1)
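For finite alphabets, the m-order quantities can be estimated by plug-in from m-block frequencies, as in this sketch (illustrative; consistent estimation of the rate requires letting m grow with the sample length):

```python
from math import log2

# Plug-in estimates of m-block entropies and of the m-order mutual
# information I_m between two sequences (finite-alphabet sketch).
def block_entropy(seq, m):
    counts = {}
    for i in range(len(seq) - m + 1):
        b = tuple(seq[i:i + m])
        counts[b] = counts.get(b, 0) + 1
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def mi_m(x, y, m):
    joint = list(zip(x, y))
    return block_entropy(x, m) + block_entropy(y, m) - block_entropy(joint, m)

x = [0, 1] * 500
print(mi_m(x, x[:], 2))          # maximal: equals the 2-block entropy of x
print(mi_m(x, [0] * 1000, 2))    # zero: a constant sequence is uninformative
```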
Discretisations and a metric. For each m, l ∈ N, let B^{m,l} be a partitioning of X^m into 2^l sets such
that jointly they generate F_m of X^m, i.e. σ(∪_{l∈N} B^{m,l}) = F_m. The distributional distance between a
pair of process distributions ρ1, ρ2 is defined as follows [7]:

d(ρ1, ρ2) = Σ_{m,l=1}^{∞} w_m w_l Σ_{B∈B^{m,l}} |ρ1(B) − ρ2(B)|,    (2)
where we set $w_j := 1/j(j+1)$, but any summable sequence of positive weights may be used.
As shown in [22], empirical estimates of this distance are asymptotically consistent for arbitrary
stationary ergodic processes. These estimates are used in [23, 12] to construct time-series clustering
algorithms for clustering with respect to distribution. Here we will only use this distance in the
impossibility results. Based on these ideas, Györfi [9] suggested using a similar construction for
studying independence, namely $\hat d(\rho_1, \rho_2) = \sum_{m,l=1}^{\infty} w_m w_l \sum_{A,B \in B^{m,l}} |\rho_1(A)\rho_2(B) - \rho(A \times B)|$,
where $\rho_1$ and $\rho_2$ are the two marginals of a process $\rho$ on pairs, and noted that its empirical estimates
are asymptotically consistent. The distance we will use is similar, but is based on mutual information.
5.2 Impossibility results
First of all, while the definition of ergodic processes guarantees convergence of frequencies to the
corresponding probabilities, this convergence can be arbitrarily slow [26]: there are no meaningful
bounds on $|\nu(X_{1..n}, 0) - \rho(X_1 = 0)|$ in terms of n for ergodic $\rho$. This means that we cannot use
tests for (conditional) independence of the kind employed in the i.i.d. case (Section 4).
Thus, the first question we have to pose is whether it is possible to test independence, that is, to say
whether $\mathbf{x}_1 \perp \mathbf{x}_2$ based on stationary ergodic samples $X^1_{1..n}, X^2_{1..n}$. Here we show that the answer
in a certain sense is negative, but some important questions remain open.
An (independence) test $\psi$ is a function that takes two samples $X^1_{1..n}, X^2_{1..n}$ and a parameter $\alpha \in (0, 1)$,
called the confidence level, and outputs a binary answer: independent or not. A test $\psi$ is $\alpha$-level
consistent if, for every stationary ergodic distribution $\rho$ over a pair of samples $(X^1_{1..n..}, X^2_{1..n..})$, for
every confidence level $\alpha$, $\rho(\psi_\alpha(X^1_{1..n}, X^2_{1..n}) = 1) < \alpha$ if the marginal distributions of the samples
are independent, and $\psi_\alpha(X^1_{1..n}, X^2_{1..n})$ converges to 1 as $n \to \infty$ with $\rho$-probability 1 otherwise.
The next proposition can be established using the criterion of [25]. Recall that, for $\rho \in S$, the measure
$W_\rho$ over E is its ergodic decomposition. The criterion states that there is an $\alpha$-level consistent test for
$H_0$ against $E \setminus H_0$ if and only if $W_\rho(H_0) = 1$ for every $\rho \in \operatorname{cl} H_0$.
Proposition 1. There is no $\alpha$-level consistent independence test (for jointly stationary ergodic samples).
Proof. The example is based on the so-called translation process, constructed as follows. Fix
some irrational $\alpha \in (0, 1)$ and select $r_0 \in [0, 1]$ uniformly at random. For each $i = 1..n..$ let
$r_i = (r_{i-1} + \alpha) \bmod 1$ (the previous element is shifted by $\alpha$ to the right, considering the [0,1]
interval looped). The samples $X_i$ are obtained from $r_i$ by thresholding at 1/2, i.e. $X_i := \mathbb{I}\{r_i > 0.5\}$
(here $r_i$ can be considered hidden states). This process is stationary and ergodic; besides, it has 0
entropy rate [26], and this is not the last of its peculiarities. Take now two independent copies of this
process to obtain a pair $(\mathbf{x}_1, \mathbf{x}_2) = (X^1_1, X^2_1, \dots, X^1_n, X^2_n, \dots)$. The resulting process on pairs, which
we denote $\rho$, is stationary, but it is not ergodic. To see the latter, observe that the difference between
the corresponding hidden states remains constant. In fact, each initial state $(r_1, r_2)$ corresponds to
an ergodic component of our process on pairs. By the same argument, these ergodic components
are not independent. Thus, we have taken two independent copies of a stationary ergodic process,
and obtained a stationary process which is not ergodic and whose ergodic components are pairs of
processes that are not independent! To apply the criterion cited above, it remains to show that the
process $\rho$ we constructed can be obtained as a limit of stationary ergodic processes on pairs. To see
this, consider, for each $\varepsilon$, a process $\rho_\varepsilon$ whose construction is identical to $\rho$ except that instead of
shifting the hidden states by $\alpha$ we shift them by $\alpha + u^\varepsilon_i$ where $u^\varepsilon_i$ are i.i.d. uniformly random on
$[-\varepsilon, \varepsilon]$. It is easy to see that $\lim_{\varepsilon\to 0} \rho_\varepsilon = \rho$ in distributional distance, and all $\rho_\varepsilon$ are stationary ergodic.
Thus, if $H_0$ is the set of all stationary ergodic distributions on pairs, we have found a distribution
$\rho \in \operatorname{cl} H_0$ such that $W_\rho(H_0) = 0$.
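The translation process from this proof is easy to simulate; the following sketch (illustrative code, not from the paper) generates one copy, and two independent runs give a sample from the pair process $\rho$:

```python
import random

def translation_process(n, alpha, r0=None, rng=random):
    """Rotation-by-alpha process: r_i = (r_{i-1} + alpha) mod 1, with the
    observation X_i = 1 if r_i > 0.5 else 0. In the proof alpha is an
    irrational number; any float only approximates that here."""
    r = rng.random() if r0 is None else r0
    out = []
    for _ in range(n):
        r = (r + alpha) % 1.0
        out.append(1 if r > 0.5 else 0)
    return out

# two independent copies, as in the construction of the pair process
alpha = (5 ** 0.5 - 1) / 2  # golden-ratio conjugate, irrational
x1 = translation_process(1000, alpha)
x2 = translation_process(1000, alpha)
```

Each copy is binary with frequency of ones close to 1/2 (the orbit equidistributes on [0,1]), yet the process has zero entropy rate: given $r_0$, the whole sequence is deterministic.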
Thus, there is no consistent test that could provide a given level of confidence under H0 , even if
only asymptotic consistency is required under H1 . However, a yet weaker notion of consistency
might suffice to construct asymptotically consistent clustering algorithms. Namely, we could ask
for a test whose answer converges to either 0 or 1 according to whether the distributions generating
the samples are independent or not. Unfortunately, it is not known whether a test consistent in this
weaker sense exists or not. I conjecture that it does not. The conjecture is based not only on the
result above, but also on the result of [24] that shows that there is no such test for the related problem
of homogeneity testing, that is, for testing whether two given samples have the same or different
distributions. This negative result holds even if the distributions are independent, binary-valued, the
difference is restricted to $P(X_0 = 0)$, and, finally, for a smaller family of processes (B-processes).
Thus, for now what we can say is that there is no test for independence available that would be
consistent under ergodic sampling. Therefore, we cannot distinguish even between the cases of 1 and
2 clusters. Thus, in the following it is assumed that the number of clusters k is given.
The last problem we have to address is mutual information for processes. The analogue of mutual
information for stationary processes is the mutual information rate (1). Unfortunately, 0 mutual
information rate does not imply independence. This is manifest on processes with 0 entropy rate, for
example those of the example in the proof of Proposition 1. What happens is that, if two processes
are dependent, then indeed at least one of the m-order mutual informations $I_m$ is non-zero, but the limit may
still be zero. Since we do not know in advance which Im to take, we will have to consider all of them,
as is explained in the next subsection.
5.3 Clustering with the number of clusters known
The quantity introduced below, which we call sum-information, will serve as an analogue of mutual
information in the i.i.d. case, allowing us to get around the problem that the mutual information
rate may be 0 for a pair of dependent stationary ergodic processes. Defined in the same vein as the
distributional distance (2), this new quantity is a weighted sum over all the mutual informations up
to time n; in addition, all the individual mutual informations are computed for quantized versions
of random variables in question, with decreasing cell size of quantization, keeping all the mutual
information resulting from different quantizations. The latter allows us not to require the existence
of densities. Weighting is needed in order to be able to obtain consistent empirical estimates of the
theoretical quantity under study.
First, for a process $\mathbf{x} = (X_1, \dots, X_n, \dots)$ and for each $m, l \in \mathbb{N}$, define the l-th quantized version
$[X_{1..m}]^l$ of $X_{1..m}$ as the index of the cell of $B^{m,l}$ to which $X_{1..m}$ belongs. Recall that each of the
partitions $B^{m,l}$ has $2^l$ cells, and that $w_l := 1/l(l+1)$.
Definition 1 (sum-information). For stationary $\mathbf{x}_1..\mathbf{x}_N$ define the sum-information

$$^sI(\mathbf{x}_1, \dots, \mathbf{x}_N) := \sum_{m=1}^{\infty} \frac{w_m}{m} \sum_{l=1}^{\infty} \frac{w_l}{l} \left( \sum_{i=1}^{N} h([X^i_{1..m}]^l) - h([X^1_{1..m}]^l, \dots, [X^N_{1..m}]^l) \right) \qquad (3)$$
The next lemma follows from the fact that $\cup_{l\in\mathbb{N}} B^{m,l}$ generates $F_m$ and $\cup_{m\in\mathbb{N}} F_m$ generates $F_\infty$.
Lemma 1. $^sI(\mathbf{x}_1, \dots, \mathbf{x}_N) = 0$ if and only if $\mathbf{x}_1, \dots, \mathbf{x}_N$ are mutually independent.
The empirical estimates $\hat h_n([X^i_{1..m}]^l)$ of entropy are defined by replacing unknown probabilities by
frequencies; the estimate $^s\hat I_n(\mathbf{x}_1, \dots, \mathbf{x}_N)$ of $^sI$ is obtained by replacing $h$ in (3) with $\hat h$.
Remark 4 (Computing $^s\hat I_n$). The expression (3) might appear to hint at a computational disaster, as
it involves two infinite sums, and, in addition, the number of elements in the sum inside $h([\cdot]^l)$ grows
exponentially in l. However, it is easy to see that, when we replace the probabilities with frequencies,
all but a finite number of summands are either zero or can be collapsed (because they are constant).
Moreover, the sums can be further truncated so that the total computation becomes quasilinear in n.
This can be done exactly the same way as for distributional distance, as described in [12, Section 5].
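For discrete-valued data the quantization over l is unnecessary (the data is already "quantized"), and truncating the sum over m yields a simple computable estimate. The following is an illustrative simplification of (3), my own sketch rather than the authors' exact estimator:

```python
import math
from collections import Counter

def _entropy(blocks):
    """Empirical entropy (bits) of a list of hashable symbols."""
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n) for c in Counter(blocks).values())

def empirical_sum_information(seqs, max_m=3):
    """Truncated empirical sum-information for discrete sequences:
    weighted sum over m of (sum of marginal m-block entropies minus the
    joint m-block entropy), with weights w_m = 1/m(m+1)."""
    total = 0.0
    for m in range(1, max_m + 1):
        w = 1.0 / (m * (m + 1))
        n = min(len(s) for s in seqs) - m + 1
        marg = sum(_entropy([tuple(s[i:i + m]) for i in range(n)]) for s in seqs)
        joint = _entropy([tuple(tuple(s[i:i + m]) for s in seqs) for i in range(n)])
        total += w * (marg - joint)
    return total
```

The estimate is zero exactly when all truncated m-block dependencies vanish, and strictly positive for, e.g., two identical non-constant sequences.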
The following lemma can be proven analogously to the corresponding statement about consistency of
empirical estimates of the distributional distance, given in [22, Lemma 1].
Lemma 2. Let the distribution $\rho$ of $\mathbf{x}_1, \dots, \mathbf{x}_N$ be jointly stationary ergodic. Then
$^s\hat I_n(\mathbf{x}_1, \dots, \mathbf{x}_N) \to {^sI}(\mathbf{x}_1, \dots, \mathbf{x}_N)$ $\rho$-a.s.
This lemma alone is enough to establish the existence of a consistent clustering algorithm. To see this,
first consider the following problem, which is the ?independence? version of the classical statistical
three-sample problem.
The 3-sample-independence problem. Three samples $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ are given, and it is known that
either $(\mathbf{x}_1, \mathbf{x}_2) \perp \mathbf{x}_3$ or $\mathbf{x}_1 \perp (\mathbf{x}_2, \mathbf{x}_3)$ but not both. It is required to find out which one is the case.
Proposition 2. There exists an algorithm for solving the 3-sample-independence problem that is
asymptotically consistent under ergodic sampling.
Indeed, it is enough to consider an algorithm that compares $^s\hat I_n((\mathbf{x}_1, \mathbf{x}_2), \mathbf{x}_3)$ and $^s\hat I_n(\mathbf{x}_1, (\mathbf{x}_2, \mathbf{x}_3))$
and answers according to whichever is smaller.
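A minimal sketch of this comparison (my own illustration; `sum_info` stands for some estimator of the sum-information of a list of sequences, and a grouped pair of samples is merged into one tuple-valued sequence):

```python
def solve_3sample_independence(x1, x2, x3, sum_info):
    """Decide between (x1,x2) _|_ x3 and x1 _|_ (x2,x3) by comparing the
    estimated sum-information of the two groupings and picking the
    grouping with the smaller estimate."""
    a = sum_info([list(zip(x1, x2)), list(x3)])  # hypothesis (x1,x2) _|_ x3
    b = sum_info([list(x1), list(zip(x2, x3))])  # hypothesis x1 _|_ (x2,x3)
    return "(x1,x2) _|_ x3" if a < b else "x1 _|_ (x2,x3)"
```

Consistency of the estimator (Lemma 2) makes the smaller of the two estimates converge to 0 under the true grouping, so the comparison is eventually correct.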
The independence clustering problem which we are after is a generalisation of the 3-sample-independence
problem to N samples. We can also have a consistent algorithm for the clustering
problem, simply comparing all possible clusterings $U_1, \dots, U_k$ of the N samples given and selecting
whichever minimizes $^s\hat I_n(U_1, \dots, U_k)$. Such an algorithm is of course not practical, since the number
of computations it makes must be exponential in N and k. We will show that the number of candidate
clusterings can be reduced dramatically, making the problem amenable to computation.
The proposed algorithm CLINk (Algorithm 2 below) works similarly to CLIN, but with some important
differences. Like before, the main procedure is to attempt to split the given set of samples into two
clusters. This splitting procedure starts with a single element $\mathbf{x}_1$ and estimates its sum-information
$^s\hat I(\mathbf{x}_1, R)$ with the rest of the elements, R. It then takes the elements out of R one by one without
replacement, each time measuring how this changes $^s\hat I(\mathbf{x}_1, R)$. As before, once and if we find an
element that is not independent of $\mathbf{x}_1$, this change will be positive. However, unlike in the i.i.d. case,
here we cannot test whether this change is 0. Yet, we can say that if, among the tested elements, there
is one that gives a non-zero change in $^sI$, then one of such elements will be the one that gives the
maximal change in $^s\hat I$ (provided, of course, that we have enough data for the estimates $^s\hat I$ to be close
enough to the theoretical values $^sI$). Thus, we keep each split that arises from such a maximal-change
element, resulting in $O(N^2)$ candidate splits for the case of 2 clusters. For k clusters, we have to
consider all the combinations of the splits, resulting in $O(N^{2k-2})$ candidate clusterings. Then select
the one that minimizes $^s\hat I$.

Figure 2: CLINk: cluster given k and an estimator of mutual sum-information.
Consider all the clusterings obtained by applying recursively the function Split to each of the sets in
each of the candidate partitions, starting with the input set S, until k clusters are obtained. Output the
clustering U that minimizes $^s\hat I(U)$.
Function Split(Set S of samples)
  Initialize: C := {x_1}, R := S \ C, P := {}
  while R is not empty do
    Initialize: M := {}, d := 0; x_max := index of any x in R
    Add (C, R) to P
    for each x in R do
      r := $^s\hat I$(C, R)
      move x from R to M
      r' := $^s\hat I$(C, R); d' := r - r'
      if d' > d then
        d := d', x_max := index of(x)
      end if
    end for
    Move x_{x_max} from M to C; R := S \ C
  end while
  Return (List of candidate splits P)
END function

Theorem 3. CLINk is asymptotically consistent under ergodic sampling. This algorithm makes at
most $N^{2k-2}$ calls to the estimator of mutual sum-information.

Proof. The consistency of $^s\hat I$ (Lemma 2) implies that, for every $\varepsilon > 0$, from some n on w.p. 1, all the
estimates of $^sI$ the algorithm uses will be within $\varepsilon$ of their $^sI$ values. Since $^sI(U_1, \dots, U_k) = 0$ if and
only if $U_1, \dots, U_k$ is the correct clustering (Lemma 1), it is enough to show that, assuming all the
$^s\hat I$ estimates are close enough to the $^sI$ values, the clustering that minimizes $^s\hat I(U_1, \dots, U_k)$ is among
those the algorithm searches through, that is, among the clusterings obtained by applying recursively
the function Split to each of the sets in each of the candidate partitions, starting with the input set S,
until k clusters are obtained.
To see the latter, on each iteration of the while loop, we either already have a correct candidate
split in P, that is, a split $(U_1, U_2)$ such that $^sI(U_1, U_2) = 0$, or we find (executing the for loop) an
element $x'$ to add to the set C such that $C \not\perp x'$. Indeed, if at least one such element $x'$ exists, then
among all such elements there is one that maximizes the difference $d'$. Since the set C is initialized as
a singleton, a correct split is eventually found if it exists. Applying the same procedure exhaustively
to each of the elements of each of the candidate splits, producing all the combinations of k candidate
clusterings, under the assumption that all the estimates $^s\hat I$ are sufficiently close to the corresponding
values, we are guaranteed to have the one that minimizes $^sI(U_1, \dots, U_k)$ among the output.
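To make the Split procedure of Figure 2 concrete, here is a faithful Python sketch (my own implementation, not the authors' code; a group of sequences is merged into one tuple-valued sequence before the estimator `sum_info` is applied):

```python
def clink_split(samples, sum_info):
    """The Split procedure of CLINk over the indices of `samples`:
    grow C from {x_1} by repeatedly moving over the element whose removal
    from R causes the maximal drop in the estimated sum-information
    between C and R, recording every intermediate (C, R) in the
    candidate list P."""
    def group(idx):
        return list(zip(*[samples[i] for i in idx]))

    def si(C, R):
        return sum_info([group(C), group(R)]) if C and R else 0.0

    S = list(range(len(samples)))
    C, R, P = [S[0]], S[1:], []
    while R:
        P.append((list(C), list(R)))
        M, d, x_max = [], 0.0, R[0]
        while R:                        # "for each x in R"
            r = si(C, R)
            x = R.pop(0)                # move x from R to M
            M.append(x)
            d_x = r - si(C, R)
            if d_x > d:
                d, x_max = d_x, x
        C.append(x_max)                 # move x_max from M to C
        R = [i for i in S if i not in C]
    return P
```

Applying this recursively to the parts of each candidate split, and keeping the k-clustering with the smallest estimated sum-information, gives the full CLINk algorithm.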
Remark 5 (Fickle oracle). Another way to look at the difference between the stationary and the
i.i.d. cases is to consider the following "fickle" version of the oracle test of Section 3. Consider
the oracle that, as before, given sets of random variables $A, B, C, D \subseteq \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$, answers
whether $^sI(A, B) > {^sI}(C, D)$. However, the answer is only guaranteed to be correct in the case
$^sI(A, B) \neq {^sI}(C, D)$. If $^sI(A, B) = {^sI}(C, D)$ then the answer is arbitrary (and can be considered
adversarial). One can see that Lemma 2 guarantees the existence of the oracle that has the requisite
fickle correctness property asymptotically, that is, w.p. 1 from some n on. It is also easy to see that
Algorithm 2 can be rewritten in terms of calls to such an oracle.
6 Generalizations, future work
A general formulation of the independence clustering problem has been presented, and an attempt
has been made to trace out broadly the limits of what is possible and what is not possible in this
formulation. In doing so, clear-cut formulations have been favoured over utmost generality, and over,
on the other end of the spectrum, precise performance guarantees. Thus, many interesting questions
have been left out; some of these are outlined in this section.
Beyond time series. For the case when the distribution of the random variables $\mathbf{x}_i$ is unknown, we
have assumed that a sample $X^i_{1..n}$ is available for each $i = 1..N$. Thus, each $\mathbf{x}_i$ is represented by a
time series. A time series is but one form the data may come in. Other ways include functional data,
multi-dimensional- or continuous-time processes, or graphs. Generalizations to some of these models,
such as, for example, space-time stationary processes, are relatively straightforward, while others
require more care. Some generalizations to infinite stationary graphs may be possible along the lines
of [21]. In any case, the generalization problem is statistical (rather than algorithmic). If the number
of clusters is unknown, we need to be able to emulate the oracle test of Section 3 with
statistical tests. As explained in Section 4, it is sufficient to find a test for conditional independence,
or an estimator of entropy along with guarantees on its convergence rates. If these are not available,
as is the case of stationary ergodic samples, we can still have a consistent algorithm for k known,
as long as we have an asymptotically consistent estimator of mutual information (without rates), or,
more generally, if we can emulate the fickle oracle (Remark 5).
Beyond independence. The problem formulation considered rests on the assumption that there exists
a partition U1 , . . . , Uk of the input set S such that U1 , . . . , Uk are jointly independent, that is, such
that I(U1 , . . . , Uk ) = 0. In reality, perhaps, nothing is really independent, and so some relaxations
are in order. It is easy to introduce some thresholding in the algorithms (replacing 0 in each test by
some threshold $\varepsilon$) and derive some basic consistency guarantees for the resulting algorithms. The
general problem formulation is to find the finest clustering such that $I(U_1, \dots, U_k) \le \varepsilon$, for a given $\varepsilon$
(note that, unlike in the independence case of $\varepsilon = 0$, such a clustering may not be unique). If one
wants to get rid of $\varepsilon$, a tree of clusterings may be considered for all $\varepsilon \ge 0$, which is a common way to
treat unknown parameters in the clustering literature (e.g.,[2]). Another generalization can be obtained
by considering the problem from the graphical model point of view. The random variables xi are
vertices of a graph, and edges represent dependencies. In this representation, clusters are connected
components of the graph. A generalization then is to clusters that are the smallest components that
are connected (to each other) by at most l edges, where l is a parameter. Yet another generalization
would be to decomposable distributions of [10].
Performance guarantees. Non-asymptotic results (finite-sample performance guarantees) can be
obtained under additional assumptions, using the corresponding results on (conditional) independence
tests and on estimators of divergence between distributions. Here it is worth noting that we are
not restricted to using the mutual information I, but any measure of divergence can be used, for
example, Rényi divergence; a variety of relevant estimators and corresponding bounds, obtained
under such assumptions as Hölder continuity, can be found in [19, 11]. From any such bounds, at
least some performance guarantees for CLIN can be obtained simply using the union bound over all
the invocations of the tests.
Complexity. The algorithmic aspects of the problem have only been started upon in this work. Thus,
it remains to find out what is the computational complexity of the studied problem. So far, we have
presented only some upper bounds, by constructing algorithms and bounding their complexity ($kN^2$
for CLIN and $N^{2k}$ for CLINk). Lower bounds (and better upper bounds) are left for future work.
A subtlety worth noting is that, for the case of known distributions, the complexity may be affected
by the choice of the oracle. In other words, some calculations may be "pushed" inside the oracle.
In this regard, it may be better to consider the oracle for testing conditional independence, rather
than a comparison of mutual informations, as explained in Remarks 1, 3. The complexity of the
stationary-sampling version of the problem can be studied using the fickle oracle of Remark 5. The
consistency of the algorithm should then be established for every assignment of those answers of the
oracle that are arbitrary (adversarial).
References
[1] Francis R. Bach and Michael I. Jordan. Beyond independent components: trees and clusters. Journal of Machine Learning Research, 4(Dec):1205–1233, 2003.
[2] Maria-Florina Balcan, Yingyu Liang, and Pramod Gupta. Robust hierarchical clustering. Journal of Machine Learning Research, 15(1):3831–3871, 2014.
[3] Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. Van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6(1):17–39, 1997.
[4] Simon Benjaminsson, Peter Fransson, and Anders Lansner. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI. Frontiers in Systems Neuroscience, 4:34, 2010.
[5] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from Data, pages 121–130. Springer, 1996.
[6] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, NY, USA, 2006.
[7] Robert M. Gray. Probability, Random Processes, and Ergodic Properties. Springer Verlag, 1988.
[8] Arthur Gretton and László Györfi. Consistent nonparametric tests of independence. Journal of Machine Learning Research, 11(Apr):1391–1423, 2010.
[9] László Györfi. Private communication, 2011.
[10] Radim Jiroušek. Solution of the marginal problem and decomposable distributions. Kybernetika, 27(5):403–412, 1991.
[11] Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabás Póczos, Larry Wasserman, and James M. Robins. Influence functions for machine learning: Nonparametric estimators for entropies, divergences and mutual informations. arXiv preprint arXiv:1411.4342, 2014.
[12] Azadeh Khaleghi, Daniil Ryabko, Jérémie Mary, and Philippe Preux. Consistent algorithms for clustering time series. Journal of Machine Learning Research, 17:1–32, 2016.
[13] Artemy Kolchinsky, Martijn P. van den Heuvel, Alessandra Griffa, Patric Hagmann, Luis M. Rocha, Olaf Sporns, and Joaquín Goñi. Multi-scale integration and predictability in resting state brain activity. Frontiers in Neuroinformatics, 8, 2014.
[14] Alexander Kraskov, Harald Stögbauer, Ralph G. Andrzejak, and Peter Grassberger. Hierarchical clustering using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.
[15] Rosario N. Mantegna. Hierarchical structure in financial markets. The European Physical Journal B-Condensed Matter and Complex Systems, 11(1):193–197, 1999.
[16] Guillaume Marrelec, Arnaud Messé, and Pierre Bellec. A Bayesian alternative to mutual information for the hierarchical clustering of dependent random variables. PLoS ONE, 10(9):e0137278, 2015.
[17] Gautier Marti, Sébastien Andler, Frank Nielsen, and Philippe Donnat. Clustering financial time series: How long is enough? In IJCAI'16, 2016.
[18] Christopher Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research (JAIR), 15:383–389, 2001.
[19] Dávid Pál, Barnabás Póczos, and Csaba Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Advances in Neural Information Processing Systems, pages 1849–1857, 2010.
[20] Ido Priness, Oded Maimon, and Irad Ben-Gal. Evaluation of gene-expression clustering via mutual information distance measure. BMC Bioinformatics, 8(1):111, 2007.
[21] D. Ryabko. Hypotheses testing on infinite random graphs. In Proceedings of the 28th International Conference on Algorithmic Learning Theory (ALT'17), volume 76 of PMLR, pages 400–411, Kyoto, Japan, 2017. JMLR.
[22] D. Ryabko and B. Ryabko. Nonparametric statistical inference for ergodic processes. IEEE Transactions on Information Theory, 56(3):1430–1435, 2010.
[23] Daniil Ryabko. Clustering processes. In Proc. of the 27th International Conference on Machine Learning (ICML 2010), pages 919–926, Haifa, Israel, 2010.
[24] Daniil Ryabko. Discrimination between B-processes is impossible. Journal of Theoretical Probability, 23(2):565–575, 2010.
[25] Daniil Ryabko. Testing composite hypotheses about discrete ergodic processes. Test, 21(2):317–329, 2012.
[26] P. Shields. The interactions between ergodic theory and information theory. IEEE Transactions on Information Theory, 44(6):2079–2093, 1998.
[27] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test and application in causal discovery. In Proceedings of the 27th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[28] Xiaobo Zhou, Xiaodong Wang, Edward R. Dougherty, Daniel Russ, and Edward Suh. Gene clustering based on clusterwide mutual information. Journal of Computational Biology, 11(1):147–161, 2004.
Fast amortized inference of neural activity from
calcium imaging data with variational autoencoders
Artur Speiser^{1,2}, Jinyao Yan^3, Evan Archer^{4,a}, Lars Buesing^{4,b},
Srinivas C. Turaga^{3,c} and Jakob H. Macke^{1,c,d}
1: research center caesar, an associate of the Max Planck Society, Bonn, Germany
2: IMPRS Brain and Behavior Bonn/Florida
3: HHMI Janelia Research Campus
4: Columbia University
[email protected], [email protected], [email protected]
Abstract
Calcium imaging permits optical measurement of neural activity. Since intracellular
calcium concentration is an indirect measurement of neural activity, computational
tools are necessary to infer the true underlying spiking activity from fluorescence
measurements. Bayesian model inversion can be used to solve this problem, but
typically requires either computationally expensive MCMC sampling, or faster but
approximate maximum-a-posteriori optimization. Here, we introduce a flexible
algorithmic framework for fast, efficient and accurate extraction of neural spikes
from imaging data. Using the framework of variational autoencoders, we propose
to amortize inference by training a deep neural network to perform model inversion
efficiently. The recognition network is trained to produce samples from the posterior
distribution over spike trains. Once trained, performing inference amounts to a fast
single forward pass through the network, without the need for iterative optimization
or sampling. We show that amortization can be applied flexibly to a wide range
of nonlinear generative models and significantly improves upon the state of the
art in computation time, while achieving competitive accuracy. Our framework is
also able to represent posterior distributions over spike-trains. We demonstrate the
generality of our method by proposing the first probabilistic approach for separating
backpropagating action potentials from putative synaptic inputs in calcium imaging
of dendritic spines.
1 Introduction
Spiking activity in neurons leads to changes in intra-cellular calcium concentration which can be
measured by fluorescence microscopy of synthetic calcium indicators such as Oregon Green BAPTA-1
[1] or genetically encoded calcium indictors such as GCaMP6 [2]. Such calcium imaging has become
important since it enables the parallel measurement of large neural populations in a spatially resolved
and minimally invasive manner [3, 4]. Calcium imaging can also be used to study neural activity at
subcellular resolution, e.g. for measuring the tuning of dendritic spines [5, 6]. However, due to the
indirect nature of calcium imaging, spike inference algorithms must be used to infer the underlying
neural spiking activity leading to measured fluorescence dynamics.
a: current affiliation: Cogitai, Inc.
b: current affiliation: DeepMind
c: equal contribution
d: current primary affiliation: Centre for Cognitive Science, Technical University Darmstadt

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Most commonly-used approaches to spike inference [7, 8, 9, 10, 11, 12, 13, 14] are based on carefully
designed generative models that describe the process by which spiking activity leads to fluorescence
measurements. Spikes are treated as latent variables, and spike-prediction is performed by inferring
both the parameters of the model and the spike latent variables from fluorescence time series, or
?traces? [7, 8, 9, 10]. The advantage of this approach is that it does not require extensive ground
truth data for training, since simultaneous electrophysiological and fluorescence recordings of neural
activity are difficult to acquire, and that prior knowledge can be incorporated in the specification of the
generative model. The accuracy of the predictions depends on the faithfulness of the generative model
of the transformation of spike trains into fluorescence measurements [14, 12]. The disadvantage
of this approach is that spike-inference requires either Markov-Chain Monte Carlo (MCMC) or
Sequential Monte-Carlo techniques to sample from the posterior distribution over spike-trains or
alternatively, iterative optimization to obtain an approximate maximum a-posteriori (MAP) prediction.
Currently used approaches rely on bespoke, model-specific inference algorithms, which can limit
the flexibility in designing suitable generative models. Most commonly used methods are based on
simple phenomenological (and often linear) models [7, 8, 9, 10, 13].
Recently, a small number of cell-attached electrophysiological recordings of neural activity have
become available, with simultaneous fluorescence calcium measurements in the same neurons.
This has made it possible to train powerful and fast classifiers to perform spike-inference in a
discriminative manner, precluding the need for accurate generative models of calcium dynamics
[15]. The disadvantage of this approach is that it can require large labeled data-sets for every new
combination of calcium indicator, cell-type and microscopy method, which can be expensive or
impossible to acquire. Further, these discriminative methods do not easily allow the incorporation
of prior knowledge about the generative process. Finally, current classification approaches yield
only pointwise predictions of spike probability (i.e. firing rates), independent across time, and ignore
temporal correlations in the posterior distribution of spikes.
Figure 1: Amortized inference for predicting spikes from imaging data. A) Our goal is to infer a
spike train s from an observed time-series of fluorescence-measurements f . We assume that we have
a generative model of fluorescence given spikes with (unknown) parameters θ, and we simultaneously
learn θ as well as a "recognition model" which approximates the posterior over spikes s given f
and which can be used for decoding a spike train from imaging data. B) We parameterize the
recognition-model by a multi-layer network architecture: Fluorescence-data is first filtered by a deep
1D convolutional network (CNN), providing input to a stochastic forward running recurrent neural
network (RNN) which predicts spike-probabilities and takes previously sampled spikes as additional
input. An additional deterministic RNN runs backward in time and provides further context.
Here, we develop a new spike inference framework called DeepSpike (DS) based on the variational
autoencoder technique which uses stochastic variational inference (SVI) to teach a classifier to predict
spikes in an unsupervised manner using a generative model. This new strategy allows us to combine
the advantages of generative [7] and discriminative approaches [15] into a single fast classifier-based
method for spike inference. In the variational autoencoder framework, the classifier is called a
recognition model and represents an approximate posterior distribution over spike trains from which
samples can be drawn in an efficient manner. Once trained to perform spike inference on one dataset,
the recognition model can be applied to perform inference on statistically similar datasets without any
retraining: The computational cost of variational spike inference is amortized, dramatically speeding
up inference at test-time by exploiting fast, classifier based recognition models.
We introduce two recognition models: The first is a temporal convolutional network which produces
a posterior distribution which is factorized in time, similar to standard classifier-based methods [15].
The second is a recurrent neural network-based recognition model, similar to [16, 17] which can
represent any correlated posterior distribution in the non-parametric limit. Once trained, both models
perform spike inference with state-of-the-art accuracy, and enable simultaneous spike inference for
populations as large as 10^4 in real time on a single GPU.
We show the generality of this black-box amortized inference method by demonstrating its accuracy
for inference with a classic linear generative model [7, 8], as well as two nonlinear generative models
[12]. Finally, we show an extension of the spike inference method to simultaneous inference and
demixing of synaptic inputs from backpropagating somatic action potentials from simultaneous
somatic and dendritic calcium imaging.
2 Amortized inference using variational autoencoders

2.1 Approach and training procedure
We observe fluorescence traces f_t^i, t = 1 ... T^i, representing noisy measurements of the dynamics of somatic calcium concentration in neurons i = 1 ... N. We assume a parametrised, probabilistic, differentiable generative model p_{θ^i}(f|s) with (unknown) parameters θ^i. The generative model predicts a fluorescence trace given an underlying binary spike train s^i, where s_t^i = 1 indicates that the neuron i produced an action potential in the interval indexed by t. Our goal is to infer a latent spike-train s given only fluorescence observations f. We will solve this problem by training a deep neural network as a "recognition model" [18, 19, 20] parametrized by weights φ. Use of a recognition model enables fast computation of an approximate posterior distribution over spike trains from a fluorescence trace, q_φ(s|f). We will share one recognition model across multiple cells, i.e. q_φ(s^i|f^i) ≈ p_{θ^i}(s^i|f^i) for each i. We describe an unsupervised training procedure which jointly optimizes parameters of the generative model θ and the recognition network φ in order to maximize a lower bound on the log likelihood of the observed data, log p(f) [19, 18, 20].
We learn the parameters θ and φ simultaneously by jointly maximizing L_K(θ, φ), a multi-sample importance-weighted lower bound on the log likelihood log p(f), given by [21]
$$\mathcal{L}_K(\theta, \phi) = \mathbb{E}_{s^1,\ldots,s^K \sim q_\phi(s|f)}\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(s^k, f)}{q_\phi(s^k|f)} \right] \le \log p(f), \qquad (1)$$

where s^k are spike trains sampled from the recognition model q_φ(s|f). This stochastic objective
involves drawing K samples from the recognition model, and evaluating their likelihood by passing
them through the generative model. When K = 1, the bound reduces to the evidence lower bound
(ELBO). Increasing K yields a tighter lower bound (than the ELBO) on the marginal log likelihood,
at the cost of additional training time. We found that increasing the number of samples leads to better
fits of the generative model; in our experiments, we used K = 64.
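The multi-sample bound of Eq. (1) is straightforward to estimate from samples once the two log-densities are available. A minimal numpy sketch (illustrative only, not the paper's Theano implementation; the function name is ours), using a log-sum-exp for numerical stability:

```python
import numpy as np

def multisample_bound(log_p_joint, log_q):
    """Monte-Carlo estimate of the importance-weighted bound L_K.

    log_p_joint : (K,) values of log p_theta(s^k, f) for K samples s^k
    log_q       : (K,) values of log q_phi(s^k | f) for the same samples
    """
    log_w = np.asarray(log_p_joint) - np.asarray(log_q)  # log importance weights
    m = np.max(log_w)
    # log( (1/K) * sum_k w_k ), computed stably via log-sum-exp
    return float(m + np.log(np.mean(np.exp(log_w - m))))
```

When q_phi equals the true posterior, every log-weight equals log p(f) and the estimate is exact for any K; otherwise it is a lower bound that tightens as K grows.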
To train θ and φ by stochastic gradient ascent, we must estimate the gradient ∇_{θ,φ} L(θ, φ). As our recognition model produces an approximate posterior over binary spike trains, the gradients have to be estimated based on samples. Obtaining functional estimates of the gradients ∇_φ L(θ, φ) with respect to parameters of the recognition model is challenging and relies on constructing effective control variates to reduce variance [22]. We use the variational inference for Monte Carlo objectives (VIMCO) approach of [23] to produce low-variance unbiased estimates of the gradients ∇_{θ,φ} L_K(θ, φ). The generative training procedure could be augmented with a supervised cost term [24, 25], resulting in a semi-supervised training method.
Gradient optimization: We use ADAM [26], an adaptive gradient update scheme, to perform
online stochastic gradient ascent. The training data is cut into short chunks of several hundred
time-steps and arranged in batches containing samples from a single cell. As we train only one
recognition model but multiple generative models in parallel, we load the respective generative model
and ADAM parameters at each iteration. Finally, we use norm-clipping to scale the gradients acting
on the recognition model: the norm of all gradients is calculated, and if it exceeds a fixed threshold the
gradients are rescaled. While norm-clipping was introduced to prevent exploding gradients in RNNs
[27], we found it to be critical to achieve high performance both for RNN and CNN architectures in
our learning problem. Very small threshold values (0.02) empirically yielded best results.
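The norm-clipping step described above takes only a few lines; a sketch of global-norm clipping over a list of gradient arrays (the 0.02 threshold is the value quoted above; the function name is ours):

```python
import numpy as np

def clip_by_global_norm(grads, threshold=0.02):
    """Rescale a list of gradient arrays so that their joint L2 norm
    does not exceed `threshold`; returns the clipped list and the
    pre-clipping norm."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total > threshold:
        scale = threshold / total
        grads = [g * scale for g in grads]
    return grads, total
```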
2.2 Generative models p_θ(f|s)
To demonstrate that our computational strategy can be applied to a wide range of differentiable
models in a black-box manner, we consider four generative models: a simple, but commonly used
linear model of calcium dynamics [7, 8, 9, 10], two more sophisticated nonlinear models which
additionally incorporate saturation and facilitation resulting from the dynamics of calcium binding to
the calcium sensor, and finally a multi-dimensional model for dendritic imaging data.
Linear auto-regressive generative model (SCF): We use the name SCF for the classic linear convolutional generative model used in [7, 8, 9, 10], since this generative process is described by the Spikes s_t, which linearly impact the Calcium concentration c_t, which in turn determines the observed Fluorescence intensity f_t,
$$c_t = \sum_{t'=1}^{p} \gamma_{t'}\, c_{t-t'} + \alpha s_t, \qquad f_t = \beta c_t + \delta + e_t, \qquad (2)$$

with linear auto-regressive dynamics of order p for the calcium concentration with parameters γ, spike amplitude α, gain β, constant fluorescence baseline δ, and additive measurement noise e_t ∼ N(0, σ²).
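As a concrete illustration, the SCF model of Eq. (2) can be simulated in a few lines; parameter values below are illustrative, not fitted, and the function name is ours:

```python
import numpy as np

def simulate_scf(s, gamma=(0.95,), alpha=1.0, beta=1.0, delta=0.1,
                 sigma=0.03, seed=0):
    """Simulate Eq. (2): AR(p) calcium driven by a binary spike train s,
    followed by a noisy affine fluorescence readout."""
    rng = np.random.default_rng(seed)
    s = np.asarray(s, dtype=float)
    c = np.zeros_like(s)
    p = len(gamma)
    for t in range(len(s)):
        ar = sum(gamma[k] * c[t - 1 - k] for k in range(p) if t - 1 - k >= 0)
        c[t] = ar + alpha * s[t]
    f = beta * c + delta + sigma * rng.standard_normal(len(s))
    return c, f
```

With p = 1 a single spike produces the familiar exponentially decaying calcium transient.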
Nonlinear auto-regressive and sensor dynamics generative models (SCDF & MLphys): As
examples of nonlinear generative models [28], we consider two simple models of the discrete-time
dynamics of the calcium sensor or dye. In the first (SCDF), the concentration of fluorescent dye
molecules d_t is a function of the somatic calcium concentration c_t, and has dynamics

$$d_t - d_{t-1} = \kappa_{on}\, c_t^{\eta}\, ([D] - d_{t-1}) - \kappa_{off}\, d_{t-1}, \qquad f_t = \beta d_t + \delta + e_t, \qquad (3)$$

where κ_on and κ_off are the rates at which the calcium sensor binds and unbinds calcium ions, and η is a Hill coefficient. We constrained these parameters to be non-negative. [D] is the total concentration of the dye molecule in the soma, which sets the maximum possible value of d_t. The richer dynamics of the SCDF model allow for facilitation of fluorescence at low firing rates, and saturation at high rates. The parameters of the SCDF model are θ = {γ, α, η, κ_on, κ_off, β, [D], σ²}.
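The saturating dye dynamics of Eq. (3) can be sketched as a simple recurrence (illustrative parameter values; symbol names follow the reconstruction above):

```python
import numpy as np

def scdf_dye(c, kappa_on=0.1, kappa_off=0.05, eta=1.0, D=1.0):
    """Discrete-time dye dynamics of Eq. (3): calcium-dependent binding
    that saturates at the total dye concentration [D]."""
    d = np.zeros(len(c))
    for t in range(1, len(c)):
        d[t] = d[t - 1] + kappa_on * c[t] ** eta * (D - d[t - 1]) \
               - kappa_off * d[t - 1]
    return d
```

For constant calcium c the recurrence converges to the fixed point kappa_on * c**eta * D / (kappa_on * c**eta + kappa_off), which is always below [D]: this is the saturation at high rates mentioned above.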
The second nonlinear model (MLphys) is a discrete-time version of the MLspike generative model
[12], simplified by not including a model of the time-varying baseline. The dynamics for f_t and c_t are as above, with α = 1. We replace the dynamics for d_t by
$$d_t - d_{t-1} = \frac{1}{\tau_{on}} \left(1 + \kappa\big((c_0 + c_t)^{\eta} - c_0^{\eta}\big)\right) \left( \frac{(c_0 + c_t)^{\eta} - c_0^{\eta}}{1 + \kappa\big((c_0 + c_t)^{\eta} - c_0^{\eta}\big)} - d_{t-1} \right). \qquad (4)$$
Multi-dimensional soma + dendrite generative model (DS-F-DEN): The dendritic generative
model is a multi-dimensional SCDF model that incorporates back-propagating action potentials
(bAPs). The calcium concentration at the cell body (superscript c) is generated as for SCDF, whereas
for the spine (superscript s), there are two components: synaptic inputs and bAPs from the soma,
$$c_t^{c} = \sum_{t'=1}^{p} \gamma_{t'}^{c}\, c_{t-t'}^{c} + \alpha^{c} s_t^{c}, \qquad c_t^{s} = \sum_{t'=1}^{p} \gamma_{t'}^{s}\, c_{t-t'}^{s} + \alpha^{s} s_t^{s} + \alpha^{bs} s_t^{c}, \qquad (5)$$

where α^{bs} are the amplitude coefficients of bAPs for different spine locations, and c ∈ {1, ..., N_c}, s ∈ {1, ..., N_s}. The spines and soma share the same dye dynamics as in (3). The parameters of the dendritic integration model are θ = {γ^{s,c}, α^{s,c}, η^{s,c}, κ_on, κ_off, β, [D], σ²_{s,c}}. We note that this simple
generative model does not attempt to capture the full complexity of nonlinear processing in dendrites
(e.g. it does not incorporate nonlinear phenomena such as dendritic plateau potentials). Its goal is
to separate local influences (synaptic inputs) from global events (bAPs, or potentially regenerative
dendritic events).
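An AR(1) toy version of Eq. (5) makes the mixing explicit: spine calcium combines local synaptic input with a scaled copy of each somatic spike (bAP). All parameter values are illustrative and the function name is ours:

```python
import numpy as np

def dendritic_calcium(s_soma, s_spine, gamma_c=0.9, gamma_s=0.8,
                      a_c=1.0, a_s=1.0, a_bs=0.5):
    """AR(1) sketch of Eq. (5): somatic calcium driven by somatic spikes;
    spine calcium driven by synaptic inputs plus back-propagating APs."""
    T = len(s_soma)
    cc = np.zeros(T)
    cs = np.zeros(T)
    for t in range(T):
        prev_c = cc[t - 1] if t > 0 else 0.0
        prev_s = cs[t - 1] if t > 0 else 0.0
        cc[t] = gamma_c * prev_c + a_c * s_soma[t]
        cs[t] = gamma_s * prev_s + a_s * s_spine[t] + a_bs * s_soma[t]
    return cc, cs
```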
2.3 Recognition models: parametrization of the approximate posterior q_φ(s|f)
The goal of the recognition model is to provide a fast and efficient approximation q_φ(s|f) to the true posterior p(s|f) over discrete latent spike trains s. We will use both a factorized, localized approximation (parameterized as a convolutional neural network), and a more flexible, non-factorized and non-localized approximation (parameterized using additional recurrent neural networks).
Convolutional neural network: Factorized posterior approximation (DS-F) In [15], it was reported that good spike-prediction performance can be achieved by making the spike probability q_φ(s_t | f_{t−τ...t+τ}) depend on a local window of the fluorescence trace of length 2τ + 1 centered at t when training such a model fully supervised. We implement a scaled-up version of this idea, using a deep neural network which is convolutional in time as the recognition model. We use architectures with up to five hidden layers and around 20 filters per layer with Leaky ReLU units [29]. The output layer uses a sigmoid nonlinearity to compute the Bernoulli spike probabilities q_φ(s_t|f).
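The structure of such a factorized recognition model can be sketched with a single 1D convolution plus sigmoid; a real DS-F network stacks several such layers with learned filters, but the input-output mapping is the same (function names are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def factorized_posterior(f, w, b):
    """Toy DS-F-style recognition model: one 1D convolution over a
    centred window of the trace, followed by a sigmoid, yields
    independent Bernoulli spike probabilities q(s_t = 1 | f).
    `w` is a filter of odd length 2*tau + 1."""
    tau = len(w) // 2
    fpad = np.pad(f, tau)  # zero-pad so the output has the input length
    logits = np.array([np.dot(w, fpad[t:t + len(w)])
                       for t in range(len(f))]) + b
    return sigmoid(logits)
```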
Recurrent neural network: Capturing temporal correlations in the posterior (DS-NF) The fully-factorized posterior approximation (DS-F) above ignores temporal correlations in the posterior over spike trains. Such correlations can be useful in modeling uncertainty in the precise timing of a spike, which induces negative correlations between nearby time bins. To model temporal correlations, we developed a RNN-based non-factorizing distribution which can approach the true posterior in the non-parametric limit (see figure 1B). Similar to [16], we use the temporal ordering over spikes and factorize the joint distribution over spikes as q_φ(s|f) = ∏_t q_φ(s_t | f, s_0, ..., s_{t−1}), by conditioning spikes at t on all previously sampled spikes. Our RNN uses a CNN as described above to extract features from the input trace. Additional input is provided by a backwards RNN which also receives input from the CNN features. The outputs of the forward RNN and CNN are transformed into Bernoulli spike probabilities q_φ(s_t|f) through a dense sigmoid layer. This probability and the sample drawn from it are relayed to the forward RNN in the next time step. Forward and backward RNN have a single layer with 64 gated recurrent units each [30].
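Drawing a sample from such a correlated posterior amounts to ancestral sampling through the chain-rule factorization. A generic sketch (in DS-NF the role of `prob_fn` is played by the forward RNN; the toy refractory posterior below is purely hypothetical):

```python
import numpy as np

def sample_autoregressive(prob_fn, T, n_samples=1, seed=0):
    """Ancestral sampling from q(s|f) = prod_t q(s_t | f, s_{<t}).
    `prob_fn(t, history)` returns the Bernoulli spike probability at t
    given the previously sampled spikes."""
    rng = np.random.default_rng(seed)
    out = np.zeros((n_samples, T), dtype=int)
    for i in range(n_samples):
        for t in range(T):
            p = prob_fn(t, out[i, :t])
            out[i, t] = int(rng.random() < p)
    return out

def refractory(t, history):
    """Hypothetical correlated posterior: no spike right after a spike."""
    return 0.0 if len(history) and history[-1] == 1 else 0.5
```

Unlike a factorized posterior, this sampler can express negative correlations between nearby bins, e.g. suppressing a second spike immediately after one was just sampled.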
2.4 Details of synthetic and real data and evaluation methodology
We evaluated our method on simulated and experimental data. From our SCF and SCDF generative models for spike-inference, we simulated traces of length T = 10^4 assuming a recording frequency of 60 Hz. Initial parameters were obtained by fitting the models to real data (see below), and heterogeneity across neurons was achieved by randomly perturbing parameters. We used 50 neurons each for training and validation and 100 neurons in the test set. For each cell, we generated three traces with firing rates of 0.6, 0.9 and 1.1 Hz, assuming i.i.d. spikes.
Finally, we compared methods on two-photon imaging data from 9 + 11 cells from [2], which is
available at www.crcns.org. Layer 2/3 pyramidal neurons in mouse visual cortex were imaged at 60 Hz
using the genetically encoded calcium-indicators GCaMP6s and GCaMP6f, while action-potentials
were measured electrophysiologically using cell-attached recordings. Data was pre-processed by removing a slowly moving baseline using the 5th percentile in a window of 6000 time steps. Furthermore, we used this baseline estimate to calculate ΔF/F. Cross-validated results were obtained using 4 folds, where we trained and validated on 3/4 of the cells in each dataset and tested on the remaining cells to highlight the potential for amortized inference. Early stopping was performed based on the correlation achieved on the train/validation set, which was evaluated every 100 update steps.
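The baseline-removal step described above can be sketched directly (a simple O(T * window) running-percentile implementation; the function name is ours):

```python
import numpy as np

def dff(trace, window=6000, q=5):
    """Estimate a slowly varying baseline as the running q-th percentile
    in a long window, then return dF/F = (F - baseline) / baseline."""
    T = len(trace)
    base = np.empty(T)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half)
        base[t] = np.percentile(trace[lo:hi], q)
    return (trace - base) / base
```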
We report results using the cross-correlation between true and predicted spike-rates, at the sampling
discretization of 16.6 ms for simulated data and 40 ms for real data. As the predictions of our DS-NF
model are not deterministic, we sample 30 times from the model and average over the resulting
probability distributions to obtain an estimate of the marginal probability before we calculate cross-correlations.
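The evaluation metric used above can be sketched as follows: bin both the predicted marginal probabilities and the true spikes at the evaluation resolution, then take the Pearson correlation (function name is ours):

```python
import numpy as np

def rate_correlation(pred_rate, spikes, bin_size):
    """Pearson correlation between predicted marginal spike probabilities
    and the true spike train, after summing both within bins of
    `bin_size` samples (e.g. 40 ms bins at the raw sampling rate)."""
    n = (len(spikes) // bin_size) * bin_size
    p = np.asarray(pred_rate[:n]).reshape(-1, bin_size).sum(axis=1)
    s = np.asarray(spikes[:n]).reshape(-1, bin_size).sum(axis=1)
    return float(np.corrcoef(p, s)[0, 1])
```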
We used multiple generative models to show that our inference algorithm is not tied to a particular
model: SCDF for the experiments depicted in Fig. 2, SCF for a comparison with established methods
based on this linear model (Table 1, column 1), and MLphys on real data as it is used by the current
state-of-the-art inference algorithm (Table 1, columns 2 & 3, Fig. 3).
[Figure 2 graphics omitted: panels A-D; the log-likelihood and correlation scatter plots report mean correlations of 0.80 and 0.77.]
Figure 2: Model-inversion with variational autoencoders, simulated data A) Illustration of
factorized (CNN, DS-F) and non-factorized posterior approximation (RNN, DS-NF) on simulated
data (SCDF generative model). DS-NF yields more accurate reconstructions, but both methods lead
to similar marginal predictions (i.e. predicted firing rates, bottom). B) Number of spikes sampled for
every true spike for the factorized (red) and non-factorized (red) posterior. The correlated posterior
consistently samples the correct number of spikes while still accounting for the uncertainty in the
spike timing. C) Performance of amortized vs non-amortized inference on simulated data. D) Scatter
plots of achieved log-likelihood of the true spike train under the posterior model (top) and achieved
correlation coefficients between the marginalized spiking probabilities and true spike trains (bottom).
3 Results

3.1 Stochastic variational spike inference of factorized and correlated posteriors
We first illustrate our approach on synthetic data, and compare our two different architectures for
recognition models. We simulated data from the SCDF nonlinear generative model and trained
DeepSpike unsupervised using the same SCDF model. While only the more expressive recognition
model (DS-NF) is able to achieve a close-to-perfect reconstructions of the fluorescence traces (Fig. 2
A, top row), both approaches yield similar marginal firing rate predictions (second row). However,
as the factorized model does not model correlations in the posterior, it yields higher variance in the
number of spikes reconstructed for each true spike (Fig. 2 B). This is because the factorized model
can not capture that a fluorescence increase might be "explained away" by a spike that has just been
sampled, i.e. it can not capture the difference between uncertainty in spike-timing and uncertainty in
(local) spike-counts. Therefore, while both approaches predict firing rates similarly well on simulated
data (as quantified using correlation, Fig. 2 D), the DS-NF model assigns higher posterior probability
to the true spike trains.
3.2 Amortizing inference leads to fast and accurate test-time inference
In principle, our unsupervised learning procedure could be re-trained on every data-set of interest.
However, it also allows for amortizing inference by sharing one recognition model across multiple
cells, and applying the recognition model directly on new data without additional training for fast
test-time performance. Amortized inference allows for the recognition model to be used for inference
in the same way as a network that was trained fully supervised. Since there is no variational
optimization at test time, inference with this network is just as fast as inference with a supervised
network. Similarly to supervised learning, there will be limitations on the ability of this network to
generalize to different imaging conditions or indicators that where not included in the training set.
To test if our recognition model generalizes well enough for amortized inference to work across
multiple cells, as well as on cells it did not see during training, we trained one DS-NF model on 50
cells (simulated data, SCDF) and evaluated its performance on a non-overlapping set of 30 cells. For
comparison, we also trained 30 DS-NF models separately, on each of those cells; this amounts to
standard variational inference using a neural network to parametrize the posterior approximation,
but without amortizing inference. We found that amortizing inference only causes a small drop
in performance (Fig. 2 C). However, this drop in performance is offset by the large gain in
computational efficiency as training a neural network takes several orders of magnitude more time
than applying it at test time.
Inference using the DS-F model only requires a single forward pass through a convolutional network
to predict firing rates, and DS-NF requires running a stochastic RNN for each sampled spike train.
While the exact running-time of each of these applications will depend on both implementation
and hardware, we give rough indications of computational speed number estimated on an Intel(R)
Xeon(R) CPU E5-2697 v3. On the CPU, our DS-F approach takes 0.05 s to process a single trace of
10K time steps, when using a network appropriate for 60 Hz data. This is on the same order as the
0.07 s (Intel Core i5 2.7 GHz CPU) reported by [31] for their OASIS algorithm, which is currently
the fastest available implementation for constrained deconvolution (CDEC) of SCF, but restricted to
this linear generative model. The DS-NF algorithm requires 4.6 s which still compares favourably
to MLspike which takes 9.2 s (evaluated on the same CPU). As our algorithm is implemented in
Theano [32] it can be easily accelerated and allows for massive parallelization on a single GPU. On a
GTX Titan X, DS-F and DS-NF take 0.001 s and 1.5 s, respectively. When processing 500 traces in
parallel, DS-NF becomes only 2.5 times slower. Extrapolating from these results, this implies that
even when using the DS-NF algorithm, we would be able to perform spike-inference on 1 hour of
recordings at 60 Hz for 500 cells in less than 90 s.
Table 1: Performance comparison. Values are correlations between predicted marginal probabilities and ground truth spikes.

Algorithm     | SCF-Sim.    | GCaMP6s       | GCaMP6f       | Dendritic: Soma | Dendritic: Spine
DS-F          | 0.88 ± 0.01 | 0.74 ± 0.02   | 0.74 ± 0.02   |                 |
DS-NF         | 0.89 ± 0.01 | 0.72 ± 0.02   | 0.73 ± 0.02   |                 |
CDEC [10]     | 0.86 ± 0.01 | 0.39 ± 0.03 * | 0.58 ± 0.02 * |                 |
MCMC [9]      | 0.87 ± 0.01 | 0.47 ± 0.03 * | 0.53 ± 0.03 * |                 |
MLSpike [12]  |             | 0.60 ± 0.02 * | 0.67 ± 0.01 * |                 |
DS-F-DEN      |             |               |               | 0.84 ± 0.01     | 0.78 ± 0.01
Foopsi-RR [2] |             |               |               | 0.66 ± 0.02     | 0.60 ± 0.01

3.3 DS achieves competitive results on simulated and publicly available imaging data
The advantages of our framework (black-box inference for different generative models, fast test-time performance through amortization, correlated posteriors through RNNs) are only useful if the
approach can also achieve competitive performance. To demonstrate that this is the case, we compare
our approach to alternative generative-model based spike prediction methods on data sampled from
the SCF model; as this is the generative model underlying commonly used methods [10, 9], it is
difficult to beat their performance on this data. We find that both DS-F and DS-NF achieve competitive
performance, as measured by correlation between predicted firing rates and true (simulated) spike
trains (Table 1, left column. Values are means and standard error of the mean calculated over cells).
To evaluate our performance on real data we compare to the current state-of-the-art method for spike
inference based on generative models[12]. For these experiments we trained separate models on each
of the GCaMP variants using the MLspike generative model. We achieve competitive accuracy to
the results in [12] (see Table 1, values marked with an asterisk are taken from [12], Fig. 6d) and
clearly outperform methods that are based on the linear SCF model. We note that, while our method
performs inference in an unsupervised fashion and is trained using an un-supervised objective, we
initialized our generative model with the mean values given in [12] (Fig. S6a), which were obtained
using ground truth data. An example of inference and reconstruction using the DS-NF model is
shown in Fig. 3. The reconstruction based on the true spikes (purple line) was obtained using the
generative model parameters which had been acquired from unsupervised learning. This explains why
the reconstruction using the inferred spikes is more accurate and suggests that there is a mismatch
[Figure 3 graphics omitted: GCaMP6s example trace with correlated posterior samples and marginal probability (Corr: 0.73; 41.74 inferred vs. 35.0 true spikes) over 50 s.]
Figure 3: Inference and reconstruction using the DS-NF algorithm on GECI data. The reconstruction based on the inferred spike trains (blue) shows that the algorithm converges to a good joint
model while the reconstruction based on the true spikes (purple) shows a mismatch of the generative
model for high activity which results in an overestimate of the overall firing rate.
between the MLphys model and the true data-generating process. Developing more
accurate generative models would therefore likely further increase the performance of the algorithm.
[Figure 4 graphics omitted: soma and spine traces with true spikes/synaptic inputs and marginal probabilities inferred by DS-F-DEN and Foopsi-RR, plus a cell cartoon.]
Figure 4: Inference of somatic spikes and synaptic input spikes from simulated dendritic
imaging data. We simulated imaging data from our generative model, and compared our approach
(DS-F-DEN) to an analysis inspired by [2] (Foopsi-RR), and found that our method can extract
synaptic inputs more accurately. Traces at the soma and spines are used to infer somatic spikes and
synaptic inputs at spines. Top: somatic trace and predictions. DS-F-DEN produces better predictions
at the soma since it uses all traces to infer global events. Bottom: spine trace and predictions.
DS-F-DEN performs better in terms of extracting synaptic inputs.
3.4 Extracting putative synaptic inputs from calcium imaging in dendritic spines
We generalized the DeepSpike variational-inference approach to perform simultaneous inference of
backpropagating APs and synaptic inputs, imaged jointly across the entire neuronal dendritic arbor.
We illustrate this idea on synthetic data based on the DS-F-DEN generative model (5). We simulated
15 cells each with 10 dendritic spines with a range of firing rates and noise levels. We then used a
multi-input multi-output convolutional neural network (CNN, DS-F) in the non-amortized setting to
infer a fully-factorized Bernoulli posterior distribution over global action potentials and local synaptic
events.
We compared our results to an analysis technique inspired by [2] which we call Foopsi-RR. We first
apply constrained deconvolution [33] to somatic and dendritic calcium traces, and then use robust
linear regression to identify and subtract deconvolved components of the spine signal that correlated
with global back-propagated action potential. Compared to the method suggested by [2], our model
is significantly more accurate. The average correlation of our model is 0.84 for soma and 0.78 for
spines, whereas for Foopsi-RR the average correlation is 0.66 for soma and 0.60 for spines (Table 1).
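The core demixing step of such a regression baseline can be sketched in a few lines (plain least squares rather than a robust fit, so a simplification of Foopsi-RR; the function name is ours):

```python
import numpy as np

def subtract_bap_component(spine_deconv, soma_deconv):
    """Regress the deconvolved spine signal on the deconvolved somatic
    signal and keep the residual as the putative synaptic input.
    Returns (residual, fitted bAP gain)."""
    g = float(np.dot(soma_deconv, spine_deconv)
              / np.dot(soma_deconv, soma_deconv))
    return spine_deconv - g * soma_deconv, g
```

This recovers the local component exactly only when it is uncorrelated with the somatic signal; correlated synaptic inputs are partially subtracted away, which is one reason a joint generative model like DS-F-DEN can do better.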
4 Discussion
Spike inference is an important step in the analysis of fluorescence imaging. We here propose a
strategy based on variational autoencoders that combines the advantages of generative [7] and discriminative approaches [15]. The generative model makes it possible to incorporate knowledge about
underlying mechanisms and thus learn from unlabeled data. A simultaneously-learned recognition
network allows fast test-time performance, without the need for expensive optimization or MCMC
sampling. This opens up the possibility of scaling up spike inference to very large neural populations
[34], and to real-time and closed-loop applications. Furthermore, our approach is able to estimate full
posteriors rather than just marginal firing rates.
It is likely that improvements in performance and interpretability will result from the design of
better, biophysically accurate and possibly dye-, cell-type- and modality-specific models of the
fluorescence measurement process, the dynamics of neurons [28] and indicators, as well as from
taking spatial information into account. Our goal here is not to design such models or to improve
accuracy per se, but rather to develop an inference strategy which can be applied to a large class
of such potential generative models without model-specific modifications: A trained recognition
model that can invert, and provide fast test-time performance, for any such model while preserving
performance in spike-detection.
Our recognition model is designed to serve as the common approximate posterior for multiple,
possibly heterogeneous populations of cells, requiring an expressive model. These assumptions are
supported by prior work [15] and our results on simulated and publicly available data, but might be
suboptimal or not appropriate in other contexts, or for other performance measures. In particular, we
emphasize that our comparisons are based on a specific data-set and performance measure which
is commonly used for comparing spike-inference algorithms, but which can in itself not provide
conclusive evidence for performance in other settings and measures. Our approach includes rich
posterior approximations [35] based on RNNs to make predictions using longer context-windows and
modelling posterior correlations. Possible extensions include causal recurrent recognition models for
real-time spike inference, which would require combining them with fast algorithms for detecting
regions of interest from imaging-movies [10, 36]. Another promising avenue is extending our
variational inference approach so it can also learn from available labeled data to obtain a semi-supervised algorithm [37].
As a statistical problem, spike inference has many similarities with other analysis problems in
biological imaging: an underlying, sparse signal needs to be reconstructed from spatio-temporal
imaging observations, and one has substantial prior knowledge about the image-formation process
which can be encapsulated in generative models. As a concrete example of generalization, we
proposed an extension to multi-dimensional inference of inputs from dendritic imaging data, and
illustrated it on simulated data. We expect the approach pursued here to also be applicable in other
inference tasks, such as the localization of particles from fluorescence microscopy [38].
5 Acknowledgements
We thank T. W. Chen, K. Svoboda and the GENIE project at Janelia Research Campus for sharing
their published GCaMP6 data, available at http://crcns.org. We also thank T. Deneux for sharing his
results for comparison and comments on the manuscript and D. Greenberg, L. Paninski and A. Mnih
for discussions. This work was supported by SFB 1089 of the German Research Foundation (DFG)
to J. H. Macke. A. Speiser was funded by an IMPRS for Brain & Behavior scholarship by the Max
Planck Society.
References
[1] R. Y. Tsien, "New calcium indicators and buffers with high selectivity against magnesium and protons: design, synthesis, and properties of prototype structures," Biochemistry, vol. 19, no. 11, pp. 2396–2404, 1980.
[2] T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, "Ultrasensitive fluorescent proteins for imaging neuronal activity," Nature, vol. 499, no. 7458, pp. 295–300, 2013.
[3] J. N. D. Kerr and W. Denk, "Imaging in vivo: watching the brain in action," Nat Rev Neurosci, vol. 9, pp. 195–205, Mar 2008.
[4] C. Grienberger and A. Konnerth, "Imaging calcium in neurons," Neuron, vol. 73, no. 5, pp. 862–885, 2012.
[5] S. L. Smith, I. T. Smith, T. Branco, and M. Häusser, "Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo," Nature, vol. 503, no. 7474, pp. 115–120, 2013.
[6] T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, et al., "Ultrasensitive fluorescent proteins for imaging neuronal activity," Nature, vol. 499, no. 7458, pp. 295–300, 2013.
[7] J. T. Vogelstein, B. O. Watson, A. M. Packer, R. Yuste, B. Jedynak, and L. Paninski, "Spike inference from calcium imaging using sequential monte carlo methods," Biophysical Journal, vol. 97, no. 2, pp. 636–655, 2009.
[8] J. T. Vogelstein, A. M. Packer, T. A. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski, "Fast nonnegative deconvolution for spike train inference from population calcium imaging," Journal of Neurophysiology, vol. 104, no. 6, pp. 3691–3704, 2010.
[9] E. Pnevmatikakis, J. Merel, A. Pakman, L. Paninski, et al., "Bayesian spike inference from calcium imaging data," in Signals, Systems and Computers, 2013 Asilomar Conference on, pp. 349–353, IEEE, 2013.
[10] E. A. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, et al., "Simultaneous denoising, deconvolution, and demixing of calcium imaging data," Neuron, 2016.
[11] E. Ganmor, M. Krumin, L. F. Rossi, M. Carandini, and E. P. Simoncelli, "Direct estimation of firing rates from calcium imaging data," arXiv preprint arXiv:1601.00364, 2016.
[12] T. Deneux, A. Kaszas, G. Szalay, G. Katona, T. Lakner, A. Grinvald, B. Rózsa, and I. Vanzetta, "Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo," Nature Communications, vol. 7, 2016.
[13] M. Pachitariu, C. Stringer, M. Dipoppa, S. Schröder, L. F. Rossi, H. Dalgleish, M. Carandini, and K. D. Harris, "Suite2p: beyond 10,000 neurons with standard two-photon microscopy," bioRxiv, 2017.
[14] D. Greenberg, D. Wallace, J. Vogelstein, and J. Kerr, "Spike detection with biophysical models for GCaMP6 and other multivalent calcium indicator proteins," 2015 Neuroscience Meeting Planner. Washington, DC: Society for Neuroscience, 2015.
[15] L. Theis, P. Berens, E. Froudarakis, J. Reimer, M. Román Rosón, T. Baden, T. Euler, A. S. Tolias, and M. Bethge, "Benchmarking spike rate inference in population calcium imaging," Neuron, vol. 90, no. 3, pp. 471–482, 2016.
[16] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu, "Pixel recurrent neural networks," arXiv preprint arXiv:1601.06759, 2016.
[17] H. Larochelle and I. Murray, "The neural autoregressive distribution estimator," in AISTATS, vol. 1, p. 2, 2011.
[18] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," arXiv preprint arXiv:1401.4082, 2014.
[19] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," arXiv preprint arXiv:1312.6114, 2013.
[20] M. Titsias and M. Lázaro-Gredilla, "Doubly stochastic variational bayes for non-conjugate inference," in Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1971–1979, 2014.
[21] Y. Burda, R. Grosse, and R. Salakhutdinov, "Importance weighted autoencoders," arXiv preprint arXiv:1509.00519, 2015.
[22] A. Mnih and K. Gregor, "Neural variational inference and learning in belief networks," arXiv preprint arXiv:1402.0030, 2014.
[23] A. Mnih and D. J. Rezende, "Variational inference for monte carlo objectives," in Proceedings of the 33rd International Conference on Machine Learning, 2016.
[24] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, "Semi-supervised learning with deep generative models," in Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
[25] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther, "Improving semi-supervised learning with auxiliary deep generative models," in NIPS Workshop on Advances in Approximate Bayesian Inference, 2015.
[26] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[27] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," ICML (3), vol. 28, pp. 1310–1318, 2013.
[28] V. Rahmati, K. Kirmse, D. Marković, K. Holthoff, and S. J. Kiebel, "Inferring neuronal dynamics from calcium imaging data using biophysical models and bayesian inference," PLoS Comput Biol, vol. 12, no. 2, p. e1004736, 2016.
[29] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, vol. 30, 2013.
[30] K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio, "On the properties of neural machine translation: Encoder-decoder approaches," arXiv preprint arXiv:1409.1259, 2014.
[31] J. Friedrich, P. Zhou, and L. Paninski, "Fast Active Set Methods for Online Deconvolution of Calcium Imaging Data," arXiv.org, Sept. 2016.
[32] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio, "Theano: A cpu and gpu math compiler in python," in Proc. 9th Python in Science Conf, pp. 1–7, 2010.
[33] E. A. Pnevmatikakis, Y. Gao, D. Soudry, D. Pfau, C. Lacefield, K. Poskanzer, R. Bruno, R. Yuste, and L. Paninski, "A structured matrix factorization framework for large scale calcium imaging data analysis," arXiv preprint arXiv:1409.2903, 2014.
[34] M. B. Ahrens, J. M. Li, M. B. Orger, D. N. Robson, A. F. Schier, F. Engert, and R. Portugues, "Brain-wide neuronal dynamics during motor adaptation in zebrafish," Nature, vol. 485, pp. 471–477, May 2012.
[35] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther, "How to train deep variational autoencoders and probabilistic ladder networks," arXiv preprint arXiv:1602.02282, 2016.
[36] N. Apthorpe, A. Riordan, R. Aguilar, J. Homann, Y. Gu, D. Tank, and H. S. Seung, "Automatic neuron detection in calcium imaging data using convolutional networks," in Advances in Neural Information Processing Systems, pp. 3270–3278, 2016.
[37] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther, "Improving semi-supervised learning with auxiliary deep generative models," in NIPS Workshop on Advances in Approximate Bayesian Inference, 2015.
[38] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science, vol. 313, no. 5793, pp. 1642–1645, 2006.
Adaptive Active Hypothesis Testing under Limited Information
Fabio Cecchi
Eindhoven University of Technology, Eindhoven, The Netherlands
[email protected]
Nidhi Hegde
Nokia Bell Labs, Paris-Saclay, France
[email protected]
Abstract
We consider the problem of active sequential hypothesis testing where a Bayesian
decision maker must infer the true hypothesis from a set of hypotheses. The
decision maker may choose from a set of actions, where the outcome of an action is
corrupted by independent noise. In this paper we consider a special case where the
decision maker has limited knowledge about the distribution of observations for
each action, in that only a binary value is observed. Our objective is to infer the
true hypothesis with low error, while minimizing the number of actions sampled.
Our main results include the derivation of a lower bound on sample size for our
system under limited knowledge and the design of an active learning policy that
matches this lower bound and outperforms similar known algorithms.
1 Introduction
We consider the problem of active sequential hypothesis testing with incomplete information. The
original problem, first studied by Chernoff [1], is one where a Bayesian decision maker must infer
the correct hypothesis from a set of J hypotheses. At each step the decision maker may choose from
W actions where the outcome of an action is a random variable that depends on the action and the
true (hidden) hypothesis. In prior work, the probability distribution functions on the outcomes are
assumed to be known. In the present work we assume that these distributions are not known, and
only some rough information about the outcomes of the actions is known, to be made more precise
further on.
Active hypothesis testing is an increasingly important problem these days, with applications that
include the following. (a) Medical diagnostics ([2]) systems that include clinical trials for testing a
new treatment, or diagnostics of a new disease. (b) Crowdsourcing: online platforms for task-worker
matching such as Amazon's Mechanical Turk or TaskRabbit, where, as new tasks arrive, they must
be matched to workers capable of working on them. (c) Customer hotline centres or Q&A forums:
online platforms such as StackExchange where questions are submitted, and users with varying
capabilities are available for providing an answer. This includes customer service centres where
customer tickets are submitted and the nature of the problem must be learned before its treatment (an
example where supervised learning techniques are used is [3]). (d) Content search problems where
an incoming image must be matched to known contents, as studied in [4].
We now informally describe our model. In the general instance of our problem, the true hypothesis,
θ*, is one in a set of J hypotheses, J = {θ_1, . . . , θ_J}, and a set of W actions is available, where
the outcomes of the actions depend on the true hypothesis. When the true hypothesis is θ_j and
action w is chosen, a noisy outcome X_{w,j} ∈ J is observed, whose distribution, p_{w,j}(·) ∈ P(J), is
given. The objective then is to select an action at each step so as to infer the true hypothesis in a
minimum number of steps, with a given accuracy. In our model, we assume that the decision maker
has limited information about the outcome distributions. We define the principal set of an action w as
J_w ⊆ J. When action w is sampled, a noisy binary outcome y ∈ {−1, 1} is observed, which gives
an indication of whether the action classifies the hypothesis in the set J_w. The quality of action w,
α_w, is related to the noise in the outcome. Rather than the distributions p_{w,j}(·), we assume that the
decision maker only has knowledge of the principal set J_w and quality α_w of each action.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Related work
Since the seminal work by Chernoff [1], active hypothesis testing and variants of the problem have
been studied through various perspectives (see [5] for a brief survey). Chernoff derived a simple
heuristic algorithm whose performance is shown to achieve asymptotic optimality in the regime
where the probability of error vanishes. Specifically, it is shown that as the probability of error ε
decreases, the expected number of samples needed by Chernoff's algorithm grows as −log(ε). Most
of the past literature in active sequential hypothesis testing has dealt with extensions of Chernoff's
model, and has shown that Chernoff's algorithm performs well in more general settings [6, 7]. A
notable exception is [8], where the impact of the number of hypotheses is analyzed and an algorithm
that performs better than Chernoff's benchmark is provided for the case of large values of J.
Our work differs from prior work in a few ways. First, the hypothesis need not be locally identifiable.
While in [1] each action is able to distinguish each pair of hypotheses, we assume that each hypothesis
is globally identifiable, i.e., each pair of hypotheses can be discerned by at least one action. This is a
common assumption in the area of distributed hypothesis testing ([9, 10]) and a weaker assumption
than that of Chernoff. Note that dropping this assumption is not novel in itself, and has been done
in other work such as [8]. Second, a novel extension in our work, differing from [8] is that we do
not assume full knowledge on the actions? statistical parameters. The responses of actions are noisy,
and in past literature the probability distributions governing them were assumed to be known. In
our model, we drop this assumption, and we only require knowledge of a lower bound α_w > 1/2 on the
probability that action w will provide a correct response, no matter the hypothesis we want to test. As
far as we know, no previous work in active sequential learning has tackled the problem of incomplete
statistical information and we believe that such an extension may provide a non-negligible impact in
real-life applications.
Active hypothesis testing is similar to the problem of Bayesian active learning. The latter perspective is considered in [11], where a noisy Bayesian active learning setting is used on the hypothesis
testing problem with asymmetric noise and a heuristic based on the extrinsic Jensen-Shannon (EJS)
divergence [12] is proposed. As in [8], full knowledge of the probability distributions governing
the noise is available. In contrast, in our work we consider a more restricted model where only a
binary outcome with noise is given by the actions on the large hypothesis space. Inference with
binary responses is considered in work on generalized binary search (GBS) [13], which is a special
case where the label set (the outcome of actions) is binary, with symmetric, non-persistent
noise. Our work differs from this type of work in that we consider asymmetric label-dependent noise,
that is, α_w varies with action w.
We thus position our work between [11, 8] and [13]. While the former assumes full knowledge on
the noise distributions, we assume that only a binary response is provided and only a lower bound
on the value that governs the outcome is known, and while the latter considers symmetric noise, we
extend to asymmetric label-dependent noise.
Our contribution. Our main objective is to investigate the minimum sample query size of this
system for a certain level of accuracy in the inference of the true hypothesis, and to design efficient
policies for this inference. Our contributions in the present paper are as follows. First, we consider the
system under limited knowledge of outcome distribution. This restricted scenario adds a significant
constraint for the action selection policy, and the belief vector update policy. To the best of our
knowledge, this restricted scenario has not been considered in past literature. Second, under the
limited knowledge constraint, we propose the Incomplete-Bayesian Adaptive Gradient (IBAG) policy
which includes a belief vector update rule that we call Incomplete-Bayesian, and an action selection
rule, named Adaptive Gradient, that follows the drift of the (unknown) coordinate of interest in the
belief vector. Third, we derive a lower bound on the sample size for the system under incomplete
information, and show that the performance of IBAG matches this bound. We also carry out numerical
experiments to compare IBAG to prior work.
2 Model
The classic model of the active sequential learning problem consists in sequentially selecting one
of several available sensing actions, in order to collect enough information to identify the true
hypothesis, as considered in [1]. We thus consider a system where a decision maker has at his
disposal a finite set of actions W = {1, . . . , W}, and there is a set of J = |J| < ∞ possible
hypotheses, J = {θ_1, . . . , θ_J}. (For the rest of the paper, we refer to a hypothesis only by its index,
i.e., j for hypothesis θ_j, for ease of notation.) When the true hypothesis is j and action w is sensed, the
outcome X_{w,j} ∈ J is sampled from the distribution p_{w,j}(·) ∈ P(J), i.e., P{X_{w,j} = j'} = p_{w,j}(j').
In our model, we assume to have limited information about the actions, and this affects the classic
model in two ways. First, for every sampled action w, a binary outcome y ∈ {−1, 1} is observed,
indicating whether the inference of the hypothesis by this action is in J_w or not, i.e., the response
observed is Y_{w,j} ∈ {−1, 1}, where

    Y_{w,j} = 1 if X_{w,j} ∈ J_w,  and  Y_{w,j} = −1 if X_{w,j} ∉ J_w.
The subset J_w ⊆ J is assumed to be known, and it is described by the matrix g ∈ {−1, 1}^{W×J}, where

    g_{w,j} = 1 if j ∈ J_w,  and  g_{w,j} = −1 if j ∉ J_w.    (1)
Observe that the probability that an action w correctly identifies the subset to which the true hypothesis
j belongs is given by q_{w,j} := P{Y_{w,j} = g_{w,j}} = Σ_{j' : g_{w,j'} = g_{w,j}} p_{w,j}(j'). However, as a second
restriction, instead of knowing q_{w,j}, the capacity, or quality, of an action w is captured by α_w, where
we assume that

    q_{w,j} ≥ α_w,  for all j ∈ J, w ∈ W.    (2)

We thus characterize each action by its principal set, J_w, and its quality, α_w.
Assumption 1. For every action w ∈ W, the principal set J_w ⊆ J and the quality α_w ∈ (1/2, 1)
are known. Denote δ_w = 2α_w − 1, where δ_w ∈ [δ_m, δ_M] and δ_m, δ_M ∈ (0, 1).
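As a small numerical sketch of these definitions (the outcome distributions, principal sets, and all numbers below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical example with J = 3 hypotheses and W = 2 actions.
J, W = 3, 2
# p[w, j, j'] = probability that action w outputs hypothesis j' when the
# true hypothesis is j (each row is a distribution over the J hypotheses).
p = np.array([
    [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],  # action 0
    [[0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]],  # action 1
])
# Principal sets: action 0 tests {0}, action 1 tests {0, 1}.
g = np.array([
    [1, -1, -1],
    [1, 1, -1],
])

# q[w, j] = P{Y_{w,j} = g_{w,j}}: mass of p[w, j, :] on the same side of
# the principal set as j (cf. the definition above).
q = np.zeros((W, J))
for w in range(W):
    for j in range(J):
        same_side = g[w] == g[w, j]
        q[w, j] = p[w, j, same_side].sum()

# Any lower bound on q[w, :] is a valid quality alpha_w; the tightest is:
alpha = q.min(axis=1)
```

Here α_w is taken as the tightest valid bound min_j q_{w,j}; in the model, only some lower bound α_w > 1/2 is assumed known, not the q_{w,j} themselves.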
Since each action can only indicate whether the hypothesis belongs to a subset or not, there must exist
an action w ∈ W for which j_1 and j_2 belong to different subsets, for all pairs j_1, j_2 ∈ J. Define the
subset W_{j_1,j_2} ⊆ W as W_{j_1,j_2} = {w ∈ W : g_{w,j_1} g_{w,j_2} = −1}.
Assumption 2. For every j_1, j_2 ∈ J, the subset W_{j_1,j_2} is nonempty, i.e., each hypothesis is globally
identifiable.
For every action w ∈ W and hypothesis j ∈ J we define the subsets J_{w,+j} and J_{w,−j}, which are,
respectively, given by the hypotheses that action w cannot and can distinguish from j, i.e.,

    J_{w,+j} = {j' ∈ J : g_{w,j'} g_{w,j} = 1},    J_{w,−j} = {j' ∈ J : g_{w,j'} g_{w,j} = −1}.

Note that w ∈ W_{j_1,j_2} if and only if j_2 ∈ J_{w,−j_1} (or equivalently j_1 ∈ J_{w,−j_2}).
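Assumption 2 can be checked directly from the matrix g. The following sketch (the function name and example principal sets are ours, not from the paper) tests whether every pair of hypotheses is separated by at least one action:

```python
import numpy as np
from itertools import combinations

def globally_identifiable(g):
    """Check Assumption 2: for every pair of hypotheses (j1, j2) there is
    at least one action w with g[w, j1] * g[w, j2] == -1."""
    W, J = g.shape
    return all(
        any(g[w, j1] * g[w, j2] == -1 for w in range(W))
        for j1, j2 in combinations(range(J), 2)
    )

# With principal sets {0} and {0, 1} over 3 hypotheses, every pair of
# hypotheses is separated by some action:
print(globally_identifiable(np.array([[1, -1, -1],
                                      [1, 1, -1]])))  # True

# A single action testing {0} cannot separate hypotheses 1 and 2:
print(globally_identifiable(np.array([[1, -1, -1]])))  # False
```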
We aim to design a simple algorithm to infer the correct hypothesis using as few actions as possible. The true hypothesis will be denoted by $j^* \in \mathcal{J}$. The learning process is captured by the evolution of the belief vector $\mu(t) \in \mathcal{P}(\mathcal{J})$, where $\mu_j(t)$ denotes the decision maker's confidence at time $t$ that the true hypothesis is $j$. At the initial step $t = 1$, the belief vector $\mu(1) \in \mathcal{P}(\mathcal{J})$ is initialized so that $\mu_j(1) > 0$ for every $j \in \mathcal{J}$. Since we assume to initially lack any information on the true hypothesis, without loss of generality we set $\mu_j(1) = 1/J$ for every $j \in \mathcal{J}$.
At every step $t \ge 1$, according to the belief vector $\mu(t)$, the decision maker determines the next action to sense, $F_W(\mu(t)) = w(t) \in \mathcal{W}$, according to some selection rule $F_W(\cdot)$. The outcome $y(t) \in \{-1, 1\}$ from the chosen action $w(t)$ is used to update the belief vector according to an update rule $F_U(\mu(t), w(t), y(t)) = \mu(t+1) \in \mathcal{P}(\mathcal{J})$. The algorithm ends at time $T^\delta$, and the inferred hypothesis is given by $\hat{j} = \arg\max_{j \in \mathcal{J}} \mu_j(T^\delta)$. Sensing actions is stopped when one of the posteriors is larger than $1 - \delta$, for some $\delta > 0$:
$$T^\delta = \inf\{t \ge 0 : \max_{j \in \mathcal{J}} \mu_j(t) > 1 - \delta\}. \qquad (3)$$
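The generic sensing loop implied by the stopping rule (3) can be sketched as follows. This is an illustrative skeleton, not the paper's implementation: `select_action`, `update_belief`, and `sample_outcome` are placeholder callables standing in for the selection rule $F_W$, the update rule $F_U$, and the environment.

```python
import numpy as np

def sense_until_confident(mu, select_action, update_belief, sample_outcome,
                          delta=0.01, max_steps=10_000):
    """Run the generic sensing loop: pick an action, observe its binary
    outcome, update the belief, and stop once max_j mu_j > 1 - delta."""
    t = 0
    while mu.max() <= 1 - delta and t < max_steps:
        w = select_action(mu)          # selection rule F_W
        y = sample_outcome(w)          # binary outcome in {-1, +1}
        mu = update_belief(mu, w, y)   # update rule F_U
        t += 1
    # inferred hypothesis, final belief, and stopping time T^delta
    return int(np.argmax(mu)), mu, t
```

The `max_steps` cap is a practical safeguard only; the analysis below bounds the stopping time $T^\delta$ directly.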
3 The Incomplete-Bayesian update rule
We now describe how the decision maker updates the belief vector after he observes the outcome of an action. Given a belief vector $\mu \in \mathcal{P}(\mathcal{J})$ and the observation $y \in \{-1, 1\}$ obtained from action $w \in \mathcal{W}$, define
$$\hat{f}(y, j, w) = \begin{cases} q_{w,j}, & y = g_{w,j}, \\ 1 - q_{w,j}, & y = -g_{w,j}, \end{cases} \qquad f(y, j, w) = \begin{cases} \alpha_w, & y = g_{w,j}, \\ 1 - \alpha_w, & y = -g_{w,j}. \end{cases}$$
Note that $\hat{f}(y, j, w)$ denotes the probability of having outcome $y$ given that the action $w$ is chosen and the true hypothesis is $j$. The standard Bayesian update rule is given by the map $F^B_U(\mu, w, y)$, where
$$F^B_{U,j}(\mu, w, y) = \frac{\hat{f}(y, j, w)\, \mu_j}{\sum_{i \in \mathcal{J}} \hat{f}(y, i, w)\, \mu_i}.$$
In our model, however, the values $q_{w,j}$ for $w \in \mathcal{W}$ are unknown to the decision maker. Hence, we introduce the Incomplete Bayesian (IB) update rule, which mimics the Bayesian rule, but with limited knowledge of the outcome probabilities. The IB update rule is given by the map $F_U(\mu, w, y)$, where
$$F_{U,j}(\mu, w, y) = \frac{f(y, j, w)\, \mu_j}{\sum_{i \in \mathcal{J}} f(y, i, w)\, \mu_i}. \qquad (4)$$
Observe that the Bayesian and IB update rules are identical when $q_{w,j} = \alpha_w$.
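The IB update (4) is a one-line reweighting. A minimal sketch in Python, assuming the response matrix $g$ of (1) and the quality vector $\alpha$ of Assumption 1 are given as arrays (this is an illustration, not the authors' code):

```python
import numpy as np

def ib_update(mu, w, y, g, alpha):
    """Incomplete-Bayesian update (4): reweight each hypothesis j by
    f(y, j, w) = alpha_w if y == g[w, j] else 1 - alpha_w, then normalize.
    Only the known lower bound alpha_w is used, never the true q_{w,j}."""
    f = np.where(g[w] == y, alpha[w], 1.0 - alpha[w])
    post = f * mu
    return post / post.sum()
```

Replacing `alpha[w]` by the true `q[w, j]` for each hypothesis `j` recovers the standard Bayesian map $F^B_U$.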
In practice, $\mu_j(t)$ evolves according to both the quality of the chosen action, $\alpha_w$, and the relation between this action's principal set $J_w$ and the current state of the belief vector $\mu(t)$. This dependence is formalized in the following lemma, whose proof is included in the supplementary material, Section B.
Lemma 1. Given $\mu(t) \in \mathcal{P}(\mathcal{J})$ and $w(t) \in \mathcal{W}$, it holds that
$$\frac{\mu_{j^*}(t+1)}{\mu_j(t+1)} = \frac{\mu_{j^*}(t)}{\mu_j(t)} \cdot \begin{cases} 1, & \text{w.p. } \mathbb{1}\{w(t) \notin \mathcal{W}_{j^*,j}\}, \\[4pt] \dfrac{1+\lambda_{w(t)}}{1-\lambda_{w(t)}}, & \text{w.p. } \mathbb{1}\{w(t) \in \mathcal{W}_{j^*,j}\}\, q_{w(t),j^*}, \\[4pt] \dfrac{1-\lambda_{w(t)}}{1+\lambda_{w(t)}}, & \text{w.p. } \mathbb{1}\{w(t) \in \mathcal{W}_{j^*,j}\}\, \big(1 - q_{w(t),j^*}\big). \end{cases}$$
3.1 A lower bound on the sample size
Note that the IB update rule alone sets some constraints on the performance. In particular, if we require the error probability to be low, then the expected number of samples is necessarily larger than a certain quantity depending on the model parameters. We show that this quantity asymptotically grows as $\log(1/\delta)$ in the asymptotic regime where $\delta \to 0$.
Theorem 1. Assume the IB update rule is applied to the belief vector and that
$$\lim_{\delta \to 0} \mathbb{P}\{\mu_{j^*}(T^\delta) \le \delta\} \le \bar{\varepsilon} < 1.$$
Then, there exist functions $K^l_0(\delta), K^l_1(\delta)$ such that
$$\mathbb{E}[T^\delta] \ge K^l_1(\delta) \log\frac{1}{\delta} + K^l_0(\delta), \qquad \lim_{\delta \to 0} K^l_i(\delta) = K^l_i > 0, \quad \text{for } i = 0, 1.$$
The proof of this result is presented in the supplement, Section A.2. We sketch the proof here. We first define
$$S_t(j_1, j_2) = \log \frac{\mu_{j_1}(t)}{\mu_{j_2}(t)}, \qquad S(j_1, j_2) = S_{T^\delta}(j_1, j_2),$$
and show that, on the one hand, if $\mathbb{P}\{\hat{j} \neq j^*\}$ is small, then $\sum_{j \neq j^*} S(j^*, j)$ is large with high probability, and on the other hand, if $t$ is small, then $\sum_{j \neq j^*} S_t(j^*, j)$ is small with high probability. We use these properties to derive a lower bound on the tail probability of $T^\delta$, and thus on its expected value.
Further, we can control the belief vector evolution by deriving bounds on the ratio between coordinates of the belief vector under the IB policy. Specifically, in the supplementary material, Section A.3, we bound the probability that $\mu_j(t) > \mu_{j^*}(t)$ at a certain time, and investigate how this probability evolves with $t$.
4 Adaptive Gradient: the action selection policy
4.1 A gradient-based selection policy
We now present an action selection policy that, together with the IB update rule, defines our active learning algorithm, which we call the Incomplete-Bayesian Adaptive Gradient (IBAG) policy. We will then analyze the complete algorithm, showing that its performance asymptotically matches the lower bound provided in Theorem 1 as $\delta \to 0$.
We focus on the $j^*$-th coordinate of the belief vector, and define the drift at time $t$ as
$$D_w(\mu(t)) = \mathbb{E}[\mu_{j^*}(t+1) \mid \mu(t), w(t) = w] - \mu_{j^*}(t).$$
Simple algebra and (4) yield the following lemma.
Lemma 2. It holds that
$$D_w(\mu(t)) = 4\lambda_w\, \mu_{j^*}(t)\, \mu_{w,-j^*}(t)\, \frac{q_{w,j^*} - \alpha_w + \lambda_w\, \mu_{w,-j^*}(t)}{1 - \lambda_w^2 \big(1 - 2\mu_{w,-j^*}(t)\big)^2}, \qquad (5)$$
where
$$\mu_{w,+j} = \sum_{j' \in J_{w,+j}} \mu_{j'}, \qquad \mu_{w,-j} = \sum_{j' \in J_{w,-j}} \mu_{j'}.$$
Assume for a moment that we know the true hypothesis $j^*$ and $q_{w,j^*}$ for every $w \in \mathcal{W}$. Then, in order to let $\mu_{j^*}(t)$ grow as much as possible, we would greedily select the action $w$ which maximizes $D_w(\mu(t))$. Our worker selection policy will attempt to mimic as closely as possible this greedy policy, while operating without complete information.
Lemma 3. It holds that $D_w(\mu(t)) \ge D^L_w(\mu(t))$, where
$$D^L_w(\mu(t)) = 4\mu_{j^*}(t)\, \frac{\lambda_w^2\, \Delta_w^2(t)}{1 - \lambda_w^2 \big(1 - 2\Delta_w(t)\big)^2}, \qquad (6)$$
and
$$\Delta_w(t) = \min\Big\{\sum_{j \in J_w} \mu_j(t),\ \sum_{j \notin J_w} \mu_j(t)\Big\}.$$
The proof follows from the fact that $D_w(\mu(t))$ is increasing both in $q_{w,j^*}$ and in $\mu_{w,-j^*}(t)$ for every $w \in \mathcal{W}$, and the observation that $q_{w,j^*} \ge \alpha_w$ and $\mu_{w,-j^*}(t) \ge \Delta_w(t)$.
Note that $D^L_w(\mu(t))$ provides us a tight lower bound on the expected growth of the coordinate of the true hypothesis if action $w$ is chosen at step $t$. Indeed, $D^L_w(\mu(t))$ can be decomposed into a part that uses the $j^*$-th coordinate of the belief vector and a part that can be computed without knowing $j^*$. The Adaptive Gradient (AG) selection rule then chooses, at step $t$, the action $w_D(t) \in \mathcal{W}$ such that
$$w_D(t) = F_W(\mu(t)) = \arg\max_{w \in \mathcal{W}} G(\Delta_w, \lambda_w), \qquad G(v, d) = \frac{d^2 v^2}{1 - d^2 (1 - 2v)^2}, \qquad (7)$$
i.e., we select the action maximizing the current lower bound on the expected growth of the $j^*$-th coordinate of the belief vector. Ties are broken uniformly.
Remark: Assume the actions have different costs of sensing. The AG selection rule can then be generalized as follows:
$$w_D(t) = F^c_W(\mu(t)) = \arg\max_{w \in \mathcal{W}} \frac{G(\Delta_w, \lambda_w)}{c_w}. \qquad (8)$$
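Both forms (7) and (8) of the AG rule reduce to evaluating $G(\Delta_w, \lambda_w)$ per action. A minimal sketch, assuming the matrix $g$ and the vector $\lambda$ of Assumption 1 are given as arrays (the cost-aware variant (8) is obtained by passing a hypothetical `costs` vector; this is not the authors' implementation):

```python
import numpy as np

def ag_select(mu, g, lam, costs=None):
    """Adaptive Gradient rule: maximize G(Delta_w, lambda_w) over actions,
    where G(v, d) = d^2 v^2 / (1 - d^2 (1 - 2v)^2).  Dividing by a cost
    vector c_w gives the generalized rule (8)."""
    mass_in = (g == 1).astype(float) @ mu       # sum of mu_j over j in J_w
    delta = np.minimum(mass_in, 1.0 - mass_in)  # Delta_w(t)
    G = (lam ** 2 * delta ** 2) / (1.0 - lam ** 2 * (1.0 - 2.0 * delta) ** 2)
    if costs is not None:
        G = G / costs
    return int(np.argmax(G))
```

Since $G(v, d)$ is increasing in both arguments, the rule trades off how evenly an action splits the current belief ($\Delta_w$) against its quality ($\lambda_w$).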
4.2 An upper bound
We now present our main result. We show that the expected number of samples required by our algorithm IBAG asymptotically matches the lower bound obtained in Theorem 1.
Theorem 2. Under the IBAG algorithm, there exist constants $K^u_0, K^u_1 > 0$ independent of $\delta$ such that
$$\mathbb{E}[T^\delta] \le K^u_1 \log\frac{1}{\delta} + K^u_0.$$
The proof is provided in the supplementary material, Section A.5. This result is based on the intuition that IBAG never selects an action that is too uninformative relative to the other actions. Specifically, the information provided by an action $w$ at time $t$ depends on its quality $\alpha_w$ and its outcome over the subset $J_{w,-j^*}$. In other words, the value $\mu_{w,-j^*}$ must decrease to 0, hence the higher this value is for a given action $w$, the more we can still learn from sensing this action. As a proxy for $\mu_{w,-j^*}$ we use $\Delta_w$, which also must be as large as possible. The following lemma, whose proof is given in the supplementary material, Section B, provides bounds on the relative quality of $\Delta_{w_D(t)}$ compared to $\Delta_w$.
Lemma 4. For every $w \in \mathcal{W}$, it holds that $\Delta_{w_D(t)} \ge \frac{\lambda_m}{\lambda_M}\, \Delta_w$.

5 Numerical results
We now present numerical results based on simulations. In order to gain practical insight, we will focus on a task labelling application. A task labelling problem might arise in a crowdsourcing scenario such as Amazon's Mechanical Turk, or in content search problems where an incoming image must be matched to known contents. The mapping to the hypothesis testing problem is as follows. The set of hypotheses $\mathcal{J}$ corresponds to the set of task labels, with $j^*$, the true hypothesis, being the latent task label that must be inferred. The set of $W$ actions corresponds to $W$ workers who perform the labelling when sampled, where $p_{w,j}(j')$ is the probability that worker $w$ assigns the task the label $j'$ when the true label is $j$. For each worker $w$, we will call $J_w$ the expertise of the worker (the principal set of the action), and $\alpha_w$ will be the quality of the worker. We will first investigate the impact of the lack of exact knowledge, i.e., the difference between $\alpha_w$ and $q_{w,j}$, which we call the slack. We then compare our algorithm to that in [1] and that of [13] for a few scenarios of interest.
5.1 The effect of the slack
Here we present a simulated scenario with $J = 100$, $W = 15$, and fixed subsets $\{J_w\}_{w \in \mathcal{W}}$ satisfying Assumption 2. We set $\delta = 0.001$, and assume the incoming job-type to be $j^* = 1$. In Figure 1 we present the results of 1000 runs of the simulation for every instance of, respectively, the first and second scenario described below. Recalling that the simulation stops as soon as $\max_j \mu_j(t) > 1 - \delta$, we specify that out of the entire set of simulations of these scenarios the algorithm never failed to infer the correct incoming job type $j^* = 1$. For both scenarios, in Figure 1 (left) we display the averaged sample paths of the coordinate $\mu_{j^*}(t)$ and in Figure 1 (right) the average sample size required for the decision maker to make an inference.
The performance upper bound is pessimistic. In the first set of simulations, scenario A, we fix the quality vector $\alpha$ with $\alpha_w \in (0.55, 0.6)$ for every worker $w \in \mathcal{W}$. We then let the parameter $s$ vary in $\{0, .05, .1, .15, .2, .25, .3\}$ and assume $q_{w,j^*} = \alpha_w + s$ for every $w \in \mathcal{W}$. In Theorem 2 we proved an upper bound for $\mathbb{E}[T^\delta]$ when the IBAG algorithm is employed. It can be observed that the upper bound does not depend on $q_{w,j^*}$, but only on $\alpha_w$. In fact, the upper bound is obtained by looking at the worst case scenario, where $q_{w,j^*} = \alpha_w$ for every $w \in \mathcal{W}$ and $j \in \mathcal{J}$. As the slack $s$ grows, the performance of the algorithm drastically improves, even if this is not reflected in the upper bound term.
Robustness to perturbations in estimates of worker skills. In the second set of simulations, scenario B, we fix the quality vector with $q_{w,j^*} \in (0.85, 0.9)$ for every worker $w \in \mathcal{W}$. We then let the parameter $s$ vary in $\{0, .05, .1, .15, .2, .25, .3\}$ and set $\alpha_w = q_{w,j^*} - s$ for every $w \in \mathcal{W}$. It is observed that the IBAG algorithm performs well even when the decision maker's knowledge of the skills is not precise, and he decides to play safe by reducing the lower bound $\alpha_w$.
(a) Scenario A (b) Scenario B
Figure 1: ((a), (b) left) Empirical average of the sample paths of the process $\mu_{j^*}(t)$; ((a), (b) right) empirical average of the sample size $T^\delta$.
We therefore deduce that the learning process strongly depends on the true skills of the workers $q_{w,j}$ (Figure 1(a)); however, their exact knowledge is not fundamental for IBAG to behave well (Figure 1(b)): it is robust to small perturbations.
5.2 Comparison to existing algorithms
Chernoff algorithm. As we mentioned, most of the existing sequential hypothesis testing algorithms are based on Chernoff's algorithm presented in [1]. Such an algorithm, at step $t$, identifies the job-types $j_1, j_2 \in \mathcal{J}$ associated with the two highest values of $\mu(t)$ and selects the class of workers $w_C$ that best distinguishes $j_1$ and $j_2$, i.e., $w_C = \arg\max_{w \in \mathcal{W}_{j_1,j_2}} \alpha_w$. In the asymptotic regime with $\delta \to 0$, the expected sample size required by Chernoff's algorithm is of order $\log(1/\delta)$, exactly as with IBAG. This has been proven ([1, 8]) in the case with full knowledge of the matrix $p_{w,j}(\cdot)$. What we emphasize here is that by focusing only on the two highest components of $\mu(t)$, the decision maker loses information that might help him make a better selection of worker $w(t)$. In particular, Chernoff's algorithm bases its decision largely on the workers' skills and thus does not behave as well as it should when these are not informative enough.
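The selection step just described can be sketched as follows. This is a simplified reading of the rule used for comparison here, not Chernoff's original algorithm from [1]; ties and degenerate cases are ignored.

```python
import numpy as np

def chernoff_select(mu, g, alpha):
    """Chernoff-style selection step: take the two most likely hypotheses
    under mu and query the highest-quality worker that separates them."""
    j1, j2 = np.argsort(mu)[-2:]                            # two largest beliefs
    separating = np.flatnonzero(g[:, j1] * g[:, j2] == -1)  # W_{j1,j2}
    return int(separating[np.argmax(alpha[separating])])
```

Note that the belief vector enters only through the identity of the top two hypotheses; all remaining mass is ignored, which is exactly the information loss discussed above.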
Soft-Decision GBS algorithm. The algorithm proposed in [13] generalizes the intuition behind optimal GBS algorithms in noiseless environments. This algorithm, given a belief vector $\mu(t)$ at step $t$, picks the worker $\tilde{w}$ such that
$$\tilde{w} = \arg\min_{w} \Big|\sum_{j \in \mathcal{J}} \mu_j\, g_{w,j}\Big| = \arg\min_{w} \Big|\sum_{j \in J_w} \mu_j - \sum_{j \notin J_w} \mu_j\Big| = \arg\max_{w} \{\Delta_w\}.$$
Intuitively, the Soft-Decision GBS algorithm selects the worker that is the most "unsure", in the sense that the worker splits the belief vector as evenly as possible. Since the model in [13] does not allow for different qualities of the workers (noise is symmetric there), this feature does not play a role in the worker selection policy. Note that when the qualities of all workers are identical, the Soft-Decision GBS and IBAG algorithms are identical. In [13], an asymptotic performance analysis is presented, and under certain constraints on the problem geometry, it is shown that the sample size required is of order $\log(1/\delta) + \log J$, and once again the performance in terms of the error probability matches that of IBAG.
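The Soft-Decision GBS selection step admits an equally short sketch; note that, unlike IBAG's rule, the qualities $\alpha_w$ never enter (a minimal illustration, not the implementation from [13]):

```python
import numpy as np

def soft_gbs_select(mu, g):
    """Soft-Decision GBS step: pick the worker whose principal set splits
    the current belief most evenly, i.e. minimize |sum_j mu_j * g[w, j]|,
    which is equivalent to maximizing Delta_w.  Worker quality is ignored."""
    return int(np.argmin(np.abs(g @ mu)))
```

Comparing this with `ag_select` above makes the relationship explicit: when all $\lambda_w$ are equal, maximizing $G(\Delta_w, \lambda_w)$ reduces to maximizing $\Delta_w$, and the two rules coincide.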
We now compare our algorithm IBAG with the Chernoff algorithm under three scenarios, and with Soft-Decision GBS only for the third scenario, where the quality $\alpha_w$ of the workers (the noise in GBS) differs among the workers.
In the first scenario, we set $J = 32$, $j^* = 1$, and $\delta = 0.003$. We assume two kinds of worker classes. We have 5 "generalist" workers, each of whom has $|J_w| = J/2 = 16$, and moreover for every pair of job types $(j_1, j_2)$ there exists a generalist belonging to $\mathcal{W}_{j_1,j_2}$. In addition, we have 32 "specialist" workers who can distinguish exactly 1 job-type, i.e., $|J_w| = 1$. We assume that there is one specialist per job-type, and note that among them there is also $w^*$ such that $J_{w^*} = \{j^*\}$. We consider two cases: in case A, the skills of the workers are identical, $\alpha_w = 0.8$ for every $w \in \mathcal{W}$, and in case B we drop the generalists' skill level to $\alpha_w = 0.75$. We assume $q_{w,j} = \alpha_w$ for every $w \in \mathcal{W}$ and $j \in \mathcal{J}$.
In the second scenario, we set $J = 30$ with only specialists present. We set $\delta = 0.003$ and $j^* = 1$. In this scenario we consider two cases as well: in case A, $\alpha_w = 0.7$ for every worker, while in case B we drop the skill level of the specialist on job-type $j^*$ to 0.65, representing a situation where the system is ill-prepared for an incoming job. We assume $q_{w,j} = \alpha_w$ for every $w \in \mathcal{W}$ and $j \in \mathcal{J}$.
We display the results for both scenarios in Figure 2. In Figure 2 (top) we display boxplots of the number of queries required, and in Figure 2 (bottom) we show the expectation of the number of queries per kind of worker. In both scenarios, the performance of Chernoff's algorithm is drastically
(a) Scenario 1 (b) Scenario 2 (c) Scenario 3
Figure 2: (top) Boxplot of the sample size $T^\delta$. (bottom) Empirical expected number of times the different groups of workers are queried.
weakened by only a tiny variation in $\alpha_w$, yielding a very different behavior. In the first scenario, although it is very informative to query the generalists in an early explorative stage, under Chernoff's algorithm the selection of the workers relies too much on the skill levels and therefore always queries the specialists. The IBAG algorithm, on the other hand, sensibly decides at each step on the trade-off between getting rough information on a larger set of job pairs, or getting more precise information on a smaller set, and seems to better grasp this quality vs. quantity dilemma.
Similarly, in case B of the second scenario, the low-quality workers (the specialist in $j^*$) are never selected by Chernoff's algorithm, even if their responses have a large impact on the growth of $\mu_{j^*}(t)$. For both cases A and B we see that IBAG outperforms Chernoff.
In the third scenario we set $J = 32$, $W = 42$, and $\delta = 0.03$. We have five low-quality generalist workers with $\alpha_w = 0.55$ and five high-quality generalist workers with $\alpha_w = 0.75$. The remaining 32 workers are specialists with $\alpha_w = 0.8$. The plots comparing all three algorithms are shown in Figure 2(c). We observe again that the Chernoff algorithm never queries generalists and performs the worst. IBAG outperforms Soft-GBS because it queries high-quality workers preferentially, while Soft-GBS does not consider quality.
6 Discussion and conclusion
We have presented and analyzed the IBAG algorithm, an intuitive active sequential learning algorithm
which requires only a rough knowledge of the quality and principal set of each available action.
The algorithm is shown to be competitive and in many cases outperforms Chernoff's algorithm, the
benchmark in the area.
As far as we know, this is the first attempt to analyze a scenario where the decision maker has limited knowledge of the system parameters. In Section 5 we studied, through simulations, the effect of this lack of exact knowledge on the performance of the system, in order to quantify the tradeoff between caution, i.e., how close $\alpha_w$ is to $q_{w,j}$, and the cost. The numerical analysis suggests that a moderate caution does not drastically worsen the performance. In the supplement, Section C, we analyze this tradeoff formally and show results on how cautious the decision maker can be while still ensuring good performance.
A further element of incomplete knowledge would be to allow slight perturbations of the principal sets of the actions. In the present paper we have assumed to know with certainty, for every $w \in \mathcal{W}$ and $j \in \mathcal{J}$, whether $w$ has $j$ in its principal set ($j \in J_w$) or not. In future work we will investigate the impact of uncertainty in the expertise, for instance having $j \in J_w$ with some probability $p_{j,w}$.
As a last remark, it would be interesting to analyze the model when the different actions have
heterogeneous costs. Note that the IBAG algorithm naturally extends to such case, as mentioned in
equation (8). The IBAG algorithm in the framework of the task-worker system could give definitive
answers on whether it is better to sample a response from a cheap worker with a general expertise
and low skill or from more expensive workers with narrow expertise and higher skill.
References
[1] H. Chernoff, "Sequential design of experiments," The Annals of Mathematical Statistics, vol. 30, no. 3, pp. 755–770, 1959.
[2] S. Berry, B. Carlin, J. Lee, and P. Muller, Bayesian Adaptive Methods for Clinical Trials. CRC Press, 2010.
[3] S. C. Hui and G. Jha, "Data mining for customer service support," Information & Management, vol. 38, no. 1, pp. 1–13, 2000.
[4] N. Vaidhiyan, S. P. Arun, and R. Sundaresan, "Active sequential hypothesis testing with application to a visual search problem," in 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), pp. 2201–2205, IEEE, 2012.
[5] B. Ghosh, "A brief history of sequential analysis," Handbook of Sequential Analysis, vol. 1, 1991.
[6] A. Albert, "The sequential design of experiments for infinitely many states of nature," The Annals of Mathematical Statistics, vol. 32, pp. 774–799, 1961.
[7] J. Kiefer and J. Sacks, "Asymptotically optimum sequential inference and design," The Annals of Mathematical Statistics, vol. 34, pp. 705–750, 1963.
[8] M. Naghshvar and T. Javidi, "Active sequential hypothesis testing," The Annals of Statistics, vol. 41, no. 6, pp. 2703–2738, 2013.
[9] A. Lalitha, A. Sarwate, and T. Javidi, "Social learning and distributed hypothesis testing," in Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 551–555, IEEE, 2014.
[10] R. Olfati-Saber, J. Fax, and R. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[11] M. Naghshvar, T. Javidi, and K. Chaudhuri, "Noisy Bayesian active learning," in Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, pp. 1626–1633, IEEE, 2012.
[12] M. Naghshvar and T. Javidi, "Extrinsic Jensen–Shannon divergence with application in active hypothesis testing," in IEEE International Symposium on Information Theory (ISIT), 2012.
[13] R. Nowak, "Noisy generalized binary search," in Advances in Neural Information Processing Systems, pp. 1366–1374, 2009.
Streaming Weak Submodularity:
Interpreting Neural Networks on the Fly
Ethan R. Elenberg
Department of Electrical
and Computer Engineering
The University of Texas at Austin
[email protected]
Moran Feldman
Department of Mathematics
and Computer Science
Open University of Israel
[email protected]
Alexandros G. Dimakis
Department of Electrical
and Computer Engineering
The University of Texas at Austin
[email protected]
Amin Karbasi
Department of Electrical Engineering
Department of Computer Science
Yale University
[email protected]
Abstract
In many machine learning applications, it is important to explain the predictions
of a black-box classifier. For example, why does a deep neural network assign
an image to a particular class? We cast interpretability of black-box classifiers
as a combinatorial maximization problem and propose an efficient streaming
algorithm to solve it subject to cardinality constraints. By extending ideas from
Badanidiyuru et al. [2014], we provide a constant factor approximation guarantee
for our algorithm in the case of random stream order and a weakly submodular
objective function. This is the first such theoretical guarantee for this general class
of functions, and we also show that no such algorithm exists for a worst case stream
order. Our algorithm obtains similar explanations of Inception V3 predictions 10
times faster than the state-of-the-art LIME framework of Ribeiro et al. [2016].
1 Introduction
Consider the following combinatorial optimization problem. Given a ground set $N$ of $N$ elements and a set function $f : 2^N \to \mathbb{R}_{\ge 0}$, find the set $S$ of size $k$ which maximizes $f(S)$. This formulation is at the heart of many machine learning applications such as sparse regression, data summarization,
facility location, and graphical model inference. Although the problem is intractable in general, if
f is assumed to be submodular then many approximation algorithms have been shown to perform
provably within a constant factor from the best solution.
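For concreteness, the standard greedy algorithm referred to below can be sketched in a few lines of Python; here `f` is any set-function oracle, and the sketch ignores ties and lazy-evaluation optimizations:

```python
def greedy_max(f, ground_set, k):
    """Classical greedy forward selection (Nemhauser et al.): repeatedly add
    the element with the largest marginal gain f(S + {e}) - f(S).  For a
    monotone submodular f this achieves a (1 - 1/e)-approximation, at the
    cost of O(N * k) function evaluations."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S
```

The $O(Nk)$ oracle calls in this loop are exactly the cost that the streaming algorithms discussed next are designed to avoid.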
Some disadvantages of the standard greedy algorithm of Nemhauser et al. [1978] for this problem are
that it requires repeated access to each data element and a large total number of function evaluations.
This is undesirable in many large-scale machine learning tasks where the entire dataset cannot fit in
main memory, or when a single function evaluation is time consuming. In our main application, each
function evaluation corresponds to inference on a large neural network and can take a few seconds.
In contrast, streaming algorithms make a small number of passes (often only one) over the data and
have sublinear space complexity, and thus, are ideal for tasks of the above kind.
Recent ideas, algorithms, and techniques from submodular set function theory have been used to
derive similar results in much more general settings. For example, Elenberg et al. [2016a] used
the concept of weak submodularity to derive approximation and parameter recovery guarantees for
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
nonlinear sparse regression. Thus, a natural question is whether recent results on streaming algorithms
for maximizing submodular functions [Badanidiyuru et al., 2014, Buchbinder et al., 2015, Chekuri
et al., 2015] extend to the weakly submodular setting.
This paper answers the above question by providing the first analysis of a streaming algorithm for any class of approximately submodular functions. We use key algorithmic components of SIEVE-STREAMING [Badanidiyuru et al., 2014], namely greedy thresholding and binary search, combined with a novel analysis to prove a constant factor approximation for $\gamma$-weakly submodular functions (defined in Section 3). Specifically, our contributions are as follows.
• An impossibility result showing that, even for 0.5-weakly submodular objectives, no randomized streaming algorithm which uses $o(N)$ memory can have a constant approximation ratio when the ground set elements arrive in a worst case order.
• STREAK: a greedy, deterministic streaming algorithm for maximizing $\gamma$-weakly submodular functions which uses $O(\varepsilon^{-1} k \log k)$ memory and has an approximation ratio of $(1 - \varepsilon)\, \frac{\gamma}{2} \big(3 - e^{-\gamma/2} - 2\sqrt{2}\, e^{-\gamma/2}\big)$ when the ground set elements arrive in a random order.
• An experimental evaluation of our algorithm in two applications: nonlinear sparse regression using pairwise products of features and interpretability of black-box neural network classifiers.
The above theoretical impossibility result is quite surprising since it stands in sharp contrast to known
streaming algorithms for submodular objectives achieving a constant approximation ratio even for
worst case stream order.
One advantage of our approach is that, while our approximation guarantees are in terms of $\gamma$, our algorithm STREAK runs without requiring prior knowledge of the value of $\gamma$. This is important since the weak submodularity parameter $\gamma$ is hard to compute, especially in streaming applications, as a single element can alter $\gamma$ drastically.
We use our streaming algorithm for neural network interpretability on Inception V3 [Szegedy et al.,
2016]. For that purpose, we define a new set function maximization problem similar to LIME [Ribeiro
et al., 2016] and apply our framework to approximately maximize this function. Experimentally,
we find that our interpretability method produces explanations of similar quality as LIME, but runs
approximately 10 times faster.
2 Related Work
Monotone submodular set function maximization has been well studied, starting with the classical analysis of greedy forward selection subject to a matroid constraint [Nemhauser et al., 1978, Fisher et al., 1978]. For the special case of a uniform matroid constraint, the greedy algorithm achieves an approximation ratio of $1 - 1/e$ [Fisher et al., 1978], and a more involved algorithm obtains this ratio also for general matroid constraints [Călinescu et al., 2011]. In general, no polynomial-time algorithm can have a better approximation ratio even for a uniform matroid constraint [Nemhauser and Wolsey, 1978, Feige, 1998]. However, it is possible to improve upon this bound when the data obeys some additional guarantees [Conforti and Cornuéjols, 1984, Vondrák, 2010, Sviridenko et al., 2015]. For maximizing nonnegative, not necessarily monotone, submodular functions subject to a general matroid constraint, the state-of-the-art randomized algorithm achieves an approximation ratio of 0.385 [Buchbinder and Feldman, 2016b]. Moreover, for uniform matroids there is also a deterministic algorithm achieving a slightly worse approximation ratio of $1/e$ [Buchbinder and Feldman, 2016a]. The reader is referred to Bach [2013] and Krause and Golovin [2014] for surveys on submodular function theory.
A recent line of work aims to develop new algorithms for optimizing submodular functions suitable
for large-scale machine learning applications. Algorithmic advances of this kind include
STOCHASTIC-GREEDY [Mirzasoleiman et al., 2015], SIEVE-STREAMING [Badanidiyuru et al.,
2014], and several distributed approaches [Mirzasoleiman et al., 2013, Barbosa et al., 2015, 2016, Pan
et al., 2014, Khanna et al., 2017b]. Our algorithm extends ideas found in SIEVE-STREAMING and
uses a different analysis to handle more general functions. Additionally, submodular set functions
have been used to prove guarantees for online and active learning problems [Hoi et al., 2006, Wei
et al., 2015, Buchbinder et al., 2015]. Specifically, in the online setting corresponding to our setting
(i.e., maximizing a monotone function subject to a cardinality constraint), Chan et al. [2017] achieve
a competitive ratio of about 0.3178 when the function is submodular.
The concept of weak submodularity was introduced in Krause and Cevher [2010], Das and Kempe
[2011], where it was applied to the specific problem of feature selection in linear regression. Their
main results state that if the data covariance matrix is not too correlated (using either incoherence or
restricted eigenvalue assumptions), then maximizing the goodness of fit f(S) = R²_S as a function of
the feature set S is weakly submodular. This leads to constant factor approximation guarantees for
several greedy algorithms. Weak submodularity was connected with Restricted Strong Convexity
in Elenberg et al. [2016a,b]. This showed that the same assumptions which imply the success of
regularization also lead to guarantees on greedy algorithms. This framework was later used for
additional algorithms and applications [Khanna et al., 2017a,b]. Other approximate versions of
submodularity were used for greedy selection problems in Horel and Singer [2016], Hassidim and
Singer [2017], Altschuler et al. [2016], Bian et al. [2017]. To the best of our knowledge, this is the
first analysis of streaming algorithms for approximately submodular set functions.
Increased interest in interpretable machine learning models has led to extensive study of sparse
feature selection methods. For example, Bahmani et al. [2013] consider greedy algorithms for logistic
regression, and Yang et al. [2016] solve a more general problem using ℓ1 regularization. Recently,
Ribeiro et al. [2016] developed a framework called LIME for interpreting black-box neural networks,
and Sundararajan et al. [2017] proposed a method that requires access to the network's gradients with
respect to its inputs. We compare our algorithm to variations of LIME in Section 6.2.
3 Preliminaries
First we establish some definitions and notation. Sets are denoted with capital letters, and all big O
notation is assumed to be scaling with respect to N (the number of elements in the input stream).
Given a set function f, we often use the discrete derivative f(B | A) ≜ f(A ∪ B) − f(A). f is
monotone if f(B | A) ≥ 0 for all A, B and nonnegative if f(A) ≥ 0 for all A. Using this notation one can
define weakly submodular functions based on the following ratio.

Definition 3.1 (Weak Submodularity, adapted from Das and Kempe [2011]). A monotone nonnegative
set function f : 2^N → R≥0 is called γ-weakly submodular for an integer r if

\[
\gamma \;\le\; \gamma_r \;\triangleq \min_{\substack{L,S \subseteq N: \\ |L|,\, |S \setminus L| \le r}} \frac{\sum_{j \in S \setminus L} f(j \mid L)}{f(S \mid L)},
\]

where the ratio is considered to be equal to 1 when its numerator and denominator are both 0.

This generalizes submodular functions by relaxing the diminishing returns property of discrete
derivatives. It is easy to show that f is submodular if and only if γ_{|N|} = 1.
Definition 3.2 (Approximation Ratio). A streaming maximization algorithm ALG which returns
a set S has approximation ratio R ∈ [0, 1] if E[f(S)] ≥ R · f(OPT), where OPT is the optimal
solution and the expectation is over the random decisions of the algorithm and the randomness of the
input stream order (when it is random).
Formally our problem is as follows. Assume that elements from a ground set N arrive in a stream at
either random or worst case order. The goal is then to design a one pass streaming algorithm that,
given oracle access to a nonnegative set function f : 2^N → R≥0, maintains at most o(N) elements in
memory and returns a set S of size at most k approximating

\[
\max_{|T| \le k} f(T),
\]

up to an approximation ratio R(γ_k). Ideally, this approximation ratio should be as large as possible,
and we also want it to be a function of γ_k and nothing else. In particular, we want it to be independent
of k and N.

To simplify notation, we use γ in place of γ_k in the rest of the paper. Additionally, proofs for all our
theoretical results are deferred to the Supplementary Material.
4 Impossibility Result
To prove our negative result showing that no streaming algorithm for our problem has a constant
approximation ratio against a worst case stream order, we first need to construct a weakly submodular
set function f_k. Later we use it to construct a bad instance for any given streaming algorithm.
Fix some k ≥ 1, and consider the ground set N_k = {u_i, v_i}_{i=1}^k. For ease of notation, let us define
for every subset S ⊆ N_k

    u(S) = |S ∩ {u_i}_{i=1}^k|,    v(S) = |S ∩ {v_i}_{i=1}^k|.

Now we define the following set function:

    f_k(S) = min{2 · u(S) + 1, 2 · v(S)}    for all S ⊆ N_k.
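This construction is easy to instantiate. The sketch below (our own; element labels such as "u0" are illustrative) implements f_k and shows the key property exploited in the proof: any set containing no v_i has value min(2·u(S) + 1, 0) = 0, so the u-elements behave exactly like dummy elements until some v_i arrives.

```python
def make_fk(k):
    """Build the hard instance f_k(S) = min{2*u(S) + 1, 2*v(S)}."""
    U = frozenset(f"u{i}" for i in range(k))
    V = frozenset(f"v{i}" for i in range(k))
    def fk(S):
        return min(2 * len(S & U) + 1, 2 * len(S & V))
    return fk, U, V

fk, U, V = make_fk(2)
print(fk(frozenset()))  # 0
print(fk(U))            # 0 -- all the u's together are still worthless
print(fk(V))            # 1
print(fk(U | V))        # 4 = 2k, the maximum value
```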
Lemma 4.1. f_k is nonnegative, monotone and 0.5-weakly submodular for the integer |N_k|.

Since |N_k| = 2k, the maximum value of f_k is f_k(N_k) = 2 · v(N_k) = 2k. We now extend the ground
set of f_k by adding to it an arbitrarily large number d of dummy elements which do not affect f_k at all.
Clearly, this does not affect the properties of f_k proved in Lemma 4.1. However, the introduction
of dummy elements allows us to assume that k is arbitrarily small compared to N, which is
necessary for the proof of the next theorem. In a nutshell, this proof is based on the observation that
the elements of {u_i}_{i=1}^k are indistinguishable from the dummy elements as long as no element of
{v_i}_{i=1}^k has arrived yet.
Theorem 4.2. For every constant c ∈ (0, 1] there is a large enough k such that no randomized
streaming algorithm that uses o(N) memory to solve max_{|S| ≤ 2k} f_k(S) has an approximation ratio
of c for a worst case stream order.
We note that f_k has strong properties. In particular, Lemma 4.1 implies that it is 0.5-weakly
submodular for every 0 ≤ r ≤ |N|. In contrast, the algorithm we show later assumes weak
submodularity only for the cardinality constraint k. Thus, the above theorem implies that worst
case stream order precludes a constant approximation ratio even for functions with much stronger
properties compared to what is necessary for getting a constant approximation ratio when the order is
random.
The proof of Theorem 4.2 relies critically on the fact that each element is seen exactly once. In
other words, once the algorithm decides to discard an element from its memory, this element is gone
forever, which is a standard assumption for streaming algorithms. Thus, the theorem does not apply
to algorithms that use multiple passes over N , or non-streaming algorithms that use o(N ) writable
memory, and their analysis remains an interesting open problem.
5 Streaming Algorithms
In this section we give a deterministic streaming algorithm for our problem which works in a model
in which the stream contains the elements of N in a random order. We first describe in Section 5.1
such a streaming algorithm assuming access to a value τ which approximates a · f(OPT), where a
is a shorthand for a = (√(2 − e^{−γ/2}) − 1)/2. Then, in Section 5.2 we explain how this assumption
can be removed to obtain STREAK and bound its approximation ratio, space complexity, and running
time.
5.1 Algorithm with access to τ
Consider Algorithm 1. In addition to the input instance, this algorithm gets a parameter τ ∈
[0, a · f(OPT)]. One should think of τ as close to a · f(OPT), although the following analysis
of the algorithm does not rely on it. We provide an outline of the proof, but defer the technical details
to the Supplementary Material.

Theorem 5.1. The expected value of the set produced by Algorithm 1 is at least

\[
\frac{\tau}{a} \cdot \frac{3 - e^{-\gamma/2} - 2\sqrt{2 - e^{-\gamma/2}}}{2} \;=\; \tau \cdot \left(\sqrt{2 - e^{-\gamma/2}} - 1\right).
\]
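The equality in the theorem is the algebraic identity 3 − e^{−γ/2} − 2√(2 − e^{−γ/2}) = (√(2 − e^{−γ/2}) − 1)², combined with a = (√(2 − e^{−γ/2}) − 1)/2. A quick numerical sanity check (our own snippet, not from the paper):

```python
import math

def theorem_51_forms(g):
    """Return both sides of the simplification in Theorem 5.1,
    normalized by tau: (1/a)*(3 - e^{-g/2} - 2x)/2 and (x - 1),
    where x = sqrt(2 - e^{-g/2}) and a = (x - 1)/2."""
    e = math.exp(-g / 2)
    x = math.sqrt(2 - e)
    a = (x - 1) / 2
    return (3 - e - 2 * x) / (2 * a), x - 1

for g in (0.1, 0.5, 1.0):
    lhs, rhs = theorem_51_forms(g)
    print(abs(lhs - rhs) < 1e-9)  # True for each gamma
```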
Algorithm 1 THRESHOLDGREEDY(f, k, τ)
  Let S ← ∅.
  while there are more elements do
    Let u be the next element.
    if |S| < k and f(u | S) ≥ τ/k then
      Update S ← S ∪ {u}.
    end if
  end while
  return: S
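For concreteness, Algorithm 1 is a few lines of Python. This is our own sketch with a toy modular objective, not the authors' implementation:

```python
def threshold_greedy(stream, f, k, tau):
    """One-pass Algorithm 1: accept an element iff the budget k is not
    exhausted and its marginal value clears the threshold tau/k."""
    S = []
    for u in stream:
        if len(S) < k and f(S + [u]) - f(S) >= tau / k:
            S.append(u)
    return S

# Toy run with a modular objective; the acceptance threshold is tau/k = 3.
f = lambda S: sum(S)
print(threshold_greedy([5, 1, 3, 4], f, k=2, tau=6))  # [5, 3]
```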
Algorithm 2 STREAK(f, k, ε)
  Let m ← 0, and let I be an (originally empty) collection of instances of Algorithm 1.
  while there are more elements do
    Let u be the next element.
    if f(u) > m then
      Update m ← f(u) and u_m ← u.
    end if
    Update I so that it contains an instance of Algorithm 1 with τ = x for every x ∈ {(1 − ε)^i | i ∈ Z and (1 − ε)m/(9k²) ≤ (1 − ε)^i ≤ mk}, as explained in Section 5.2.
    Pass u to all instances of Algorithm 1 in I.
  end while
  return: the best set among all the outputs of the instances of Algorithm 1 in I and the singleton set {u_m}.
Proof (Sketch). Let E be the event that f(S) < τ, where S is the output produced by Algorithm 1.
Clearly f(S) ≥ τ whenever E does not occur, and thus, it is possible to lower bound the expected
value of f(S) using E as follows.

Observation 5.2. Let S denote the output of Algorithm 1, then E[f(S)] ≥ (1 − Pr[E]) · τ.
The lower bound given by Observation 5.2 is decreasing in Pr[E]. Proposition 5.4 provides another
lower bound for E[f (S)] which increases with Pr[E]. An important ingredient of the proof of this
proposition is the next observation, which implies that the solution produced by Algorithm 1 is always
of size smaller than k when E happens.
Observation 5.3. If at some point Algorithm 1 has a set S of size k, then f(S) ≥ τ.
The proof of Proposition 5.4 is based on the above observation and on the observation that the random
arrival order implies that every time an element of OPT arrives in the stream we may assume it
is a random element out of all the OPT elements that did not arrive yet.
Proposition 5.4. For the set S produced by Algorithm 1,

\[
\mathbb{E}[f(S)] \;\ge\; \frac{1}{2} \cdot \left[\Pr[\mathcal{E}] - e^{-\gamma/2}\right] \cdot \left(f(OPT) - \frac{2\tau}{\gamma}\right).
\]
The theorem now follows by showing that for every possible value of Pr[E] the guarantee of the
theorem is implied by either Observation 5.2 or Proposition 5.4. Specifically, the former happens
when Pr[E] ≤ 2 − √(2 − e^{−γ/2}) and the latter when Pr[E] ≥ 2 − √(2 − e^{−γ/2}).
5.2 Algorithm without access to τ
In this section we explain how to get an algorithm which does not depend on τ. Instead, STREAK
(Algorithm 2) receives an accuracy parameter ε ∈ (0, 1). Then, it uses ε to run several instances of
Algorithm 1 stored in a collection denoted by I. The algorithm maintains two variables throughout its
execution: m is the maximum value of a singleton set corresponding to an element that the algorithm
already observed, and u_m references an arbitrary element satisfying f(u_m) = m.
The collection I is updated as follows after each element arrival. If previously I contained an instance
of Algorithm 1 with a given value for τ, and it no longer should contain such an instance, then the
instance is simply removed. In contrast, if I did not contain an instance of Algorithm 1 with a given
value for τ, and it should now contain such an instance, then a new instance with this value for τ is
created. Finally, if I contained an instance of Algorithm 1 with a given value for τ, and it should
continue to contain such an instance, then this instance remains in I as is.
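In other words, I always holds one instance per point of the geometric grid {(1 − ε)^i} that falls inside [(1 − ε)m/(9k²), mk]; since the grid is geometric, only O(ε⁻¹ log k) such points exist at any time. A sketch of this bookkeeping (our own helper, with assumed names):

```python
import math

def tau_grid(m, k, eps):
    """Thresholds tau = (1-eps)^i kept by STREAK when the largest
    singleton value seen so far is m: all powers of (1-eps) lying in
    [(1-eps)*m/(9*k^2), m*k]."""
    if m <= 0:
        return []
    base = 1 - eps
    lo, hi = base * m / (9 * k ** 2), m * k
    # (1-eps)^i <= hi  <=>  i >= log(hi)/log(1-eps), since log(1-eps) < 0
    i = math.ceil(math.log(hi) / math.log(base))
    taus = []
    while base ** i >= lo:
        taus.append(base ** i)
        i += 1
    return taus

grid = tau_grid(m=1.0, k=10, eps=0.5)
print(len(grid))             # 14 thresholds for these parameters
print(max(grid), min(grid))  # largest <= m*k, smallest >= (1-eps)*m/(9*k^2)
```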
Theorem 5.5. The approximation ratio of STREAK is at least

\[
(1 - \varepsilon) \cdot \frac{3 - e^{-\gamma/2} - 2\sqrt{2 - e^{-\gamma/2}}}{2}.
\]
The proof of Theorem 5.5 shows that in the final collection I there is an instance of Algorithm 1
whose τ provides a good approximation for a · f(OPT), and thus, this instance of Algorithm 1
should (up to some technical details) produce a good output set in accordance with Theorem 5.1.
It remains to analyze the space complexity and running time of STREAK. We concentrate on bounding
the number of elements STREAK keeps in its memory at any given time, as this amount dominates
the space complexity as long as we assume that the space necessary to keep an element is at least as
large as the space necessary to keep each one of the numbers used by the algorithm.
Theorem 5.6. The space complexity of STREAK is O(ε⁻¹ k log k) elements.

The running time of Algorithm 1 is O(N·f) where, abusing notation, f is the running time of a single
oracle evaluation of f. Therefore, the running time of STREAK is O(N·f·ε⁻¹ log k) since it uses at
every given time only O(ε⁻¹ log k) instances of the former algorithm. Given multiple threads, this
can be improved to O(N·f + ε⁻¹ log k) by running the O(ε⁻¹ log k) instances of Algorithm 1 in
parallel.
6 Experiments
We evaluate the performance of our streaming algorithm on two sparse feature selection applications.¹
Features are passed to all algorithms in a random order to match the setting of Section 5.
[Figure 1 plots: (a) Performance, (b) Cost]
Figure 1: Logistic Regression, Phishing dataset with pairwise feature products. Our algorithm is
comparable to LOCALSEARCH in both log likelihood and generalization accuracy, with much lower
running time and number of model fits in most cases. Results averaged over 40 iterations, error bars
show 1 standard deviation.
6.1 Sparse Regression with Pairwise Features
In this experiment, a sparse logistic regression is fit on 2000 training and 2000 test observations from
the Phishing dataset [Lichman, 2013]. This setup is known to be weakly submodular under mild data
assumptions [Elenberg et al., 2016a]. First, the categorical features are one-hot encoded, increasing
¹Code for these experiments is available at https://github.com/eelenberg/streak.
[Figure 2 plots: (a) Sparse Regression, (b) Interpretability]
Figure 2: 2(a): Logistic Regression, Phishing dataset with pairwise feature products, k = 80
features. By varying the parameter ε, our algorithm captures a time-accuracy tradeoff between
RANDOMSUBSET and LOCALSEARCH. Results averaged over 40 iterations, standard deviation
shown with error bars. 2(b): Running times of interpretability algorithms on the Inception V3
network, N = 30, k = 5. Streaming maximization runs 10 times faster than the LIME framework.
Results averaged over 40 total iterations using 8 example explanations, error bars show 1 standard
deviation.
the feature dimension to 68. Then, all pairwise products are added for a total of N = 4692 features.
To reduce computational cost, feature products are generated and added to the stream on-the-fly as
needed. We compare with 2 other algorithms. RANDOMSUBSET selects the first k features from
the random stream. LOCALSEARCH first fills a buffer with the first k features, and then swaps each
incoming feature with the feature from the buffer which yields the largest nonnegative improvement.
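The on-the-fly generation of product features can be sketched with a lazy generator, so the full quadratic design matrix is never materialized; the code below is a hypothetical illustration of the idea, not the code used in the experiments.

```python
def pairwise_feature_stream(columns):
    """Lazily yield ((i, j), column) for all pairwise products of the
    input feature columns, including squares (i == j)."""
    n = len(columns)
    for i in range(n):
        for j in range(i, n):
            yield (i, j), [a * b for a, b in zip(columns[i], columns[j])]

cols = [[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]]  # 3 features, 2 observations
pairs = list(pairwise_feature_stream(cols))
print(len(pairs))  # 6 product features generated from 3 original ones
print(pairs[1])    # ((0, 1), [2.0, 20.0])
```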
Figure 1(a) shows both the final log likelihood and the generalization accuracy for RANDOMSUBSET,
LOCALSEARCH, and our STREAK algorithm for ε ∈ {0.75, 0.1} and k ∈ {20, 40, 80}. As expected,
the RANDOMSUBSET algorithm has much larger variation since its performance depends highly on
the random stream order. It also performs significantly worse than LOCALSEARCH for both metrics,
whereas STREAK is comparable for most parameter choices. Figure 1(b) shows two measures of
computational cost: running time and the number of oracle evaluations (regression fits). We note
STREAK scales better as k increases; for example, STREAK with k = 80 and ε = 0.1 (ε = 0.75)
runs in about 70% (5%) of the time it takes to run LOCALSEARCH with k = 40. Interestingly, our
speedups are more substantial with respect to running time. In some cases STREAK actually fits
more regressions than LOCALSEARCH, but still manages to be faster. We attribute this to the fact
that nearly all of LOCALSEARCH's regressions involve k features, which are slower than many of
the small regressions called by STREAK.
Figure 2(a) shows the final log likelihood versus running time for k = 80 and ε ∈ [0.05, 0.75]. By
varying the precision ε, we achieve a gradual tradeoff between speed and performance. This shows
that STREAK can reduce the running time by over an order of magnitude with minimal impact on the
final log likelihood.
6.2 Black-Box Interpretability
Our next application is interpreting the predictions of black-box machine learning models. Specifically,
we begin with the Inception V3 deep neural network [Szegedy et al., 2016] trained on ImageNet. We
use this network for the task of classifying 5 types of flowers via transfer learning. This is done by
adding a final softmax layer and retraining the network.
We compare our approach to the LIME framework [Ribeiro et al., 2016] for developing sparse,
interpretable explanations. The final step of LIME is to fit a k-sparse linear regression in the space of
interpretable features. Here, the features are superpixels determined by the SLIC image segmentation
algorithm [Achanta et al., 2012] (regions from any other segmentation would also suffice). The
number of superpixels is bounded by N = 30. After a feature selection step, a final regression is
performed on only the selected features. The following feature selection methods are supplied by
LIME: 1. Highest Weights: fits a full regression and keeps the k features with the largest coefficients. 2.
Forward Selection: standard greedy forward selection. 3. Lasso: ℓ1 regularization.
We introduce a novel method for black-box interpretability that is similar to but simpler than LIME.
As before, we segment an image into N superpixels. Then, for a subset S of those regions we can
create a new image that contains only these regions and feed this into the black-box classifier. For a
given model M, an input image I, and a label L1 we ask for an explanation: why did model M label
image I with label L1? We propose the following solution to this problem. Consider the set function
f(S) giving the likelihood that image I(S) has label L1. We approximately solve

\[
\max_{|S| \le k} f(S),
\]

using STREAK. Intuitively, we are limiting the number of superpixels to k so that the output will
include only the most important superpixels, and thus, will represent an interpretable explanation. In
our experiments we set k = 5.
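A minimal sketch of this objective (our own toy, with an invented stand-in classifier rather than the Inception pipeline): mask the image down to the superpixels in S, run the black-box model, and read off the probability of the target label.

```python
def masked_image(pixels, segments, S):
    """Zero out every pixel whose superpixel id is not in S."""
    return [p if seg in S else 0.0 for p, seg in zip(pixels, segments)]

def make_objective(classify, pixels, segments, label):
    """f(S) = likelihood the black-box model assigns to `label`
    when only the superpixels in S remain visible."""
    def f(S):
        return classify(masked_image(pixels, segments, S))[label]
    return f

# Toy stand-in classifier: P(label 0) grows with the remaining image mass.
classify = lambda img: [sum(img) / 10.0, 1.0 - sum(img) / 10.0]
pixels = [1.0, 1.0, 1.0, 1.0]
segments = [0, 0, 1, 1]             # two superpixels, two pixels each
f = make_objective(classify, pixels, segments, label=0)
print(f(set()), f({0}), f({0, 1}))  # 0.0 0.2 0.4
```

As the toy run suggests, f depends entirely on the black-box model, so it is neither monotone nor submodular in general, which is exactly the regime STREAK targets.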
Note that the set function f (S) depends on the black-box classifier and is neither monotone nor
submodular in general. Still, we find that the greedy maximization algorithm produces very good
explanations for the flower classifier as shown in Figure 3 and the additional experiments in the
Supplementary Material. Figure 2(b) shows that our algorithm is much faster than the LIME approach.
This is primarily because LIME relies on generating and classifying a large set of randomly perturbed
example images.
7 Conclusions
We propose STREAK, the first streaming algorithm for maximizing weakly submodular functions,
and prove that it achieves a constant factor approximation assuming a random stream order. This
is useful when the set function is not submodular and, additionally, takes a long time to evaluate or
has a very large ground set. Conversely, we show that under a worst case stream order no algorithm
with memory sublinear in the ground set size has a constant factor approximation. We formulate
interpretability of black-box neural networks as set function maximization, and show that STREAK
provides interpretable explanations faster than previous approaches. We also show experimentally
that STREAK trades off accuracy and running time in nonlinear sparse regression.

One interesting direction for future work is to tighten the bounds of Theorems 5.1 and 5.5, which
are nontrivial but somewhat loose. For example, there is a gap between the theoretical guarantee
of the state-of-the-art algorithm for submodular functions and our bound for γ = 1. However, as
our algorithm performs the same computation as that state-of-the-art algorithm when the function
is submodular, this gap is solely an analysis issue. Hence, the real theoretical performance of our
algorithm is better than what we have been able to prove in Section 5.
8 Acknowledgments
This research has been supported by NSF Grants CCF 1344364, 1407278, 1422549, 1618689, ARO
YIP W911NF-14-1-0258, ISF Grant 1357/16, Google Faculty Research Award, and DARPA Young
Faculty Award (D16AP00046).
Figure 3: Comparison of interpretability algorithms for the Inception V3 deep neural network. We
have used transfer learning to extract features from Inception and train a flower classifier. In these
four input images the flower types were correctly classified (from (a) to (d): rose, sunflower, daisy,
and daisy). We ask the question of interpretability: why did this model classify this image as rose.
We are using our framework (and the recent prior work LIME [Ribeiro et al., 2016]) to see which
parts of the image the neural network is looking at for these classification tasks. As can be seen,
STREAK correctly identifies the flower parts of the images while some LIME variations do not. More
importantly, STREAK is creating subsampled images on-the-fly, and hence, runs approximately 10
times faster. Since interpretability tasks perform multiple calls to the black-box model, the running
times can be quite significant.
References
Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk.
SLIC Superpixels Compared to State-of-the-art Superpixel Methods. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 34(11):2274?2282, 2012.
Jason Altschuler, Aditya Bhaskara, Gang (Thomas) Fu, Vahab Mirrokni, Afshin Rostamizadeh,
and Morteza Zadimoghaddam. Greedy Column Subset Selection: New Bounds and Distributed
Algorithms. In ICML, pages 2539?2548, 2016.
Francis R. Bach. Learning with Submodular Functions: A Convex Optimization Perspective. Foundations and Trends in Machine Learning, 6, 2013.
Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming
Submodular Maximization: Massive Data Summarization on the Fly. In KDD, pages 671?680,
2014.
Sohail Bahmani, Bhiksha Raj, and Petros T. Boufounos. Greedy Sparsity-Constrained Optimization.
Journal of Machine Learning Research, 14:807?841, 2013.
Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. The Power of Randomization:
Distributed Submodular Maximization on Massive Datasets. In ICML, pages 1236?1244, 2015.
Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. A New Framework for
Distributed Submodular Maximization. In FOCS, pages 645?654, 2016.
Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, and Andreas Krause. Guaranteed
Non-convex Optimization: Submodular Maximization over Continuous Domains. In AISTATS,
pages 111?120, 2017.
Niv Buchbinder and Moran Feldman. Deterministic Algorithms for Submodular Maximization
Problems. In SODA, pages 392?403, 2016a.
Niv Buchbinder and Moran Feldman. Constrained Submodular Maximization via a Non-symmetric
Technique. CoRR, abs/1611.03253, 2016b. URL http://arxiv.org/abs/1611.03253.
Niv Buchbinder, Moran Feldman, and Roy Schwartz. Online Submodular Maximization with
Preemption. In SODA, pages 1202?1216, 2015.
Gruia Călinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a Monotone Submodular Function Subject to a Matroid Constraint. SIAM J. Comput., 40(6):1740–1766, 2011.
T-H. Hubert Chan, Zhiyi Huang, Shaofeng H.-C. Jiang, Ning Kang, and Zhihao Gavin Tang. Online
Submodular Maximization with Free Disposal: Randomization Beats 1/4 for Partition Matroids. In
SODA, pages 1204?1223, 2017.
Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming Algorithms for Submodular
Function Maximization. In ICALP, pages 318?330, 2015.
Michele Conforti and Gérard Cornuéjols. Submodular set functions, matroids and the greedy
algorithm: Tight worst-case bounds and some generalizations of the Rado-Edmonds theorem.
Discrete Applied Mathematics, 7(3):251?274, March 1984.
Abhimanyu Das and David Kempe. Submodular meets Spectral: Greedy Algorithms for Subset
Selection, Sparse Approximation and Dictionary Selection. In ICML, pages 1057?1064, 2011.
Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, and Sahand Negahban. Restricted
Strong Convexity Implies Weak Submodularity. CoRR, abs/1612.00804, 2016a. URL http://arxiv.org/abs/1612.00804.
Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, and Sahand Negahban. Restricted Strong
Convexity Implies Weak Submodularity. In NIPS Workshop on Learning in High Dimensions with
Structure, 2016b.
Uriel Feige. A Threshold of ln n for Approximating Set Cover. Journal of the ACM (JACM), 45(4):
634?652, 1998.
Marshall L. Fisher, George L. Nemhauser, and Laurence A. Wolsey. An analysis of approximations
for maximizing submodular set functions?II. In M. L. Balinski and A. J. Hoffman, editors,
Polyhedral Combinatorics: Dedicated to the memory of D.R. Fulkerson, pages 73?87. Springer
Berlin Heidelberg, Berlin, Heidelberg, 1978.
Avinatan Hassidim and Yaron Singer. Submodular Optimization Under Noise. In COLT, pages
1069?1122, 2017.
Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Batch Mode Active Learning and its
Application to Medical Image Classification. In ICML, pages 417?424, 2006.
Thibaut Horel and Yaron Singer. Maximization of Approximately Submodular Functions. In NIPS,
2016.
Rajiv Khanna, Ethan R. Elenberg, Alexandros G. Dimakis, Joydeep Ghosh, and Sahand Negahban.
On Approximation Guarantees for Greedy Low Rank Optimization. In ICML, pages 1837?1846,
2017a.
Rajiv Khanna, Ethan R. Elenberg, Alexandros G. Dimakis, Sahand Negahban, and Joydeep Ghosh.
Scalable Greedy Support Selection via Weak Submodularity. In AISTATS, pages 1560?1568,
2017b.
Andreas Krause and Volkan Cevher. Submodular Dictionary Selection for Sparse Representation. In
ICML, pages 567?574, 2010.
Andreas Krause and Daniel Golovin. Submodular Function Maximization. Tractability: Practical
Approaches to Hard Problems, 3:71?104, 2014.
Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed Submodular
Maximization: Identifying Representative Elements in Massive Data. NIPS, pages 2049?2057,
2013.
Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrák, and Andreas
Krause. Lazier Than Lazy Greedy. In AAAI, pages 1812?1818, 2015.
George L. Nemhauser and Laurence A. Wolsey. Best Algorithms for Approximating the Maximum
of a Submodular Set Function. Math. Oper. Res., 3(3):177?188, August 1978.
George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations
for maximizing submodular set functions?I. Mathematical Programming, 14(1):265?294, 1978.
Xinghao Pan, Stefanie Jegelka, Joseph E. Gonzalez, Joseph K. Bradley, and Michael I. Jordan.
Parallel Double Greedy Submodular Maximization. In NIPS, pages 118?126, 2014.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?" Explaining
the Predictions of Any Classifier. In KDD, pages 1135?1144, 2016.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic Attribution for Deep Networks. In
ICML, pages 3319?3328, 2017.
Maxim Sviridenko, Jan Vondrák, and Justin Ward. Optimal approximation for submodular and
supermodular optimization with bounded curvature. In SODA, pages 1134?1148, 2015.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking
the Inception Architecture for Computer Vision. In CVPR, pages 2818?2826, 2016.
Jan Vondrák. Submodularity and curvature: the optimal algorithm. RIMS Kôkyûroku Bessatsu B23,
pages 253?266, 2010.
Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in Data Subset Selection and Active Learning.
ICML, pages 1954?1963, 2015.
Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, and Tong Zhang. Sparse Nonlinear
Regression: Parameter Estimation and Asymptotic Inference. ICML, pages 2472?2481, 2016.
remains:3 previously:1 loose:1 singer:4 needed:1 end:4 generalizes:1 available:1 tochastic:1 xinghao:1 apply:2 spectral:1 batch:1 jols:2 slower:1 thomas:1 assumes:1 running:17 include:2 graphical:1 giving:1 especially:1 establish:1 approximating:3 classical:1 alinescu:2 sabine:1 avinatan:1 objective:3 question:3 already:1 added:2 moshe:1 mirrokni:1 nemhauser:6 gradient:1 berlin:2 rethinking:1 assuming:2 afshin:1 code:1 conforti:2 providing:1 ratio:21 setup:1 negative:1 wojna:1 design:1 zbigniew:1 summarization:2 perform:2 observation:9 datasets:1 jin:1 beat:1 d16ap00046:1 looking:1 ponte:2 sharp:1 arbitrary:3 august:1 sarkar:1 introduced:1 david:1 cast:1 namely:1 extensive:1 ethan:5 imagenet:1 kang:1 nip:5 able:1 bar:3 justin:3 flower:5 pattern:1 sparsity:1 baharan:4 interpretability:12 memory:10 explanation:8 max:4 hot:1 suitable:1 event:1 natural:1 rely:1 power:1 buhmann:1 zhu:1 improve:1 github:1 imply:1 identifies:1 created:1 categorical:1 stefanie:1 extract:1 prior:2 asymptotic:1 icalp:1 sublinear:2 interesting:2 wolsey:4 versus:1 ingredient:1 foundation:1 vanhoucke:1 rik:1 ubset:4 jegelka:1 thresholding:1 editor:1 classifying:2 austin:3 supported:1 qiqi:1 free:1 drastically:1 explaining:1 matroids:3 sparse:14 distributed:5 dimension:2 stand:1 forward:3 collection:4 openu:1 nguyen:2 ribeiro:6 tighten:1 transaction:1 approximate:1 obtains:2 vondr:5 forever:1 rafael:2 keep:4 ml:1 active:3 decides:1 incoming:1 ioffe:1 assumed:2 consuming:1 search:6 continuous:1 why:4 additionally:3 elenberg:9 transfer:2 ca:1 golovin:2 streak:15 alg:1 heidelberg:2 necessarily:1 domain:1 da:5 did:4 aistats:2 main:3 big:1 bounding:1 noise:1 arrival:2 huy:2 nothing:1 repeated:1 referred:1 representative:1 tong:1 precision:1 comput:1 young:1 tang:1 bhaskara:1 theorem:14 bad:1 specific:1 showing:3 moran:4 gupta:1 mukund:1 dominates:1 exists:1 intractable:1 workshop:1 adding:2 corr:2 maxim:1 magnitude:1 execution:1 iyer:1 nk:7 gap:2 reedy:2 morteza:1 led:1 simply:1 jacm:1 lazy:1 
aditya:1 contained:2 springer:1 zhuoran:1 corresponds:1 relies:2 acm:1 goal:1 jeff:1 fisher:4 hard:2 experimentally:2 specifically:3 determined:1 lemma:3 boufounos:1 total:3 called:3 pas:2 experimental:1 formally:1 support:1 evaluate:2 correlated:1 |
Successor Features for
Transfer in Reinforcement Learning
André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt,
Tom Schaul, David Silver, Hado van Hasselt
{andrebarreto,wdabney,munos,jjhunt,schaul,davidsilver,hado}@google.com
DeepMind
Abstract
Transfer in reinforcement learning refers to the notion that generalization should
occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the
environment's dynamics remain the same. Our approach rests on two key ideas:
successor features, a value function representation that decouples the dynamics of
the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming's policy improvement operation that considers
a set of policies rather than a single one. Put together, the two ideas lead to an
approach that integrates seamlessly within the reinforcement learning framework
and allows the free exchange of information across tasks. The proposed method
also provides performance guarantees for the transferred policy even before any
learning has taken place. We derive two theorems that set our approach in firm
theoretical ground and present experiments that show that it successfully promotes
transfer in practice, significantly outperforming alternative methods in a sequence
of navigation tasks and in the control of a simulated robotic arm.
1 Introduction
Reinforcement learning (RL) provides a framework for the development of situated agents that learn
how to behave while interacting with the environment [21]. The basic RL loop is defined in an abstract
way so as to capture only the essential aspects of this interaction: an agent receives observations
and selects actions to maximize a reward signal. This setup is generic enough to describe tasks of
different levels of complexity that may unroll at distinct time scales. For example, in the task of
driving a car, an action can be to turn the wheel, make a right turn, or drive to a given location.
Clearly, from the point of view of the designer, it is desirable to describe a task at the highest level of
abstraction possible. However, by doing so one may overlook behavioral patterns and inadvertently
make the task more difficult than it really is. The task of driving to a location clearly encompasses the
subtask of making a right turn, which in turn encompasses the action of turning the wheel. In learning
how to drive an agent should be able to identify and exploit such interdependencies. More generally,
the agent should be able to break a task into smaller subtasks and use knowledge accumulated in any
subset of those to speed up learning in related tasks. This process of leveraging knowledge acquired
in one task to improve performance on other tasks is called transfer [25, 11].
In this paper we look at one specific type of transfer, namely, when subtasks correspond to different
reward functions defined in the same environment. This setup is flexible enough to allow transfer
to happen at different levels. In particular, by appropriately defining the rewards one can induce
different task decompositions. For instance, the type of hierarchical decomposition involved in the
driving example above can be induced by changing the frequency at which rewards are delivered:
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
a positive reinforcement can be given after each maneuver that is well executed or only at the final
destination. Obviously, one can also decompose a task into subtasks that are independent of each
other or whose dependency is strictly temporal (that is, when tasks must be executed in a certain
order but no single task is clearly "contained" within another).
The types of task decomposition discussed above potentially allow the agent to tackle more complex
problems than would be possible were the tasks modeled as a single monolithic challenge. However,
in order to apply this divide-and-conquer strategy to its full extent the agent should have an explicit
mechanism to promote transfer between tasks. Ideally, we want a transfer approach to have two
important properties. First, the flow of information between tasks should not be dictated by a rigid
diagram that reflects the relationship between the tasks themselves, such as hierarchical or temporal
dependencies. On the contrary, information should be exchanged across tasks whenever useful.
Second, rather than being posed as a separate problem, transfer should be integrated into the RL
framework as much as possible, preferably in a way that is almost transparent to the agent.
In this paper we propose an approach for transfer that has the two properties above. Our method builds
on two conceptual pillars that complement each other. The first is a generalization of Dayan's [7]
successor representation. As the name suggests, in this representation scheme each state is described
by a prediction about the future occurrence of all states under a fixed policy. We present a generalization of Dayan's idea which extends the original scheme to continuous spaces and also facilitates the
use of approximation. We call the resulting scheme successor features. As will be shown, successor
features lead to a representation of the value function that naturally decouples the dynamics of the
environment from the rewards, which makes them particularly suitable for transfer.
The second pillar of our framework is a generalization of Bellman's [3] classic policy improvement
theorem that extends the original result from one to multiple decision policies. This novel result
shows how knowledge about a set of tasks can be transferred to a new task in a way that is completely
integrated within RL. It also provides performance guarantees on the new task before any learning
has taken place, which opens up the possibility of constructing a library of "skills" that can be reused
to solve previously unseen tasks. In addition, we present a theorem that formalizes the notion that an
agent should be able to perform well on a task if it has seen a similar task before, something clearly
desirable in the context of transfer. Combined, the two results above not only set our approach in
firm ground but also outline the mechanics of how to actually implement transfer. We build on this
knowledge to propose a concrete method and evaluate it in two environments, one encompassing a
sequence of navigation tasks and the other involving the control of a simulated two-joint robotic arm.
2 Background and problem formulation
As usual, we assume that the interaction between agent and environment can be modeled as a Markov
decision process (MDP, Puterman [19]). An MDP is defined as a tuple M ≡ (S, A, p, R, γ). The sets
S and A are the state and action spaces, respectively; here we assume that S and A are finite whenever
such an assumption facilitates the presentation, but most of the ideas readily extend to continuous
spaces. For each s ∈ S and a ∈ A the function p(·|s, a) gives the next-state distribution upon taking
action a in state s. We will often refer to p(·|s, a) as the dynamics of the MDP. The reward received at
a transition s →a s′ is given by the random variable R(s, a, s′); usually one is interested in the expected
value of this variable, which we will denote by r(s, a, s′) or by r(s, a) = E_{S′∼p(·|s,a)}[r(s, a, S′)].
The discount factor γ ∈ [0, 1) gives smaller weights to future rewards.
The objective of the agent in RL is to find a policy π, a mapping from states to actions, that
maximizes the expected discounted sum of rewards, also called the return G_t = Σ_{i=0}^∞ γ^i R_{t+i+1},
where R_t = R(S_t, A_t, S_{t+1}). One way to address this problem is to use methods derived from
dynamic programming (DP), which heavily rely on the concept of a value function [19]. The
action-value function of a policy π is defined as

Q^π(s, a) ≡ E^π[G_t | S_t = s, A_t = a],    (1)

where E^π[·] denotes expected value when following policy π. Once the action-value function of a
particular policy π is known, we can derive a new policy π′ which is greedy with respect to Q^π(s, a),
that is, π′(s) ∈ argmax_a Q^π(s, a). Policy π′ is guaranteed to be at least as good as (if not better than)
policy π. The computation of Q^π(s, a) and π′, called policy evaluation and policy improvement,
define the basic mechanics of RL algorithms based on DP; under certain conditions their successive
application leads to an optimal policy π* that maximizes the expected return from every s ∈ S [21].
In this paper we are interested in the problem of transfer, which we define as follows. Let T, T′ be
two sets of tasks such that T′ ⊂ T, and let t be any task. Then there is transfer if, after training on T,
the agent always performs as well or better on task t than if only trained on T′. Note that T′ can be
the empty set. In this paper a task will be defined as a specific instantiation of the reward function
R(s, a, s′) for a given MDP. In Section 4 we will revisit this definition and make it more formal.
3 Successor features
In this section we present the concept that will serve as a cornerstone for the rest of the paper. We
start by presenting a simple reward model and then show how it naturally leads to a generalization of
Dayan's [7] successor representation (SR).
Suppose that the expected one-step reward associated with transition (s, a, s′) can be computed as

r(s, a, s′) = φ(s, a, s′)^T w,    (2)

where φ(s, a, s′) ∈ R^d are features of (s, a, s′) and w ∈ R^d are weights. This assumption is not
restrictive because we are not making any assumptions about φ(s, a, s′): if we have φ_i(s, a, s′) =
r(s, a, s′) for some i, for example, we can clearly recover any reward function exactly. To simplify
the notation, let φ_t = φ(s_t, a_t, s_{t+1}). Then, by simply rewriting the definition of the action-value
function in (1) we have

Q^π(s, a) = E^π[r_{t+1} + γ r_{t+2} + ... | S_t = s, A_t = a]
          = E^π[φ_{t+1}^T w + γ φ_{t+2}^T w + ... | S_t = s, A_t = a]
          = E^π[Σ_{i=t}^∞ γ^{i−t} φ_{i+1} | S_t = s, A_t = a]^T w = ψ^π(s, a)^T w.    (3)
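To make the decomposition in (3) concrete, here is a minimal numeric sketch on a toy tabular MDP (all quantities below are invented for illustration; for simplicity the features depend only on the current state and each state has a single action, so the policy is trivial). We solve the linear Bellman equations in closed form and check that ψ^T w coincides with the value function obtained by standard policy evaluation:

```python
import numpy as np

# Toy 3-state cyclic MDP; illustrative data only.
n_states, d, gamma = 3, 2, 0.9
P = np.array([[0.0, 1.0, 0.0],   # next-state distribution per state
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
phi = np.random.RandomState(0).rand(n_states, d)  # features of the transition leaving s
w = np.array([0.5, -1.0])
r = phi @ w                                       # rewards via equation (2)

# Standard policy evaluation: V = r + gamma * P V  =>  V = (I - gamma P)^{-1} r
V = np.linalg.solve(np.eye(n_states) - gamma * P, r)

# Successor features: psi = phi + gamma * P psi  =>  psi = (I - gamma P)^{-1} phi
psi = np.linalg.solve(np.eye(n_states) - gamma * P, phi)

# Equation (3): the value function decouples into psi(s)^T w.
assert np.allclose(psi @ w, V)
```

By linearity the identity holds exactly: the dynamics enter only through ψ and the reward enters only through w.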
The decomposition (3) has appeared before in the literature under different names and interpretations,
as discussed in Section 6. Since here we propose to look at (3) as an extension of Dayan's [7] SR, we
call ψ^π(s, a) the successor features (SFs) of (s, a) under policy π.

The ith component of ψ^π(s, a) gives the expected discounted sum of φ_i when following policy π
starting from (s, a). In the particular case where S and A are finite and φ is a tabular representation
of S × A × S, that is, φ(s, a, s′) is a one-hot vector in R^{|S|²|A|}, ψ^π(s, a) is the discounted sum
of occurrences, under π, of each possible transition. This is essentially the concept of SR extended
from the space S to the set S × A × S [7].
One of the contributions of this paper is precisely to generalize SR to be used with function approximation, but the exercise of deriving the concept as above provides insights already in the tabular
case. To see this, note that in the tabular case the entries of w ∈ R^{|S|²|A|} are the function r(s, a, s′)
and suppose that r(s, a, s′) ≠ 0 in only a small subset W ⊂ S × A × S. From (2) and (3), it is
clear that the cardinality of W, and not of S × A × S, is what effectively defines the dimension of
the representation ψ^π, since there is no point in having d > |W|. Although this fact is hinted at by
Dayan [7], it becomes more apparent when we look at SR as a particular case of SFs.

SFs extend SR in two other ways. First, the concept readily applies to continuous state and action
spaces without any modification. Second, by explicitly casting (2) and (3) as inner products involving
feature vectors, SFs make it evident how to incorporate function approximation: as will be shown,
these vectors can be learned from data.
The representation in (3) requires two components to be learned, w and ψ^π. Since the latter is
the expected discounted sum of φ under π, we must either be given φ or learn it as well. Note
that approximating r(s, a, s′) ≈ φ(s, a, s′)^T w̃ is a supervised learning problem, so we can use
well-understood techniques from the field to learn w̃ (and potentially φ̃ too) [9]. As for ψ^π, we note
that

ψ^π(s, a) = φ_{t+1} + γ E^π[ψ^π(S_{t+1}, π(S_{t+1})) | S_t = s, A_t = a],    (4)

that is, SFs satisfy a Bellman equation in which φ_i play the role of rewards, something also noted
by Dayan [7] regarding SR. Therefore, in principle any RL method can be used to compute ψ^π [24].
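As one illustration of that last point, a vector-valued TD(0) rule mirroring equation (4) could look as follows in the tabular case. This is a sketch only: the environment dynamics below are random toy data and all names are invented, not from the paper.

```python
import numpy as np

# TD(0) learning of successor features via the Bellman equation (4).
n_states, n_actions, d, gamma, alpha = 4, 2, 3, 0.9, 0.1
rng = np.random.RandomState(1)
phi = rng.rand(n_states, n_actions, n_states, d)   # features of (s, a, s')
psi = np.zeros((n_states, n_actions, d))           # SF estimates psi(s, a)
pi = rng.randint(n_actions, size=n_states)         # fixed policy being evaluated

def td_update(s, a, s_next):
    """One step toward the target phi_{t+1} + gamma * psi(s', pi(s'))."""
    target = phi[s, a, s_next] + gamma * psi[s_next, pi[s_next]]
    psi[s, a] += alpha * (target - psi[s, a])

# Simulate transitions under toy random dynamics and apply the updates.
s = 0
for _ in range(5000):
    a = pi[s]
    s_next = rng.randint(n_states)
    td_update(s, a, s_next)
    s = s_next
```

The update is exactly Q-learning's TD rule with the scalar reward replaced by the feature vector, which is why any standard RL method applies.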
The SFs ψ^π summarize the dynamics induced by π in a given environment. As shown in (3), this
allows for a modular representation of Q^π in which the MDP's dynamics are decoupled from its
rewards, which are captured by the weights w. One potential benefit of having such a decoupled
representation is that only the relevant module must be relearned when either the dynamics or the
reward changes, which may serve as an argument in favor of adopting SFs as a general approximation
scheme for RL. However, in this paper we focus on a scenario where the decoupled value-function
approximation provided by SFs is exploited to its full extent, as we discuss next.
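The supervised step of estimating w mentioned above reduces, under the linear model of (2), to ordinary least squares over observed transition features and rewards. A hedged sketch with synthetic data (the dimensions, sample count, and variable names are all illustrative):

```python
import numpy as np

# Least-squares estimate of the task vector w from sampled transitions.
rng = np.random.RandomState(3)
d, n_samples = 4, 200
w_true = rng.randn(d)
Phi = rng.rand(n_samples, d)        # features phi(s, a, s') of observed transitions
rewards = Phi @ w_true              # observed rewards, per equation (2)

w_hat, *_ = np.linalg.lstsq(Phi, rewards, rcond=None)
assert np.allclose(w_hat, w_true, atol=1e-6)
```

In practice the rewards would be noisy and w_hat only approximate; that approximation error is exactly what the ε terms in the theorems of Section 4 account for.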
4 Transfer via successor features
We now return to the discussion about transfer in RL. As described, we are interested in the scenario
where all components of an MDP are fixed, except for the reward function. One way of formalizing
this model is through (2): if we suppose that φ ∈ R^d is fixed, any w ∈ R^d gives rise to a new MDP.
Based on this observation, we define

M^φ(S, A, p, γ) ≡ {M(S, A, p, r, γ) | r(s, a, s′) = φ(s, a, s′)^T w},    (5)

that is, M^φ is the set of MDPs induced by φ through all possible instantiations of w. Since what
differentiates the MDPs in M^φ is essentially the agent's goal, we will refer to M_i ∈ M^φ as a task.
The assumption is that we are interested in solving (a subset of) the tasks in the environment M^φ.
Definition (5) is a natural way of modeling some scenarios of interest. Think, for example, how the
desirability of water or food changes depending on whether an animal is thirsty or hungry. One way
to model this type of preference shifting is to suppose that the vector w appearing in (2) reflects the
taste of the agent at any given point in time [17]. Further in the paper we will present experiments
that reflect this scenario. For another illustrative example, imagine that the agent?s goal is to produce
and sell a combination of goods whose production line is relatively stable but whose prices vary
considerably over time. In this case updating the price of the products corresponds to picking a new
w. A slightly different way of motivating (5) is to suppose that the environment itself is changing,
that is, the element w_i indicates not only desirability, but also availability, of feature φ_i.
In the examples above it is desirable for the agent to build on previous experience to improve its
performance on a new setup. More concretely, if the agent knows good policies for the set of tasks
M ? {M1 , M2 , ..., Mn }, with Mi ? M? , it should be able to leverage this knowledge to improve
its behavior on a new task Mn+1 ?that is, it should perform better than it would had it been exposed
to only a subset of the original tasks, M0 ? M. We can assess the performance of an agent on
task Mn+1 based on the value function of the policy followed after wn+1 has become available but
before any policy improvement has taken place in Mn+1 .1 More precisely, suppose that an agent has
been exposed to each one of the tasks Mi ? M0 . Based on this experience, and on the new wn+1 ,
the agent computes a policy ? 0 that will define its initial behavior in Mn+1 . Now, if we repeat the
0
experience replacing M0 with M, the resulting policy ? should be such that Q? (s, a) ? Q? (s, a)
for all (s, a) ? S ? A.
Now that our setup is clear we can start to describe our solution for the transfer problem discussed
above. We do so in two stages. First, we present a generalization of DP's notion of policy improvement
whose interest may go beyond the current work. We then show how SFs can be used to implement
this generalized form of policy improvement in an efficient and elegant way.
4.1 Generalized policy improvement
One of the fundamental results in RL is Bellman's [3] policy improvement theorem. In essence, the
theorem states that acting greedily with respect to a policy's value function gives rise to another policy
whose performance is no worse than the former's. This is the driving force behind DP, and most RL
algorithms that compute a value function are exploiting Bellman's result in one way or another.
In this section we extend the policy improvement theorem to the scenario where the new policy is
to be computed based on the value functions of a set of policies. We show that this extension can
be done in a natural way, by acting greedily with respect to the maximum over the value functions
available. Our result is summarized in the theorem below.
¹ Of course w_{n+1} can, and will be, learned, as discussed in Section 4.2 and illustrated in Section 5. Here we
assume that w_{n+1} is given to make our performance criterion clear.
Theorem 1. (Generalized Policy Improvement) Let π_1, π_2, ..., π_n be n decision policies and let
Q̃^{π_1}, Q̃^{π_2}, ..., Q̃^{π_n} be approximations of their respective action-value functions such that

|Q^{π_i}(s, a) − Q̃^{π_i}(s, a)| ≤ ε for all s ∈ S, a ∈ A, and i ∈ {1, 2, ..., n}.    (6)

Define

π(s) ∈ argmax_a max_i Q̃^{π_i}(s, a).    (7)

Then,

Q^π(s, a) ≥ max_i Q^{π_i}(s, a) − (2 / (1 − γ)) ε    (8)

for any s ∈ S and a ∈ A, where Q^π is the action-value function of π.
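In the tabular case, the GPI policy of equation (7) is a one-liner over stacked Q-tables. A small sketch (the Q-values below are invented to make the argmax visible):

```python
import numpy as np

# Generalized policy improvement: for each state, pick the action with the
# highest value promised by ANY of the available Q-function approximations.
def gpi_policy(Q_tables):
    Q = np.stack(Q_tables)               # shape (n_policies, n_states, n_actions)
    return Q.max(axis=0).argmax(axis=1)  # pi(s) = argmax_a max_i Q_i(s, a)

Q1 = np.array([[1.0, 0.0], [0.0, 2.0]])  # policy 1 is strong in state 1
Q2 = np.array([[0.0, 3.0], [1.0, 0.0]])  # policy 2 is strong in state 0
pi = gpi_policy([Q1, Q2])
# State 0: elementwise max is [1, 3], so action 1; state 1: max is [1, 2], action 1.
assert pi.tolist() == [1, 1]
```

Note that max_i Q_i is generally not the value function of any single policy, which is exactly why the theorem's guarantee is non-trivial.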
The proofs of our theoretical results are in the supplementary material. As one can see, our theorem
covers the case where the policies' value functions are not computed exactly, either because function
approximation is used or because some exact algorithm has not been run to completion. This error is
captured by ε in (6), which re-appears as a penalty term in the lower bound (8). Such a penalty is
inherent to the presence of approximation in RL, and in fact it is identical to the penalty incurred in
the single-policy case (see e.g. Bertsekas and Tsitsiklis's Proposition 6.1 [5]).

In order to contextualize generalized policy improvement (GPI) within the broader scenario of DP,
suppose for a moment that ε = 0. In this case Theorem 1 states that π will perform no worse than
all of the policies π_1, π_2, ..., π_n. This is interesting because in general max_i Q^{π_i}, the function used
to induce π, is not the value function of any particular policy. It is not difficult to see that π will
be strictly better than all previous policies if no single policy dominates all other policies, that is,
if argmax_i max_a Q̃^{π_i}(s, a) ∩ argmax_i max_a Q̃^{π_i}(s′, a) = ∅ for some s, s′ ∈ S. If one policy does
dominate all others, GPI reduces to the original policy improvement theorem.

If we consider the usual DP loop, in which policies of increasing performance are computed in
sequence, our result is not of much use because the most recent policy will always dominate all others.
Another way of putting it is to say that after Theorem 1 is applied once, adding the resulting π to the
set {π_1, π_2, ..., π_n} will reduce the next improvement step to standard policy improvement, and thus
the policies π_1, π_2, ..., π_n can be simply discarded. There are however two situations in which our
result may be of interest. One is when we have many policies π_i being evaluated in parallel. In this
case GPI provides a principled strategy for combining these policies. The other situation in which
our result may be useful is when the underlying MDP changes, as we discuss next.
4.2 Generalized policy improvement with successor features
We start this section by extending our notation slightly to make it easier to refer to the quantities
involved in transfer learning. Let M_i be a task in M^φ defined by w_i ∈ R^d. We will use π_i* to refer
to an optimal policy of MDP M_i and use Q_i^{π_i*} to refer to its value function. The value function of π_i*
when executed in M_j ∈ M^φ will be denoted by Q_j^{π_i*}.

Suppose now that an agent has computed optimal policies for the tasks M_1, M_2, ..., M_n ∈ M^φ. Suppose further that when presented with a new task M_{n+1} the agent computes {Q_{n+1}^{π_1*}, Q_{n+1}^{π_2*}, ..., Q_{n+1}^{π_n*}},
the evaluation of each π_i* under the new reward function induced by w_{n+1}. In this case, applying the
GPI theorem to the newly-computed set of value functions will give rise to a policy that performs at
least as well as a policy based on any subset of these, including the empty set. Thus, this strategy
satisfies our definition of successful transfer.
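Putting the pieces together, re-evaluating stored policies on a new task costs one inner product per state-action pair, after which GPI selects the initial behavior. A toy sketch with random tabular SFs (all arrays and dimensions are illustrative, not from the paper):

```python
import numpy as np

# Transfer sketch: stored SFs psi_i from solved tasks are reused to evaluate
# the corresponding policies on a new task vector w_new.
rng = np.random.RandomState(2)
n_states, n_actions, d = 5, 3, 4
psis = [rng.rand(n_states, n_actions, d) for _ in range(3)]  # psi^{pi_i*}
w_new = rng.randn(d)

# Q_{n+1}^{pi_i*}(s, a) = psi^{pi_i*}(s, a)^T w_{n+1}: one product per policy.
Q_new = [psi @ w_new for psi in psis]

# GPI over the re-evaluated policies defines the initial behavior on the new task.
pi = np.stack(Q_new).max(axis=0).argmax(axis=1)
assert pi.shape == (n_states,)
```

No environment interaction is needed for the re-evaluation step itself; only the estimate of w_new, a supervised problem, touches new data.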
There is a caveat, though. Why would one waste time computing the value functions of π_1*, π_2*, ...,
π_n*, whose performance in M_{n+1} may be mediocre, if the same amount of resources can be allocated
to compute a sequence of n policies with increasing performance? This is where SFs come into play.
Suppose that we have learned the functions Q_i^{π_i*} using the representation scheme shown in (3). Now, if
the reward changes to r_{n+1}(s, a, s′) = φ(s, a, s′)^T w_{n+1}, as long as we have w_{n+1} we can compute
the new value function of π_i* by simply making Q_{n+1}^{π_i*}(s, a) = ψ^{π_i*}(s, a)^T w_{n+1}. This reduces the
computation of all Q_{n+1}^{π_i*} to the much simpler supervised problem of approximating w_{n+1}.

Once the functions Q_{n+1}^{π_i*} have been computed, we can apply GPI to derive a policy π whose
performance on M_{n+1} is no worse than the performance of π_1*, π_2*, ..., π_n* on the same task. A
question that arises in this case is whether we can provide stronger guarantees on the performance
of π by exploiting the structure shared by the tasks in M^φ. The following theorem answers this
question in the affirmative.
??
Theorem 2. Let Mi ? M? and let Qi j be the action-value function of an optimal policy of
?
?
?
? ?1 , Q
? ?2 , ..., Q
? ?n } such that
Mj ? M? when executed in Mi . Given approximations {Q
i
i
i
??
?
j
? ?j (s, a) ?
(9)
Qi (s, a) ? Q
i
?
? ?j (s, a). Finally, let
for all s ? S, a ? A, and j ? {1, 2, ..., n}, let ?(s) ? argmaxa maxj Q
i
?max = maxs,a ||?(s, a)||, where || ? || is the norm induced by the inner product adopted. Then,
??
Qi i (s, a) ? Q?i (s, a) ?
2
(?
minj ||wi ? wj || + ) .
1 ? ? max
(10)
Note that we used M_i rather than M_{n+1} in the theorem's statement to remove any suggestion of
order among the tasks. Theorem 2 is a specialization of Theorem 1 for the case where the set of value
functions used to compute π are associated with tasks in the form of (5). As such, it provides stronger
guarantees: instead of comparing the performance of π with that of the previously-computed policies
π_j, Theorem 2 quantifies the loss incurred by following π as opposed to one of M_i's optimal policies.

As shown in (10), the loss Q_i^{π_i*}(s, a) − Q_i^π(s, a) is upper-bounded by two terms. The term
2 φ_max min_j ||w_i − w_j|| / (1 − γ) is of more interest here because it reflects the structure of M^φ. This
term is a multiple of the distance between w_i, the vector describing the task we are currently interested
in, and the closest w_j for which we have computed a policy. This formalizes the intuition that the
agent should perform well in task w_i if it has solved a similar task before. More generally, the term in
question relates the concept of distance in R^d with difference in performance in M^φ. Note that this
correspondence depends on the specific set of features φ used, which raises the interesting question
of how to define φ such that tasks that are close in R^d induce policies that are also similar in some
sense. Regardless of how exactly φ is defined, the bound (10) allows for powerful extrapolations.
For example, by covering the relevant subspace of R^d with balls of appropriate radii centered at w_j
we can provide performance guarantees for any task w [14]. This corresponds to building a library of
options (or "skills") that can be used to solve any task in a (possibly infinite) set [22]. In Section 5
we illustrate this concept with experiments.
on the definition of SFs in any way. Here SFs are the mechanism used to efficiently apply the
protocol suggested by Theorem 2. When SFs are used the value function approximations are given by
?
? ?j? (s, a)> w
? ?j? are computed and stored when the agent is learning
? ?j (s, a) = ?
? i . The modules ?
Q
i
the tasks Mj ; when faced with a new task Mi the agent computes an approximation of wi , which is a
? ?i? . Note
supervised learning problem, and then uses the policy ? defined in Theorem 2 to learn ?
?
? ?j? and w
?i
that we do not assume that either ? ?j or wi is computed exactly: the effect of errors in ?
are accounted for by the term appearing in (9). As shown in (10), if is small and the agent has
seen enough tasks the performance of ? on Mi should already be good, which suggests it may also
? ?i? .
speed up the process of learning ?
Interestingly, Theorem 2 also provides guidance for some practical algorithmic choices. Since in an
actual implementation one wants to limit the number of SFs ψ̃^{π_j*} stored in memory, the corresponding
vectors w̃_j can be used to decide which ones to keep. For example, one can create a new ψ̃^{π_i*} only
when min_j ||w̃_i − w̃_j|| is above a given threshold; alternatively, once the maximum number of SFs
has been reached, one can replace ψ̃^{π_k*}, where k = argmin_j ||w̃_i − w̃_j|| (here w_i is the current task).
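One possible implementation of this bookkeeping, sketched with invented names and an invented threshold (the paper does not prescribe exact values, and the rule for a too-close task, here simply reusing the existing module, is one choice among several):

```python
import numpy as np

# Library management sketch: store a new SF module only when the new task
# vector is far from all stored ones; once full, evict the nearest neighbor.
def update_library(ws, psis, w_new, psi_new, threshold=0.5, max_size=10):
    if not ws or min(np.linalg.norm(w_new - w) for w in ws) > threshold:
        if len(ws) < max_size:
            ws.append(w_new)
            psis.append(psi_new)
        else:
            k = int(np.argmin([np.linalg.norm(w_new - w) for w in ws]))
            ws[k], psis[k] = w_new, psi_new

ws, psis = [], []
update_library(ws, psis, np.array([0.0, 0.0]), "psi_0")
update_library(ws, psis, np.array([0.1, 0.0]), "psi_1")  # too close: library reused
update_library(ws, psis, np.array([2.0, 0.0]), "psi_2")  # far enough: added
assert len(ws) == 2
```

The threshold trades memory for the tightness of the min_j ||w_i − w_j|| term in the bound (10): a finer covering of task space gives stronger guarantees at higher storage cost.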
5 Experiments
In this section we present our main experimental results. Additional details, along with further results
and analysis, can be found in Appendix B of the supplementary material.
The first environment we consider involves navigation tasks defined over a two-dimensional continuous space composed of four rooms (Figure 1). The agent starts in one of the rooms and must reach a
goal region located in the farthest room. The environment has objects that can be picked up by the
agent by passing over them. Each object belongs to one of three classes determining the associated
reward. The objective of the agent is to pick up the ?good? objects and navigate to the goal while
avoiding ?bad? objects. The rewards associated with object classes change at every 20 000 transitions,
giving rise to very different tasks (Figure 1). The goal is to maximize the sum of rewards accumulated
over a sequence of 250 tasks, with each task?s rewards sampled uniformly from [?1, 1]3 .
We defined a straightforward instantiation of our approach in which both w̃ and ψ̃ are computed
incrementally in order to minimize losses induced by (2) and (4). Every time the task changes the
current ψ̃^{π_i} is stored and a new ψ̃^{π_{i+1}} is created. We call this method SFQL as a reference to
the fact that SFs are learned through an algorithm analogous to Q-learning (QL), which is used as a
baseline in our comparisons [27]. As a more challenging reference point we report results for a
transfer method called probabilistic policy reuse [8]. We adopt a version of the algorithm that builds on
QL and reuses all policies learned. The resulting method, PRQL, is thus directly comparable to
SFQL. The details of QL, PRQL, and SFQL, including their pseudo-codes, are given in Appendix B.

Figure 1: Environment layout and some examples of optimal trajectories associated with specific
tasks. The shapes of the objects represent their classes; "S" is the start state and "G" is the goal.
We compared two versions of SFQL. In the first one, called SFQL-φ, we assume the agent has access
to features φ that perfectly predict the rewards, as in (2). The second version of our agent had to
learn an approximation φ̃ ∈ R^h directly from data collected by QL in the first 20 tasks. Note that
h may not coincide with the true dimension of φ, which in this case is 4; we refer to the different
instances of our algorithm as SFQL-h. The process of learning φ̃ followed the multi-task learning
protocol proposed by Caruana [6] and Baxter [2], and described in detail in Appendix B.
The results of our experiments can be seen in Figure 2. As shown, all versions of SFQL significantly outperform the other two methods, with an improvement on the average return of more than 100% when compared to PRQL, which itself improves on QL by around 100%. Interestingly, SFQL-h seems to achieve good overall performance faster than SFQL-φ, even though the latter uses features that allow for an exact representation of the rewards. One possible explanation is that, unlike their counterparts φi, the features φ̃i are activated over most of the space S × A × S, which results in a dense pseudo-reward signal that facilitates learning.
The second environment we consider is a set of control tasks defined in the MuJoCo physics
engine [26]. Each task consists in moving a two-joint torque-controlled simulated robotic arm to a
Figure 2: Average and cumulative return per task in the four-room domain (curves: SFQL-8, SFQL-φ / SFQL-4, PRQL, Q-Learning). SFQL-h receives no reward during the first 20 tasks while learning φ̃. Error-bands show one standard error over 30 runs.
Figure 3: Normalized return on the reacher domain (curves: SFDQN and DQN, tasks 1–4): "1" corresponds to the average result achieved by DQN after learning each task separately and "0" corresponds to the average performance of a randomly-initialized agent (see Appendix B for details). (a) Performance on training tasks (faded dotted lines in the background are DQN's results). (b) Average performance on test tasks. (c) Colored and gray circles depict training and test targets, respectively. SFDQN's results were obtained using the GPI policies πi(s) defined in the text. Shading shows one standard error over 30 runs.
specific target location; thus, we refer to this environment as "the reacher domain." We defined 12
tasks, but only allowed the agents to train in 4 of them (Figure 3c). This means that the agent must be
able to perform well on tasks that it has never experienced during training.
In order to solve this problem, we adopted essentially the same algorithm as above, but we replaced QL with Mnih et al.'s DQN, both as a baseline and as the basic engine underlying the SF agent [15]. The resulting method, which we call SFDQN, is an illustration of how our method can be naturally combined with complex nonlinear approximators such as neural networks. The features φi used by SFDQN are the negation of the distances to the center of the 12 target regions. As usual in experiments of this type, we give the agents a description of the current task: for DQN the target coordinates are given as inputs, while for SFDQN this is provided as a one-hot vector wt ∈ R^12 [12]. Unlike in the previous experiment, in the current setup each transition was used to train all four ψ̃^πi through losses derived from (4). Here πi is the GPI policy on the ith task: πi(s) ∈ argmax_a max_j ψ̃^πj(s, a)^⊤ wi.
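The GPI action selection argmax_a max_j ψ̃_j(s, a)^⊤ w can be sketched as below; the array shapes and numbers are illustrative assumptions of ours:

```python
import numpy as np

def gpi_action(psi_s, w_task):
    """GPI at one state: pick argmax_a max_j psi~_j(s, a)^T w_task.

    psi_s:  array of shape (n_policies, n_actions, d) with the SFs of the
            previously learned policies evaluated at state s.
    w_task: array of shape (d,) describing the current task.
    """
    q = psi_s @ w_task                    # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))  # best action under the max over policies

# Two stored policies, three actions, d = 2 (all numbers illustrative):
psi_s = np.array([[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
                  [[0.0, 0.2], [0.9, 0.0], [0.1, 0.1]]])
w_task = np.array([0.0, 1.0])
print(gpi_action(psi_s, w_task))  # -> 1
```

Note that the max over policies is taken per action before the argmax, so an action can be selected even if no single stored policy dominates everywhere.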
Results are shown in Figures 3a and 3b. Looking at the training curves, we see that whenever a task is selected for training SFDQN's return on that task quickly improves and saturates at near-optimal performance. The interesting point to be noted is that, when learning a given task, SFDQN's performance also improves in all other tasks, including the test ones, for which it does not have specialized policies. This illustrates how the combination of SFs and GPI can give rise to flexible agents able to perform well in any task of a set of tasks with shared dynamics, which in turn can be seen as both a form of temporal abstraction and a step towards more general hierarchical RL [22, 1].
6 Related work
Mehta et al.'s [14] approach for transfer learning is probably the closest work to ours in the literature. There are important differences, though. First, Mehta et al. [14] assume that both φ and w are always observable quantities provided by the environment. They also focus on average reward RL, in which the quality of a decision policy can be characterized by a single scalar. This reduces the process of selecting a policy for a task to one decision made at the outset, which is in clear contrast with GPI.
The literature on transfer learning has other methods that relate to ours [25, 11]. Among the algorithms designed for the scenario considered here, two approaches are particularly relevant because they also reuse old policies. One is Fernández et al.'s [8] probabilistic policy reuse, adopted in our experiments and described in Appendix B. The other approach, by Bernstein [4], corresponds to using our method but relearning all ψ̃^πi from scratch at each new task.
When we look at SFs strictly as a representation scheme, there are clear similarities with Littman et al.'s [13] predictive state representation (PSR). Unlike SFs, though, PSR tries to summarize the dynamics of the entire environment rather than of a single policy π. A scheme that is perhaps closer to SFs is the value function representation sometimes adopted in inverse RL [18].
SFs are also related to Sutton et al.'s [23] general value functions (GVFs), which extend the notion of value function to also include "pseudo-rewards." If we see φi as a pseudo-reward, ψ_i^π(s, a) becomes a particular case of GVF. Beyond the technical similarities, the connection between SFs and GVFs uncovers some principles underlying both lines of work that, when contrasted, may benefit both. On one hand, Sutton et al.'s [23] and Modayil et al.'s [16] hypothesis that relevant knowledge about the world can be expressed as many predictions naturally translates to SFs: if φ is expressive enough, the agent should be able to represent any relevant reward function. Conversely, SFs not only provide a concrete way of using this knowledge, they also suggest a possible criterion to select the pseudo-rewards φi, since ultimately we are only interested in features that help in the approximation φ(s, a, s′)^⊤ w̃ ≈ r(s, a, s′).
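For instance, this approximation criterion can be instantiated as a simple least-squares fit of w̃ to observed rewards; the data below are a hypothetical illustration of ours, not from the paper:

```python
import numpy as np

# Hypothetical transitions: rows are feature vectors phi(s, a, s'),
# paired with the rewards r observed on those transitions.
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
r = np.array([2.0, -1.0, 1.0])

# w~ minimizing ||Phi w - r||^2:
w_tilde, *_ = np.linalg.lstsq(Phi, r, rcond=None)
print(w_tilde)  # here r = Phi @ [2, -1], so the fit recovers w = [2, -1]
```

In practice the fit would be incremental and the features themselves may be learned, as in the SFQL-h experiments above.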
Another generalization of value functions that is related to SFs is Schaul et al.'s [20] universal value function approximators (UVFAs). UVFAs extend the notion of value function to also include as an argument an abstract representation of a "goal," which makes them particularly suitable for transfer. The function max_j ψ̃^πj(s, a)^⊤ w̃ used in our framework can be seen as a function of s, a, and w (the latter a generic way of representing a goal), and thus in some sense this representation is a UVFA. This connection raises an interesting point: since under this interpretation w̃ is simply the description of a task, it can in principle be a direct function of the observations, which opens up the possibility of the agent determining w̃ even before seeing any rewards.
As discussed, our approach is also related to temporal abstraction and hierarchical RL: if we look at the policies πi as instances of Sutton et al.'s [22] options, acting greedily with respect to the maximum over their value functions corresponds in some sense to planning at a higher level of temporal abstraction (that is, each ψ^πi(s, a) is associated with an option that terminates after a single step). This is the view adopted by Yao et al. [28], whose universal option model closely resembles our approach in some aspects (the main difference being that they do not do GPI).
Finally, there have been previous attempts to combine SR and neural networks. Kulkarni et al. [10] and Zhang et al. [29] propose similar architectures to jointly learn ψ̃^π(s, a), φ̃(s, a, s′) and w̃. Although neither work exploits SFs for GPI, they both discuss other uses of SFs for transfer. In principle the proposed (or similar) architectures can also be used within our framework.
7 Conclusion
This paper builds on two concepts, both of which are generalizations of previous ideas. The first one is SFs, a generalization of Dayan's [7] SR that extends the original definition from discrete to continuous spaces and also facilitates the use of function approximation. The second concept is GPI, formalized in Theorem 1. As the name suggests, this result extends Bellman's [3] classic policy improvement theorem from a single to multiple policies.
Although SFs and GPI are of interest on their own, in this paper we focus on their combination to induce transfer. The resulting framework is an elegant extension of DP's basic setting that provides a solid foundation for transfer in RL. As a complement to the proposed transfer approach, we derived a theoretical result, Theorem 2, that formalizes the intuition that an agent should perform well on a novel task if it has seen a similar task before. We also illustrated with a comprehensive set of experiments how the combination of SFs and GPI promotes transfer in practice.
We believe the proposed ideas lay out a general framework for transfer in RL. By specializing the basic components presented one can build on our results to derive agents able to perform well across a wide variety of tasks, and thus extend the range of environments that can be successfully tackled.
Acknowledgments
The authors would like to thank Joseph Modayil for the invaluable discussions during the development
of the ideas described in this paper. We also thank Peter Dayan, Matt Botvinick, Marc Bellemare,
and Guy Lever for the excellent comments, and Dan Horgan and Alexander Pritzel for their help with
the experiments. Finally, we thank the anonymous reviewers for their comments and suggestions to
improve the paper.
References
[1] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
[2] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
[3] Richard E. Bellman. Dynamic Programming. Princeton University Press, 1957.
[4] Daniel S. Bernstein. Reusing old policies to accelerate learning on new MDPs. Technical report, Amherst, MA, USA, 1999.
[5] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[6] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[7] Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
[8] Fernando Fernández, Javier García, and Manuela Veloso. Probabilistic policy reuse for inter-task transfer learning. Robotics and Autonomous Systems, 58(7):866–871, 2010.
[9] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2002.
[10] Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.
[11] Alessandro Lazaric. Transfer in reinforcement learning: A framework and a survey. Reinforcement Learning: State-of-the-Art, pages 143–173, 2012.
[12] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[13] Michael L. Littman, Richard S. Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems (NIPS), pages 1555–1561, 2001.
[14] Neville Mehta, Sriraam Natarajan, Prasad Tadepalli, and Alan Fern. Transfer in variable-reward hierarchical reinforcement learning. Machine Learning, 73(3), 2008.
[15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[16] Joseph Modayil, Adam White, and Richard S. Sutton. Multi-timescale nexting in a reinforcement learning robot. Adaptive Behavior, 22(2):146–160, 2014.
[17] Sriraam Natarajan and Prasad Tadepalli. Dynamic preferences in multi-criteria reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 601–608, 2005.
[18] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 663–670, 2000.
[19] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
[20] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning (ICML), pages 1312–1320, 2015.
[21] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[22] Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[23] Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In International Conference on Autonomous Agents and Multiagent Systems, pages 761–768, 2011.
[24] Csaba Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[25] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(1):1633–1685, 2009.
[26] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5026–5033, 2012.
[27] Christopher Watkins and Peter Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
[28] Hengshuai Yao, Csaba Szepesvári, Richard S. Sutton, Joseph Modayil, and Shalabh Bhatnagar. Universal option models. In Advances in Neural Information Processing Systems (NIPS), pages 990–998, 2014.
[29] Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, and Wolfram Burgard. Deep reinforcement learning with successor features for navigation across similar environments. CoRR, abs/1612.05533, 2016.
Counterfactual Fairness
Matt Kusner*
The Alan Turing Institute and
University of Warwick
[email protected]
Joshua Loftus*
New York University
[email protected]
Chris Russell*
The Alan Turing Institute and
University of Surrey
[email protected]
Ricardo Silva
The Alan Turing Institute and
University College London
[email protected]
Abstract
Machine learning can impact people with legal or ethical consequences when
it is used to automate decisions in areas such as insurance, lending, hiring, and
predictive policing. In many of these scenarios, previous decisions have been made
that are unfairly biased against certain subpopulations, for example those of a
particular race, gender, or sexual orientation. Since this past data may be biased,
machine learning predictors must account for this to avoid perpetuating or creating
discriminatory practices. In this paper, we develop a framework for modeling
fairness using tools from causal inference. Our definition of counterfactual fairness
captures the intuition that a decision is fair towards an individual if it is the same in
(a) the actual world and (b) a counterfactual world where the individual belonged
to a different demographic group. We demonstrate our framework on a real-world
problem of fair prediction of success in law school.
1 Contribution
Machine learning has spread to fields as diverse as credit scoring [20], crime prediction [5], and loan
assessment [25]. Decisions in these areas may have ethical or legal implications, so it is necessary for
the modeler to think beyond the objective of maximizing prediction accuracy and consider the societal
impact of their work. For many of these applications, it is crucial to ask if the predictions of a model
are fair. Training data can contain unfairness for reasons having to do with historical prejudices or
other factors outside an individual's control. In 2016, the Obama administration released a report² which urged data scientists to analyze "how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination."
There has been much recent interest in designing algorithms that make fair predictions [4, 6, 10, 12, 14, 16–19, 22, 24, 36–39]. In large part, the literature has focused on formalizing fairness
into quantitative definitions and using them to solve a discrimination problem in a certain dataset.
Unfortunately, for a practitioner, law-maker, judge, or anyone else who is interested in implementing
algorithms that control for discrimination, it can be difficult to decide which definition of fairness to
choose for the task at hand. Indeed, we demonstrate that depending on the relationship between a
protected attribute and the data, certain definitions of fairness can actually increase discrimination.
* Equal contribution. This work was done while JL was a Research Fellow at the Alan Turing Institute.
² https://obamawhitehouse.archives.gov/blog/2016/05/04/big-risks-big-opportunities-intersection-big-data-and-civil-rights
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we introduce the first explicitly causal approach to address fairness. Specifically, we
leverage the causal framework of Pearl [30] to model the relationship between protected attributes
and data. We describe how techniques from causal inference can be effective tools for designing fair
algorithms and argue, as in DeDeo [9], that it is essential to properly address causality in fairness. In
perhaps the most closely related prior work, Johnson et al. [15] make similar arguments but from a
non-causal perspective. An alternative use of causal modeling in the context of fairness is introduced
independently by [21].
In Section 2, we provide a summary of basic concepts in fairness and causal modeling. In Section 3,
we provide the formal definition of counterfactual fairness, which enforces that a distribution over
possible predictions for an individual should remain unchanged in a world where an individual's
protected attributes had been different in a causal sense. In Section 4, we describe an algorithm to
implement this definition, while distinguishing it from existing approaches. In Section 5, we illustrate
the algorithm with a case of fair assessment of law school success.
2 Background
This section provides a basic account of two separate areas of research in machine learning, which
are formally unified in this paper. We suggest Berk et al. [1] and Pearl et al. [29] as references.
Throughout this paper, we will use the following notation. Let A denote the set of protected attributes
of an individual, variables that must not be discriminated against in a formal sense defined differently
by each notion of fairness discussed. The decision of whether an attribute is protected or not is taken
as a primitive in any given problem, regardless of the definition of fairness adopted. Moreover, let
X denote the other observable attributes of any particular individual, U the set of relevant latent
attributes which are not observed, and let Y denote the outcome to be predicted, which itself might
be contaminated with historical biases. Finally, Ŷ is the predictor, a random variable that depends on
A, X and U , and which is produced by a machine learning algorithm as a prediction of Y .
2.1 Fairness
There has been much recent work on fair algorithms. These include fairness through unawareness
[12], individual fairness [10, 16, 24, 38], demographic parity/disparate impact [36], and equality of
opportunity [14, 37]. For simplicity we often assume A is encoded as a binary attribute, but this can
be generalized.
Definition 1 (Fairness Through Unawareness (FTU)). An algorithm is fair so long as any protected
attributes A are not explicitly used in the decision-making process.
Any mapping Ŷ : X → Y that excludes A satisfies this. Initially proposed as a baseline, the approach
has found favor recently with more general approaches such as Grgic-Hlaca et al. [12]. Despite its
compelling simplicity, FTU has a clear shortcoming as elements of X can contain discriminatory
information analogous to A that may not be obvious at first. The need for expert knowledge in
assessing the relationship between A and X was highlighted in the work on individual fairness:
Definition 2 (Individual Fairness (IF)). An algorithm is fair if it gives similar predictions to similar individuals. Formally, given a metric d(·, ·), if individuals i and j are similar under this metric (i.e., d(i, j) is small) then their predictions should be similar: Ŷ(X^(i), A^(i)) ≈ Ŷ(X^(j), A^(j)).
As described in [10], the metric d(·, ·) must be carefully chosen, requiring an understanding of the
domain at hand beyond black-box statistical modeling. This can also be contrasted against population
level criteria such as
Definition 3 (Demographic Parity (DP)). A predictor Ŷ satisfies demographic parity if P(Ŷ | A = 0) = P(Ŷ | A = 1).
Definition 4 (Equality of Opportunity (EO)). A predictor Ŷ satisfies equality of opportunity if P(Ŷ = 1 | A = 0, Y = 1) = P(Ŷ = 1 | A = 1, Y = 1).
These criteria can be incompatible in general, as discussed in [1, 7, 22]. Following the motivation of
IF and [15], we propose that knowledge about relationships between all attributes should be taken
into consideration, even if strong assumptions are necessary. Moreover, it is not immediately clear
for any of these approaches in which ways historical biases can be tackled. We approach such issues
from an explicit causal modeling perspective.
2.2 Causal Models and Counterfactuals
We follow Pearl [28], and define a causal model as a triple (U, V, F) of sets such that
• U is a set of latent background variables, which are factors not caused by any variable in
the set V of observable variables;
• F is a set of functions {f_1, . . . , f_n}, one for each V_i ∈ V, such that V_i = f_i(pa_i, U_{pa_i}),
pa_i ⊆ V \ {V_i} and U_{pa_i} ⊆ U. Such equations are also known as structural equations [2].
The notation "pa_i" refers to the "parents" of V_i and is motivated by the assumption that the model
factorizes as a directed graph, here assumed to be a directed acyclic graph (DAG). The model is causal
in that, given a distribution P(U) over the background variables U, we can derive the distribution of a
subset Z ⊆ V following an intervention on V \ Z. An intervention on variable V_i is the substitution
of equation V_i = f_i(pa_i, U_{pa_i}) with the equation V_i = v for some v. This captures the idea of an
agent, external to the system, modifying it by forcefully assigning value v to V_i, for example as in a
randomized experiment.
The specification of F is a strong assumption but allows for the calculation of counterfactual
quantities. In brief, consider the following counterfactual statement, "the value of Y if Z had taken
value z", for two observable variables Z and Y. By assumption, the state of any observable variable is
fully determined by the background variables and structural equations. The counterfactual is modeled
as the solution for Y for a given U = u where the equations for Z are replaced with Z = z. We
denote it by Y_{Z←z}(u) [28], and sometimes as Y_z if the context of the notation is clear.
Counterfactual inference, as specified by a causal model (U, V, F) given evidence W, is the computation of probabilities P(Y_{Z←z}(U) | W = w), where W, Z and Y are subsets of V. Inference proceeds
in three steps, as explained in more detail in Chapter 4 of Pearl et al. [29]: 1. Abduction: for a given
prior on U, compute the posterior distribution of U given the evidence W = w; 2. Action: substitute
the equations for Z with the interventional values z, resulting in the modified set of equations F_z;
3. Prediction: compute the implied distribution on the remaining elements of V using F_z and the
posterior P(U | W = w).
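The three steps can be made concrete on a toy deterministic model. The structural equations below (X = 2A + U_X, Y = 3X + U_Y) are invented for illustration only and are not taken from the paper; the sketch shows abduction, action, and prediction in order.

```python
def f_x(a, u_x):
    # Structural equation for X (toy model): X = 2*A + U_X.
    return 2 * a + u_x

def f_y(x, u_y):
    # Structural equation for Y (toy model): Y = 3*X + U_Y.
    return 3 * x + u_y

def counterfactual_y(a_obs, x_obs, y_obs, a_cf):
    """Compute Y_{A<-a_cf}(u) given the evidence (a_obs, x_obs, y_obs)."""
    # 1. Abduction: recover the background variables U from the evidence.
    u_x = x_obs - 2 * a_obs
    u_y = y_obs - 3 * x_obs
    # 2. Action: replace the equation for A with the constant a_cf.
    x_cf = f_x(a_cf, u_x)
    # 3. Prediction: propagate through the remaining structural equations.
    return f_y(x_cf, u_y)

# Factual world: A = 1 with U_X = 0.5, U_Y = 1.0, so X = 2.5 and Y = 8.5.
y_cf = counterfactual_y(a_obs=1, x_obs=2.5, y_obs=8.5, a_cf=0)
# y_cf == 2.5: the counterfactual X is 0.5, so Y = 3*0.5 + 1.0 = 2.5.
```

Because the toy model is deterministic, abduction recovers U exactly; with stochastic equations the same recipe yields a posterior over U instead of a point value.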
3 Counterfactual Fairness
Given a predictive problem with fairness considerations, where A, X and Y represent the protected
attributes, remaining attributes, and output of interest respectively, let us assume that we are given a
causal model (U, V, F), where V ≡ A ∪ X. We postulate the following criterion for predictors of Y.
Definition 5 (Counterfactual fairness). Predictor Ŷ is counterfactually fair if under any context
X = x and A = a,
P(Ŷ_{A←a}(U) = y | X = x, A = a) = P(Ŷ_{A←a′}(U) = y | X = x, A = a),   (1)
for all y and for any value a′ attainable by A.
This notion is closely related to actual causes [13], or token causality, in the sense that, to be fair,
A should not be a cause of Ŷ in any individual instance. In other words, changing A while holding
things which are not causally dependent on A constant will not change the distribution of Ŷ. We also
emphasize that counterfactual fairness is an individual-level definition. This is substantially different
from comparing different individuals that happen to share the same "treatment" A = a and coincide
on the values of X, as discussed in Section 4.3.1 of [29] and the Supplementary Material. Differences
between X_a and X_{a′} must be caused by variations on A only. Notice also that this definition is
agnostic with respect to how good a predictor Ŷ is, which we discuss in Section 4.
Relation to individual fairness. IF is agnostic with respect to its notion of similarity metric, which
is both a strength (generality) and a weakness (no unified way of defining similarity). Counterfactuals
and similarities are related, as in the classical notion of distances between "worlds" corresponding to
different counterfactuals [23]. If Ŷ is a deterministic function of W ⊆ A ∪ X ∪ U, as in several of
Figure 1: (a), (b) Two causal models for different real-world fair prediction scenarios. See Section 3.1
for discussion. (c) The graph corresponding to a causal model with A being the protected attribute and
Y some outcome of interest, with background variables assumed to be independent. (d) Expanding
the model to include an intermediate variable indicating whether the individual is employed, with
two (latent) background variables Prejudiced (if the person offering the job is prejudiced) and
Qualifications (a measure of the individual's qualifications). (e) A twin network representation of
this system [28] under two different counterfactual levels for A. This is created by copying nodes
descending from A, which inherit unaffected parents from the factual world.
our examples to follow, then IF can be defined by treating equally two individuals with the same W
in a way that is also counterfactually fair.
Relation to Pearl et al. [29]. In Example 4.4.4 of [29], the authors condition instead on X, A, and
the observed realization of Ŷ, and calculate the probability of the counterfactual realization Ŷ_{A←a′}
differing from the factual. This example conflates the predictor Ŷ with the outcome Y, of which
we remain agnostic in our definition but which is used in the construction of Ŷ as in Section 4. Our
framing makes the connection to machine learning more explicit.
3.1 Examples
To provide an intuition for counterfactual fairness, we will consider two real-world fair prediction
scenarios: insurance pricing and crime prediction. Each of these corresponds to one of the two causal
graphs in Figure 1(a),(b). The Supplementary Material provides a more mathematical discussion of
these examples with more detailed insights.
Scenario 1: The Red Car. A car insurance company wishes to price insurance for car owners
by predicting their accident rate Y. They assume there is an unobserved factor corresponding to
aggressive driving U, that (a) causes drivers to be more likely to have an accident, and (b) causes
individuals to prefer red cars (the observed variable X). Moreover, individuals belonging to a
certain race A are more likely to drive red cars. However, these individuals are no more likely to be
aggressive or to get in accidents than anyone else. We show this in Figure 1(a). Thus, using the
red car feature X to predict accident rate Y would seem to be an unfair prediction because it may
charge individuals of a certain race more than others, even though no race is more likely to have an
accident. Counterfactual fairness agrees with this notion: changing A while holding U fixed will also
change X and, consequently, Ŷ. Interestingly, we can show (Supplementary Material) that in a linear
model, regressing Y on A and X is equivalent to regressing on U, so off-the-shelf regression here is
counterfactually fair. Regressing Y on X alone obeys the FTU criterion but is not counterfactually
fair, so omitting A (FTU) may introduce unfairness into an otherwise fair world.
Scenario 2: High Crime Regions. A city government wants to estimate crime rates by neighborhood to allocate policing resources. Its analyst constructed training data by merging (1) a registry of
residents containing their neighborhood X and race A, with (2) police records of arrests, giving each
resident a binary label with Y = 1 indicating a criminal arrest record. Due to historically segregated
housing, the location X depends on A. Locations X with more police resources have larger numbers
of arrests Y . And finally, U represents the totality of socioeconomic factors and policing practices
that both influence where an individual may live and how likely they are to be arrested and charged.
This can all be seen in Figure 1(b).
In this example, higher observed arrest rates in some neighborhoods are due to greater policing there,
not because people of different races are any more or less likely to break the law. The label Y = 0
does not mean someone has never committed a crime, but rather that they have not been caught. If
individuals in the training data have not already had equal opportunity, algorithms enforcing EO will
not remedy such unfairness. In contrast, a counterfactually fair approach would model differential
enforcement rates using U and base predictions on this information rather than on X directly.
In general, we need a multistage procedure in which we first derive latent variables U , and then based
on them we minimize some loss with respect to Y . This is the core of the algorithm discussed next.
3.2 Implications
One simple but important implication of the definition of counterfactual fairness is the following:
Lemma 1. Let G be the causal graph of the given model (U, V, F). Then Ŷ will be counterfactually
fair if it is a function of the non-descendants of A.
Proof. Let W be any non-descendant of A in G. Then W_{A←a}(U) and W_{A←a′}(U) have the same
distribution by the three inferential steps in Section 2.2. Hence, the distribution of any function Ŷ of
the non-descendants of A is invariant with respect to the counterfactual values of A.
This does not exclude using a descendant W of A as a possible input to Ŷ. However, this will only
be possible in the case where the overall dependence of Ŷ on A disappears, which will not happen in
general. Hence, Lemma 1 provides the most straightforward way to achieve counterfactual fairness.
In some scenarios, it is desirable to define path-specific variations of counterfactual fairness that allow
for the inclusion of some descendants of A, as discussed by [21, 27] and the Supplementary Material.
Ancestral closure of protected attributes. Suppose that a parent of a member of A is not in A.
Counterfactual fairness allows for the use of it in the definition of Ŷ. If this seems counterintuitive,
then we argue that the fault should be at the postulated set of protected attributes rather than with the
definition of counterfactual fairness, and that typically we should expect set A to be closed under
ancestral relationships given by the causal graph. For instance, if Race is a protected attribute, and
Mother's race is a parent of Race, then it should also be in A.
Dealing with historical biases and an existing fairness paradox. The explicit difference between
Ŷ and Y allows us to tackle historical biases. For instance, let Y be an indicator of whether a client
defaults on a loan, while Ŷ is the actual decision of giving the loan. Consider the DAG A → Y,
shown in Figure 1(c) with the explicit inclusion of set U of independent background variables. Y is
the objectively ideal measure for decision making, the binary indicator of the event that the individual
defaults on a loan. If A is postulated to be a protected attribute, then the predictor Ŷ = Y = f_Y(A, U)
is not counterfactually fair, with the arrow A → Y being (for instance) the result of a world that
punishes individuals in a way that is out of their control. Figure 1(d) shows a finer-grained model,
where the path is mediated by a measure of whether the person is employed, which is itself caused
by two background factors: one representing whether the person hiring is prejudiced, and the other
the employee's qualifications. In this world, A is a cause of defaulting, even if mediated by other
variables³. The counterfactual fairness principle however forbids us from using Y: using the twin
network⁴ of Pearl [28], we see in Figure 1(e) that Y_a and Y_{a′} need not be identically distributed
given the background variables.
In contrast, any function of variables not descendants of A can be used as a basis for fair decision
making. This means that any variable Ŷ defined by Ŷ = g(U) will be counterfactually fair for any
function g(·). Hence, given a causal model, the functional defined by the function g(·) minimizing
some predictive error for Y will satisfy the criterion, as proposed in Section 4.1. We are essentially
learning a projection of Y into the space of fair decisions, removing historical biases as a by-product.
Counterfactual fairness also provides an answer to some problems on the incompatibility of fairness
criteria. In particular, consider the following problem raised independently by different authors (e.g.,
³ For example, if the function determining employment is f_E(A, P, Q) ≡ I(Q > 0, P = 0 or A ≠ a), then an individual
with sufficient qualifications and prejudiced potential employer may have a different counterfactual employment
value for A = a compared to A = a′, and a different chance of default.
⁴ In a nutshell, this is a graph that simultaneously depicts "multiple worlds" parallel to the factual realizations.
In this graph, all multiple worlds share the same background variables, but with different consequences in the
remaining variables depending on which counterfactual assignments are provided.
[7, 22]), illustrated below for the binary case: ideally, we would like our predictors to obey both
Equality of Opportunity and the predictive parity criterion defined by satisfying
P(Y = 1 | Ŷ = 1, A = 1) = P(Y = 1 | Ŷ = 1, A = 0),
as well as the corresponding equation for Ŷ = 0. It has been shown that if Y and A are marginally
associated (e.g., recidivism and race are associated) and Y is not a deterministic function of Ŷ,
then the two criteria cannot be reconciled. Counterfactual fairness throws a light in this scenario,
suggesting that both EO and predictive parity may be insufficient if Y and A are associated: assuming
that A and Y are unconfounded (as expected for demographic attributes), this is the result of A being
a cause of Y. By counterfactual fairness, we should not want to use Y as a basis for our decisions,
instead aiming at some function Y_{⊥A} of variables which are not caused by A but are predictive of Y.
Ŷ is defined in such a way that it is an estimate of the "closest" Y_{⊥A} to Y according to some preferred
risk function. This makes the incompatibility between EO and predictive parity irrelevant, as A and
Y_{⊥A} will be independent by construction given the model assumptions.
4 Implementing Counterfactual Fairness
As discussed in the previous Section, we need to relate Ŷ to Y if the predictor is to be useful, and we
restrict Ŷ to be a (parameterized) function of the non-descendants of A in the causal graph following
Lemma 1. We next introduce an algorithm, then discuss assumptions that can be used to express
counterfactuals.
4.1 Algorithm
Let Ŷ ≡ g_θ(U, X_A) be a predictor parameterized by θ, such as a logistic regression or a neural
network, and where X_A ⊆ X are non-descendants of A. Given a loss function l(·, ·) such as
squared loss or log-likelihood, and training data D ≡ {(A^(i), X^(i), Y^(i))} for i = 1, 2, . . . , n, we
define L(θ) ≡ Σ_{i=1}^n E[l(y^(i), g_θ(U^(i), x_A^(i))) | x^(i), a^(i)] / n as the empirical loss to be minimized
with respect to θ. Each expectation is with respect to random variable U^(i) ∼ P_M(U | x^(i), a^(i)),
where P_M(U | x, a) is the conditional distribution of the background variables as given by a causal
model M that is available by assumption. If this expectation cannot be calculated analytically,
Markov chain Monte Carlo (MCMC) can be used to approximate it as in the following algorithm.
1: procedure FairLearning(D, M)        ▷ Learned parameters θ̂
2:   For each data point i ∈ D, sample m MCMC samples U_1^(i), . . . , U_m^(i) ∼ P_M(U | x^(i), a^(i)).
3:   Let D′ be the augmented dataset where each point (a^(i), x^(i), y^(i)) in D is replaced with the
     corresponding m points {(a^(i), x^(i), y^(i), u_j^(i))}.
4:   θ̂ ← argmin_θ Σ_{i′ ∈ D′} l(y^(i′), g_θ(U^(i′), x_A^(i′))).
5: end procedure
At prediction time, we report Ŷ ≡ E[Ŷ(U*, x*_A) | x*, a*] for a new data point (a*, x*).
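As a concrete illustration of FairLearning, the following sketch uses an invented toy causal model in which the posterior P_M(U | x, a) is a tractable Gaussian, so plain Monte Carlo sampling stands in for MCMC. All names, parameter values, and the generative model are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy causal model (illustrative): U ~ N(0,1), X = A + U + eps with
# eps ~ N(0, s2), and Y depends on U only.
s2 = 0.25
n = 500
A = rng.integers(0, 2, size=n).astype(float)
U_true = rng.normal(size=n)
X = A + U_true + rng.normal(scale=np.sqrt(s2), size=n)
Y = U_true + rng.normal(scale=0.1, size=n)

def sample_posterior_u(x, a, m):
    # Conjugate Gaussian posterior P(U | x, a) for this toy model:
    # mean = (x - a) / (1 + s2), variance = s2 / (1 + s2).
    mean = (x - a) / (1.0 + s2)
    sd = np.sqrt(s2 / (1.0 + s2))
    return rng.normal(mean, sd, size=m)

# FairLearning: augment each point with m posterior samples of U,
# then fit a linear predictor g_theta(U) by least squares on the
# augmented dataset.
m = 10
U_aug = np.concatenate([sample_posterior_u(X[i], A[i], m) for i in range(n)])
Y_aug = np.repeat(Y, m)
design = np.column_stack([np.ones_like(U_aug), U_aug])
theta, *_ = np.linalg.lstsq(design, Y_aug, rcond=None)

def predict(x_new, a_new):
    # At prediction time, report the expectation of g_theta over P(U | x, a),
    # which for a linear predictor reduces to plugging in the posterior mean.
    u_mean = (x_new - a_new) / (1.0 + s2)
    return theta[0] + theta[1] * u_mean
```

The predictor depends on A only through the abduction of U, which is what makes the construction counterfactually fair in this model.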
Deconvolution perspective. The algorithm can be understood as a deconvolution approach that,
given observables A ∪ X, extracts its latent sources and pipelines them into a predictive model. We
advocate that counterfactual assumptions must underlie all approaches that claim to extract the
sources of variation of the data as "fair" latent components. As an example, Louizos et al. [24] start
from the DAG A → X ← U to extract P(U | X, A). As U and A are not independent given X in this
representation, a type of penalization is enforced to create a posterior P_fair(U | A, X) that is close
to the model posterior P(U | A, X) while satisfying P_fair(U | A = a, X) ≈ P_fair(U | A = a′, X).
But this is neither necessary nor sufficient for counterfactual fairness. The model for X given A
and U must be justified by a causal mechanism, and that being the case, P(U | A, X) requires no
postprocessing. As a matter of fact, model M can be learned by penalizing empirical dependence
measures between U and pa_i for a given V_i (e.g. Mooij et al. [26]), but this concerns M and not Ŷ,
and is motivated by explicit assumptions about structural equations, as described next.
4.2 Designing the Input Causal Model
Model M must be provided to algorithm FairLearning. Although this is well understood, it is
worthwhile remembering that causal models always require strong assumptions, even more so when
making counterfactual claims [8]. Counterfactual assumptions such as structural equations are in
general unfalsifiable even if interventional data for all variables is available. This is because there
are infinitely many structural equations compatible with the same observable distribution [28], be it
observational or interventional. Having passed testable implications, the remaining components of a
counterfactual model should be understood as conjectures formulated according to the best of our
knowledge. Such models should be deemed provisional and prone to modifications if, for example,
new data containing measurement of variables previously hidden contradict the current model.
We point out that we do not need to specify a fully deterministic model, and structural equations can
be relaxed as conditional distributions. In particular, the concept of counterfactual fairness holds
under three levels of assumptions of increasing strength:
Level 1. Build Ŷ using only the observable non-descendants of A. This only requires partial
causal ordering and no further causal assumptions, but in many problems there will be few, if any,
observables which are not descendants of protected demographic factors.
Level 2. Postulate background latent variables that act as non-deterministic causes of observable
variables, based on explicit domain knowledge and learning algorithms⁵. Information about X is
passed to Ŷ via P(U | x, a).
Level 3. Postulate a fully deterministic model with latent variables. For instance, the distribution
P(V_i | pa_i) can be treated as an additive error model, V_i = f_i(pa_i) + e_i [31]. The error term e_i then
becomes an input to Ŷ as calculated from the observed variables. This maximizes the information
extracted by the fair predictor Ŷ.
4.3 Further Considerations on Designing the Input Causal Model
One might ask what we can lose by defining causal fairness measures involving only non-counterfactual
causal quantities, such as enforcing P(Ŷ = 1 | do(A = a)) = P(Ŷ = 1 | do(A = a′))
instead of our counterfactual criterion. The reason is that the above equation is only a constraint
on an average effect. Obeying this criterion provides no guarantees against, for example, having
half of the individuals being strongly "negatively" discriminated and half of the individuals strongly
"positively" discriminated. We advocate that, for fairness, society should not be satisfied in pursuing
only counterfactually-free guarantees. While one may be willing to claim post hoc that the equation
above masks no balancing effect so that individuals receive approximately the same distribution of
outcomes, that itself is just a counterfactual claim in disguise. Our approach is to make counterfactual
assumptions explicit. When unfairness is judged to follow only some "pathways" in the causal graph
(in a sense that can be made formal, see [21, 27]), nonparametric assumptions about the independence
of counterfactuals may suffice, as discussed by [27]. In general, nonparametric assumptions may not
provide identifiable adjustments even in this case, as also discussed in our Supplementary Material.
If competing models with different untestable assumptions are available, there are ways of simultaneously
enforcing a notion of approximate counterfactual fairness in all of them, as introduced by us in
[32]. Other alternatives include exploiting bounds on the contribution of hidden variables [29, 33].
Another issue is the interpretation of causal claims involving demographic variables such as race
and sex. Our view is that such constructs are the result of translating complex events into random
variables and, despite some controversy, we consider it counterproductive to claim that e.g. race and sex
cannot be causes. An idealized intervention on some A at a particular time can be seen as a notational
shortcut to express a conjunction of more specific interventions, which may be individually doable
but jointly impossible in practice. It is the plausibility of complex, even if impossible to practically
manipulate, causal chains from A to Y that allows us to claim that unfairness is real [11]. Experiments
for constructs exist, such as randomizing names in job applications to make them race-blind. They do
not contradict the notion of race as a cause, and can be interpreted as an intervention on a particular
aspect of the construct "race," such as "race perception" (e.g. Section 4.4.4 of [29]).
⁵ In some domains, it is actually common to build a model entirely around latent constructs with few or no
observable parents nor connections among observed variables [2].
5 Illustration: Law School Success
We illustrate our approach on a practical problem that requires fairness, the prediction of success in
law school. A second problem, understanding the contribution of race to police stops, is described in
the Supplementary Material. Following closely the usual framework for assessing causal models in
the machine learning literature, the goal of this experiment is to quantify how our algorithm behaves
with finite sample sizes while assuming ground truth compatible with a synthetic model.
Problem definition: Law school success
The Law School Admission Council conducted a survey across 163 law schools in the United States
[35]. It contains information on 21,790 law students such as their entrance exam scores (LSAT), their
grade-point average (GPA) collected prior to law school, and their first year average grade (FYA).
Given this data, a school may wish to predict if an applicant will have a high FYA. The school would
also like to make sure these predictions are not biased by an individual's race and sex. However, the
LSAT, GPA, and FYA scores, may be biased due to social factors. We compare our framework with
two unfair baselines: 1. Full: the standard technique of using all features, including sensitive features
such as race and sex to make predictions; 2. Unaware: fairness through unawareness, where we
do not use race and sex as features. For comparison, we generate predictors Ŷ for all models using
logistic regression.
Fair prediction. As described in Section 4.2, there are three ways in which we can model a
counterfactually fair predictor of FYA. Level 1 uses any features which are not descendants of race
and sex for prediction. Level 2 models latent ?fair? variables which are parents of observed variables.
These variables are independent of both race and sex. Level 3 models the data using an additive error
model, and uses the independent error terms to make predictions. These models make increasingly
strong assumptions corresponding to increased predictive power. We split the dataset 80/20 into a
train/test set, preserving label balance, to evaluate the models.
As we believe LSAT, GPA, and FYA are all biased by race and sex, we cannot use any observed
features to construct a counterfactually fair predictor as described in Level 1.
In Level 2, we postulate a latent variable, a student's knowledge (K), that affects GPA, LSAT, and
FYA scores. The causal graph corresponding to this model is shown in Figure 2 (Level 2). This is a
short-hand for the distributions:
GPA ∼ N(b_G + w_G^K K + w_G^R R + w_G^S S, σ_G),
LSAT ∼ Poisson(exp(b_L + w_L^K K + w_L^R R + w_L^S S)),
FYA ∼ N(w_F^K K + w_F^R R + w_F^S S, 1),
K ∼ N(0, 1)
We perform inference on this model using an observed training set to estimate the posterior distribution
of K. We use the probabilistic programming language Stan [34] to learn K. We call the predictor
constructed using K, Fair K.
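For intuition about what Stan infers here, the posterior over K has a closed form if the Poisson likelihood for LSAT is replaced by a Gaussian one, a simplification of the model above; the parameter values below are entirely hypothetical.

```python
def posterior_k(gpa, lsat, r, s, params):
    """Closed-form Gaussian posterior for K, assuming the Poisson LSAT
    likelihood is replaced with a Gaussian one for tractability (a
    simplification of the paper's Stan model; parameters are hypothetical)."""
    p = params
    # Residuals after removing the race/sex contributions from each score.
    rg = gpa - p['bG'] - p['wGR'] * r - p['wGS'] * s
    rl = lsat - p['bL'] - p['wLR'] * r - p['wLS'] * s
    # Standard conjugate update: prior K ~ N(0, 1), two Gaussian likelihoods.
    prec = 1.0 + p['wGK'] ** 2 / p['sG'] ** 2 + p['wLK'] ** 2 / p['sL'] ** 2
    mean = (p['wGK'] * rg / p['sG'] ** 2 + p['wLK'] * rl / p['sL'] ** 2) / prec
    return mean, 1.0 / prec  # posterior mean and variance of K

params = dict(bG=3.0, wGK=0.5, wGR=0.2, wGS=-0.1, sG=0.5,
              bL=35.0, wLK=2.0, wLR=1.5, wLS=0.5, sL=3.0)
k_mean, k_var = posterior_k(gpa=3.4, lsat=38.0, r=0.0, s=0.0, params=params)
```

Each observed score pulls the posterior mean of K away from the prior mean of zero in proportion to its weight and inversely to its noise variance; the full model replaces this closed form with MCMC.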
Figure 2: Left: A causal model for the problem of predicting law school success fairly. Right:
Density plots of predicted FYA_a and FYA_{a′}.
In Level 3, we model GPA, LSAT, and FYA as continuous variables with additive error terms
independent of race and sex (that may in turn be correlated with one another). This model is shown
Table 1: Prediction results using logistic regression. Note that we must sacrifice a small amount of
accuracy to ensure counterfactually fair prediction (Fair K, Fair Add), versus the models that use
unfair features: GPA, LSAT, race, sex (Full, Unaware).

        Full    Unaware   Fair K   Fair Add
RMSE    0.873   0.894     0.929    0.918
in Figure 2 (Level 3), and is expressed by:
GPA  = b_G + w_G^R R + w_G^S S + ε_G,   ε_G ∼ p(ε_G)
LSAT = b_L + w_L^R R + w_L^S S + ε_L,   ε_L ∼ p(ε_L)
FYA  = b_F + w_F^R R + w_F^S S + ε_F,   ε_F ∼ p(ε_F)
We estimate the error terms ε_G, ε_L by first fitting two models that each use race and sex to individually
predict GPA and LSAT. We then compute the residuals of each model (e.g., ε_G = GPA − Ŷ_GPA(R, S)).
We use these residual estimates of ε_G, ε_L to predict FYA. We call this Fair Add.
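The Fair Add construction can be sketched on synthetic data as follows; the generative coefficients are invented, and ordinary least squares stands in for the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the law-school variables (illustrative only).
n = 1000
R = rng.integers(0, 2, size=n).astype(float)  # race (binary here)
S = rng.integers(0, 2, size=n).astype(float)  # sex
eps_g = rng.normal(size=n)                    # fair latent error terms
eps_l = rng.normal(size=n)
GPA = 3.0 + 0.2 * R - 0.1 * S + eps_g
LSAT = 35.0 + 1.5 * R + 0.5 * S + eps_l
FYA = 0.8 * eps_g + 0.4 * eps_l + rng.normal(scale=0.1, size=n)

def residualize(target, R, S):
    """Regress target on (1, R, S) and return the residuals."""
    Z = np.column_stack([np.ones_like(R), R, S])
    coef, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return target - Z @ coef

# Fair Add: keep only the parts of GPA and LSAT not explained by R and S.
res_g = residualize(GPA, R, S)
res_l = residualize(LSAT, R, S)

# The residuals are orthogonal to the protected attributes by construction,
# so a predictor of FYA built on them does not use R or S.
features = np.column_stack([np.ones(n), res_g, res_l])
w, *_ = np.linalg.lstsq(features, FYA, rcond=None)
```

Because least-squares residuals are orthogonal to the regressors, the resulting features carry no linear information about race or sex, which is exactly what the Level 3 error terms are meant to capture.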
Accuracy. We compare the RMSE achieved by logistic regression for each of the models on the test
set in Table 1. The Full model achieves the lowest RMSE as it uses race and sex to more accurately
reconstruct FYA. Note that in this case, this model is not fair even if the data was generated by one of
the models shown in Figure 2 as it corresponds to Scenario 3. The (also unfair) Unaware model still
uses the unfair variables GPA and LSAT, but because it does not use race and sex it cannot match the
RMSE of the Full model. As our models satisfy counterfactual fairness, they trade off some accuracy.
Our first model Fair K uses weaker assumptions and thus the RMSE is highest. Using the Level 3
assumptions, as in Fair Add we produce a counterfactually fair model that trades slightly stronger
assumptions for lower RMSE.
Counterfactual fairness. We would like to empirically test whether the baseline methods are
counterfactually fair. To do so we will assume the true model of the world is given by Figure 2,
(Level 2). We can fit the parameters of this model using the observed data and evaluate counterfactual
fairness by sampling from it. Specifically, we will generate samples from the model given either
the observed race and sex, or counterfactual race and sex variables. We will fit models to both the
original and counterfactual sampled data and plot how the distribution of predicted FYA changes for
both baseline models. Figure 2 shows this, where each row corresponds to a baseline predictor and
each column corresponds to the counterfactual change. In each plot, the blue distribution is density of
predicted FYA for the original data and the red distribution is this density for the counterfactual data. If
a model is counterfactually fair we would expect these distributions to lie exactly on top of each other.
Instead, we note that the Full model exhibits counterfactual unfairness for all counterfactuals except
sex. We see a similar trend for the Unaware model, although it is closer to being counterfactually
fair. To see why these models seem to be fair w.r.t. sex we can look at weights of the DAG which
generates the counterfactual data. Specifically the DAG weights from (male, female) to GPA are
(0.93, 1.06) and from (male, female) to LSAT are (1.1, 1.1). Thus, these models are fair w.r.t. sex
simply because of a very weak causal link between sex and GPA/LSAT.
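This style of check can be sketched in a few lines: regenerate the descendants of A under a swapped attribute while holding the background variable fixed, then compare each predictor's factual and counterfactual outputs. The linear structural model below is an invented stand-in, not the fitted law-school model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed generative model for the check (illustrative): X = 2*A + U.
n = 2000
A = rng.integers(0, 2, size=n).astype(float)
U = rng.normal(size=n)
X = 2.0 * A + U

def full_pred(x, a):
    # Uses the descendant X directly (not counterfactually fair).
    return 0.5 * x

def fair_pred(x, a):
    # Uses only the abducted background variable U = X - 2A.
    return 0.5 * (x - 2.0 * a)

# Counterfactual data: flip A, hold U fixed, regenerate descendants of A.
A_cf = 1.0 - A
X_cf = 2.0 * A_cf + U

gap_full = np.abs(full_pred(X, A) - full_pred(X_cf, A_cf)).mean()
gap_fair = np.abs(fair_pred(X, A) - fair_pred(X_cf, A_cf)).mean()
# gap_fair is (numerically) zero, while gap_full is not: the full model's
# predictions shift when the protected attribute is counterfactually changed.
```

Overlaying the two prediction distributions, as in Figure 2, is the density-plot version of comparing these gaps.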
6 Conclusion
We have presented a new model of fairness we refer to as counterfactual fairness. It allows us
to propose algorithms that, rather than simply ignoring protected attributes, are able to take into
account the different social biases that may arise towards individuals based on ethically sensitive
attributes and compensate for these biases effectively. We experimentally contrasted our approach
with previous fairness approaches and show that our explicit causal models capture these social biases
and make clear the implicit trade-off between prediction accuracy and fairness in an unfair world. We
propose that fairness should be regulated by explicitly modeling the causal structure of the world.
Criteria based purely on probabilistic independence cannot satisfy this and are unable to address how
unfairness is occurring in the task at hand. By providing such causal tools for addressing fairness
questions we hope we can provide practitioners with customized techniques for solving a wide array
of fairness modeling problems.
Acknowledgments
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1. CR
acknowledges additional support under the EPSRC Platform Grant EP/P022529/1. We thank Adrian
Weller for insightful feedback, and the anonymous reviewers for helpful comments.
| 6995 |@word version:1 stronger:1 seems:1 nd:1 bf:1 sex:22 justice:2 adrian:2 willing:1 closure:1 zliobaite:1 attainable:1 substitution:1 series:1 score:4 united:1 punishes:1 contains:1 offering:1 interestingly:1 bilal:3 longitudinal:1 past:1 existing:2 current:1 comparing:1 manuel:2 protection:1 assigning:1 must:8 applicant:1 john:2 evans:1 stine:1 happen:2 additive:5 entrance:1 fn:1 treating:1 plot:3 discrimination:8 alone:1 half:2 amir:1 bolukbasi:1 brennan:1 core:1 short:1 record:2 lsat:14 provides:5 lending:1 node:1 location:2 org:1 provisional:1 admission:1 mathematical:1 constructed:2 differential:1 symposium:1 driver:1 scholkopf:1 descendant:11 fitting:1 advocate:2 pathway:1 owner:1 introduce:3 sacrifice:1 mask:2 indeed:1 expected:1 behavior:1 nor:2 grade:2 company:1 gov:1 actual:4 pf:3 ua:3 becomes:1 provided:2 increasing:1 notation:3 unfairness:7 maximizes:1 linda:1 formalizing:1 suffice:1 what:1 moreover:3 agnostic:3 lowest:1 substantially:1 argmin:1 morgenstern:1 interpreted:1 differing:1 unified:2 unobserved:1 guarantee:2 fellow:1 quantitative:1 act:1 charge:1 tackle:1 finance:1 nutshell:1 exactly:1 um:1 wrong:1 classifier:1 uk:3 control:4 underlie:1 grant:2 intervention:5 impartial:1 causally:1 scientist:1 understood:3 qualification:6 aiming:1 consequence:2 despite:2 path:2 approximately:1 black:2 might:2 someone:1 discriminatory:2 obeys:1 directed:2 acknowledgment:1 uy:3 enforces:1 practical:1 testing:1 practice:3 vi:11 implement:1 procedure:3 area:3 empirical:2 inferential:1 projection:1 word:2 tolga:1 subpopulation:1 integrating:1 refers:1 suggest:1 get:1 cannot:6 close:1 selection:1 judged:1 context:3 impossible:2 risk:6 influence:1 descending:1 live:1 equivalent:1 dean:1 charged:1 roth:2 deterministic:5 reviewer:1 straightforward:1 primitive:1 regardless:1 independently:2 caught:1 focused:1 survey:3 maximizing:1 simplicity:2 immediately:1 stats:1 insight:1 array:1 counterintuitive:1 population:1 notion:7 variation:3 counterfactually:16 
analogous:1 construction:2 suppose:1 programming:1 us:5 distinguishing:1 designing:4 harvard:1 element:2 dawid:1 satisfying:2 trend:1 mistreatment:1 observed:11 epsrc:2 ep:2 factual:3 preprint:7 capture:3 calculate:1 region:1 verwer:1 ordering:1 russell:2 trade:4 jewell:1 highest:1 intuition:2 ideally:1 multistage:1 employment:2 halpern:1 controversy:1 sicco:1 solving:1 policing:4 predictive:11 purely:1 negatively:1 crussell:1 eric:1 basis:2 observables:2 seth:1 indirect:1 differently:1 isabel:2 chapter:1 arrested:1 collide:1 train:1 describe:2 effective:1 monte:1 shortcoming:1 london:1 zemel:3 kevin:2 outside:1 neighborhood:3 outcome:5 encoded:1 kai:1 larger:1 warwick:1 supplementary:6 solve:1 reconstruct:1 wg:5 otherwise:1 favor:1 statistic:3 objectively:1 dieterich:1 think:1 jointly:1 highlighted:1 itself:3 housing:1 doable:1 ucl:1 propose:3 jamie:1 product:1 saligrama:1 relevant:1 realization:3 omer:1 achieve:1 exploiting:1 parent:6 assessing:2 produce:1 adam:1 tim:1 depending:2 exam:1 andrew:1 develop:1 ac:3 illustrate:2 derive:2 school:11 job:2 strong:4 throw:1 predicted:4 judge:1 quantify:1 closely:3 attribute:20 modifying:1 raghavan:1 observational:1 programmer:1 material:6 implementing:2 muhammad:3 translating:1 require:1 government:1 forcefully:1 f1:1 anonymous:1 hold:1 practically:1 around:1 credit:2 ground:1 exp:1 mapping:1 predict:4 claim:7 automate:1 matthew:1 driving:1 neel:1 achieves:1 released:1 lose:1 label:3 maker:1 council:1 individually:2 sensitive:2 wl:5 agrees:1 create:1 city:1 tool:3 minimization:1 hope:1 offs:1 mit:1 fya:22 always:1 modified:1 rather:4 kalai:1 pn:1 shelf:1 unconfounded:1 incompatibility:2 factorizes:1 avoid:1 cr:1 publication:1 conjunction:1 properly:1 notational:1 likelihood:1 abduction:1 contrast:2 kim:1 sense:4 baseline:5 helpful:1 inference:12 dependent:1 i0:2 typically:1 a0:9 initially:1 hidden:2 relation:2 interested:1 overall:1 classification:3 issue:2 among:1 orientation:1 html:1 development:1 raised:1 platform:1 
fairly:1 art:1 field:1 aware:1 construct:5 equal:2 beach:1 sampling:1 never:1 having:3 pai:7 represents:1 look:1 icml:1 fairness:63 yu:1 minimized:1 contaminated:1 others:1 report:2 richard:3 inherent:1 kilbertus:1 few:2 simultaneously:2 national:1 asian:1 individual:33 replaced:2 william:1 interest:3 mining:2 dwork:2 insurance:4 regressing:3 mahoney:1 weakness:1 male:3 gpa:16 light:1 chain:2 implication:4 closer:1 partial:1 mullainathan:1 necessary:3 causal:47 theoretical:2 homemaker:1 increased:1 wfr:2 instance:5 column:1 modeling:8 v15:1 compelling:1 measuring:1 assignment:1 a6:1 addressing:1 subset:2 predictor:18 socioeconomic:1 johnson:2 conducted:1 weller:2 answer:1 randomizing:1 synthetic:1 st:1 density:19 international:3 randomized:1 discriminating:1 person:3 epidemiology:1 ancestral:2 probabilistic:2 off:3 michael:1 squared:1 postulate:4 satisfied:1 containing:2 choose:1 woman:1 external:1 creating:1 expert:1 disguise:1 shpitser:1 american:1 ricardo:2 li:1 suggesting:1 potential:1 aggressive:2 exclude:1 account:3 twin:2 student:2 matter:1 satisfy:3 postulated:2 explicitly:3 caused:4 idealized:1 blind:1 bg:2 depends:2 view:1 break:1 closed:1 race:33 analyze:1 counterfactuals:9 red:5 start:1 xa0:1 bayes:1 parallel:1 simon:1 rmse:6 contribution:4 minimize:1 air:3 accuracy:5 who:1 correspond:1 counterproductive:1 lkopf:2 weak:1 itcs:1 accurately:1 produced:1 marginally:1 carlo:1 drive:1 finer:1 unaffected:1 janzing:3 ed:1 definition:19 against:4 pp:7 surrey:1 james:2 obvious:1 proof:1 associated:3 modeler:1 judea:1 stop:1 sampled:1 dataset:3 hardt:3 treatment:2 exacerbate:1 counterfactual:49 ask:2 knowledge:7 car:6 khandani:1 carefully:1 actually:2 higher:1 supervised:1 follow:3 specify:1 wei:1 done:1 box:1 strongly:2 generality:1 though:1 xa:1 implicit:1 just:1 heidari:1 hand:4 ei:2 assessment:4 rodriguez:2 resident:2 logistic:4 perhaps:1 pricing:1 believe:1 usa:1 effect:2 omitting:1 validity:1 true:1 remedy:1 deliberately:1 contain:2 hence:3 requiring:1 
equality:5 moritz:2 regularization:1 analytically:1 illustrated:1 white:3 hiring:2 arrest:4 criterion:11 generalized:1 demonstrate:2 interface:1 passage:1 silva:3 postprocessing:1 reasoning:2 variational:1 consideration:3 recently:1 fi:3 common:1 behaves:1 functional:1 empirically:1 discriminated:3 overview:1 patent:1 debiasing:1 jl:1 association:1 interpretation:1 discussed:8 louizos:2 employee:1 refer:1 measurement:1 cambridge:1 dag:5 mother:1 rd:1 pm:3 inclusion:2 akaho:1 language:1 had:3 specification:1 similarity:3 pitassi:2 base:1 add:4 closest:1 posterior:5 recent:2 female:3 perspective:3 irrelevant:1 scenario:8 certain:5 blog:1 binary:4 success:6 posthoc:1 fault:1 societal:1 joshua:1 scoring:1 krishna:3 seen:2 additional:1 remembering:1 preserving:1 relaxed:1 employed:6 greater:1 commentary:1 accident:5 eo:4 multiple:2 desirable:1 full:7 d0:2 alan:5 match:1 determination:1 plausibility:1 calculation:1 long:3 compensate:1 totality:1 equally:1 manipulate:1 gummadi:3 impact:5 ensuring:2 prediction:27 basic:2 regression:6 involving:2 essentially:1 metric:4 expectation:2 poisson:1 arxiv:16 represent:1 faisal:2 sometimes:1 achieved:1 mkusner:1 justified:1 want:2 receive:1 compas:1 background:12 else:2 source:2 crucial:1 sch:2 biased:5 swapped:8 archive:1 sure:1 comment:1 thing:1 member:1 reingold:1 name:1 seem:2 parascandolo:1 practitioner:2 call:2 structural:8 leverage:1 ideal:1 intermediate:1 split:1 embeddings:1 identically:1 independence:2 fit:2 affect:1 restrict:1 competing:1 registry:1 idea:1 administration:1 defaulting:1 whether:6 motivated:2 allocate:1 url:1 passed:2 peter:2 york:1 cause:10 action:1 useful:1 n510129:1 detailed:1 clear:4 ftu:4 amount:1 nonparametric:2 kamiran:2 category:1 generate:2 http:2 fz:2 exist:1 notice:1 track:1 blue:1 diverse:1 express:2 group:1 loftus:3 interventional:3 changing:2 neither:1 penalizing:1 v1:2 graph:11 excludes:1 year:1 enforced:1 turing:7 package:1 parameterized:2 sakuma:1 employer:1 throughout:1 swersky:2 decide:1 
pursuing:1 wu:1 earning:2 incompatible:1 prefer:1 decision:12 banking:1 entirely:1 bound:1 gomez:2 tackled:1 identifiable:1 annual:1 strength:2 constraint:1 generates:1 aspect:1 u1:1 kleinberg:1 anyone:1 argument:1 conjecture:1 recidivism:2 glymour:3 according:2 unawareness:3 belonging:1 remain:2 across:1 slightly:1 increasingly:1 son:1 kusner:2 joseph:1 making:5 modification:1 explained:1 invariant:1 taken:3 pipeline:1 legal:2 equation:17 calder:3 previously:1 resource:2 discus:2 turn:1 mechanism:1 enforcement:1 know:1 instrument:1 demographic:7 end:1 jabbari:1 adopted:1 available:3 obey:1 worthwhile:1 alternative:2 primer:1 shotaro:1 original:11 substitute:1 top:1 remaining:4 include:3 opportunity:7 toon:3 giving:2 testable:1 build:2 yz:3 uj:1 classical:1 society:1 unchanged:1 bl:2 implied:1 objective:1 already:1 quantity:2 question:1 dependence:3 usual:1 exhibit:1 regulated:1 dp:1 distance:1 separate:1 link:1 thank:1 unable:1 chris:1 argue:2 collected:1 fy:1 reason:2 enforcing:3 nina:1 assuming:2 consumer:1 analyst:1 copying:1 modeled:1 illustration:1 relationship:5 balance:1 minimizing:1 innovation:2 insufficient:1 difficult:1 unfortunately:1 october:1 fe:1 robert:1 relate:1 wfk:1 holding:2 statement:1 sage:1 disparate:5 bollen:2 perform:1 markov:1 finite:1 defining:2 witness:1 communication:1 committed:1 team:1 paradox:1 arbitrary:1 police:3 conflates:1 introduced:2 venkatesh:1 criminal:3 specified:1 connection:2 crime:5 learned:2 framing:1 pearl:9 nip:2 ic4:1 address:3 beyond:3 bar:1 proceeds:1 able:1 perception:1 below:1 yujia:1 belonged:1 program:1 max:1 including:1 power:1 event:2 treated:1 client:1 predicting:2 valera:2 indicator:2 residual:2 customized:1 representing:1 historically:1 brief:1 technology:1 stan:3 disappears:1 created:1 deemed:1 acknowledges:1 jun:1 naive:1 mediated:2 extract:3 autoencoder:1 prior:3 literature:2 understanding:2 discovery:2 nati:1 mooij:3 determining:1 segregated:1 law:13 toniann:2 fully:3 expect:2 loss:4 srebro:1 acyclic:1 
versus:1 triple:1 penalization:1 awareness:1 agent:1 sufficient:2 principle:1 foster:1 classifying:1 share:2 balancing:1 row:1 lo:1 compatible:2 prone:1 token:1 beate:1 concept:2 summary:1 supported:1 parity:6 free:2 unfairly:1 formal:3 weaker:1 bias:9 side:1 institute:5 allow:1 wide:1 distributed:1 feedback:1 calculated:2 default:3 world:17 evaluating:1 unaware:6 author:2 made:2 icdmw:1 coincide:1 preprocessing:1 historical:6 ya0:1 welling:1 social:3 approximate:2 contradict:2 observable:8 preferred:1 emphasize:1 dealing:1 assumed:2 forbids:1 continuous:2 latent:12 protected:15 why:1 table:2 toshihiro:1 learn:1 ca:1 expanding:1 ignoring:1 wightman:1 complex:2 zou:1 zafar:3 domain:3 obama:1 inherit:1 reconciled:1 spread:1 arrow:1 big:5 motivation:1 arise:1 noise:2 fair:54 positively:1 augmented:1 causality:4 depicts:1 lsac:2 untestable:1 wiley:2 christos:1 explicit:8 obeying:1 wish:2 lie:1 unfair:6 jmlr:1 grained:1 removing:1 specific:2 insightful:1 cynthia:2 nyu:1 evidence:2 concern:1 essential:1 deconvolution:2 workshop:1 merging:1 effectively:1 occurring:1 civil:1 intersection:1 simply:2 likely:6 infinitely:1 expressed:1 adjustment:1 ethical:2 chang:1 chouldechova:1 gender:1 corresponds:3 truth:1 satisfies:3 chance:1 kamishima:1 acm:1 lewis:1 extracted:1 conditional:2 goal:1 formulated:1 consequently:1 towards:2 price:2 man:1 shortcut:1 change:4 experimentally:1 loan:5 specifically:3 except:1 contrasted:2 sexual:1 determined:1 prejudice:1 berk:2 lemma:3 mexican:1 kearns:2 matt:1 ya:3 inadvertently:1 indicating:2 formally:2 college:1 aaron:1 carulla:1 support:1 people:2 providing:1 evaluate:2 mcmc:2 avoiding:1 correlated:1 |
Prototypical Networks for Few-shot Learning
Jake Snell
University of Toronto*
Vector Institute
Kevin Swersky
Twitter
Richard Zemel
University of Toronto
Vector Institute
Canadian Institute for Advanced Research
Abstract
We propose Prototypical Networks for the problem of few-shot classification, where
a classifier must generalize to new classes not seen in the training set, given only
a small number of examples of each new class. Prototypical Networks learn a
metric space in which classification can be performed by computing distances
to prototype representations of each class. Compared to recent approaches for
few-shot learning, they reflect a simpler inductive bias that is beneficial in this
limited-data regime, and achieve excellent results. We provide an analysis showing
that some simple design decisions can yield substantial improvements over recent
approaches involving complicated architectural choices and meta-learning. We
further extend Prototypical Networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
1 Introduction
Few-shot classification [22, 18, 15] is a task in which a classifier must be adapted to accommodate
new classes not seen in training, given only a few examples of each of these classes. A naive approach,
such as re-training the model on the new data, would severely overfit. While the problem is quite
difficult, it has been demonstrated that humans have the ability to perform even one-shot classification,
where only a single example of each new class is given, with a high degree of accuracy [18].
Two recent approaches have made significant progress in few-shot learning. Vinyals et al. [32]
proposed Matching Networks, which uses an attention mechanism over a learned embedding of the
labeled set of examples (the support set) to predict classes for the unlabeled points (the query set).
Matching Networks can be interpreted as a weighted nearest-neighbor classifier applied within an
embedding space. Notably, this model utilizes sampled mini-batches called episodes during training,
where each episode is designed to mimic the few-shot task by subsampling classes as well as data
points. The use of episodes makes the training problem more faithful to the test environment and
thereby improves generalization. Ravi and Larochelle [24] take the episodic training idea further
and propose a meta-learning approach to few-shot learning. Their approach involves training an
LSTM [11] to produce the updates to a classifier, given an episode, such that it will generalize well to
a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner
learns to train a custom model for each episode.
We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is
severely limited, we work under the assumption that a classifier should have a very simple inductive
bias. Our approach, Prototypical Networks, is based on the idea that there exists an embedding in
which points cluster around a single prototype representation for each class. In order to do this,
we learn a non-linear mapping of the input into an embedding space using a neural network and
take a class?s prototype to be the mean of its support set in the embedding space. Classification
is then performed for an embedded query point by simply finding the nearest class prototype. We
* Initial work done while at Twitter.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Prototypical Networks in the few-shot and zero-shot scenarios. Left: Few-shot prototypes
$c_k$ are computed as the mean of embedded support examples for each class. Right: Zero-shot
prototypes $c_k$ are produced by embedding class meta-data $v_k$. In either case, embedded query points
are classified via a softmax over distances to class prototypes: $p_\phi(y = k \mid x) \propto \exp(-d(f_\phi(x), c_k))$.
follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving
a high-level description of the class rather than a small number of labeled examples. We therefore
learn an embedding of the meta-data into a shared space to serve as the prototype for each class.
Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an
embedded query point.
In this paper, we formulate Prototypical Networks for both the few-shot and zero-shot settings.
We draw connections to Matching Networks in the one-shot setting, and analyze the underlying
distance function used in the model. In particular, we relate Prototypical Networks to clustering [4]
in order to justify the use of class means as prototypes when distances are computed with a Bregman
divergence, such as squared Euclidean distance. We find empirically that the choice of distance
is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On
several benchmark tasks, we achieve state-of-the-art performance. Prototypical Networks are simpler
and more efficient than recent meta-learning algorithms, making them an appealing approach to
few-shot and zero-shot learning.
2 Prototypical Networks
2.1 Notation
In few-shot classification we are given a small support set of $N$ labeled examples $S = \{(x_1, y_1), \dots, (x_N, y_N)\}$, where each $x_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector of an example and $y_i \in \{1, \dots, K\}$ is the corresponding label. $S_k$ denotes the set of examples labeled with class $k$.
2.2 Model
Prototypical Networks compute an $M$-dimensional representation $c_k \in \mathbb{R}^M$, or prototype, of each class through an embedding function $f_\phi : \mathbb{R}^D \to \mathbb{R}^M$ with learnable parameters $\phi$. Each prototype is the mean vector of the embedded support points belonging to its class:
$$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i) \tag{1}$$
Given a distance function $d : \mathbb{R}^M \times \mathbb{R}^M \to [0, +\infty)$, Prototypical Networks produce a distribution over classes for a query point $x$ based on a softmax over distances to the prototypes in the embedding space:
$$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))} \tag{2}$$
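As a concrete illustration of Equations (1) and (2) — not part of the original paper — the following minimal NumPy sketch computes class prototypes and classifies a query via a softmax over negative squared Euclidean distances. The function names and toy data are hypothetical.

```python
import numpy as np

def prototypes(embeddings, labels, num_classes):
    """Eq. (1): each prototype is the mean embedded support point of its class."""
    return np.stack([embeddings[labels == k].mean(axis=0)
                     for k in range(num_classes)])

def predict_proba(queries, protos):
    """Eq. (2): softmax over negative squared Euclidean distances."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# toy example: two classes embedded in a 2-D space
support = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 0.0], [4.0, 2.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)            # [[0, 1], [4, 1]]
probs = predict_proba(np.array([[1.0, 1.0]]), protos)
```

The query at $(1, 1)$ lies closer to the class-0 prototype, so the model assigns it higher probability for class 0.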
Learning proceeds by minimizing the negative log-probability $J(\phi) = -\log p_\phi(y = k \mid x)$ of the true class $k$ via SGD. Training episodes are formed by randomly selecting a subset of classes from
the training set, then choosing a subset of examples within each class to act as the support set and a
Algorithm 1 Training episode loss computation for Prototypical Networks. $N$ is the number of examples in the training set, $K$ is the number of classes in the training set, $N_C \le K$ is the number of classes per episode, $N_S$ is the number of support examples per class, $N_Q$ is the number of query examples per class. RANDOMSAMPLE$(S, N)$ denotes a set of $N$ elements chosen uniformly at random from set $S$, without replacement.

Input: Training set $D = \{(x_1, y_1), \dots, (x_N, y_N)\}$, where each $y_i \in \{1, \dots, K\}$. $D_k$ denotes the subset of $D$ containing all elements $(x_i, y_i)$ such that $y_i = k$.
Output: The loss $J$ for a randomly generated training episode.

$V \leftarrow$ RANDOMSAMPLE$(\{1, \dots, K\}, N_C)$  ▷ Select class indices for episode
for $k$ in $\{1, \dots, N_C\}$ do
    $S_k \leftarrow$ RANDOMSAMPLE$(D_{V_k}, N_S)$  ▷ Select support examples
    $Q_k \leftarrow$ RANDOMSAMPLE$(D_{V_k} \setminus S_k, N_Q)$  ▷ Select query examples
    $c_k \leftarrow \dfrac{1}{N_S} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)$  ▷ Compute prototype from support examples
end for
$J \leftarrow 0$  ▷ Initialize loss
for $k$ in $\{1, \dots, N_C\}$ do
    for $(x, y)$ in $Q_k$ do
        $J \leftarrow J + \dfrac{1}{N_C N_Q}\Big[\, d(f_\phi(x), c_k) + \log \sum_{k'} \exp(-d(f_\phi(x), c_{k'})) \Big]$  ▷ Update loss
    end for
end for
subset of the remainder to serve as query points. Pseudocode to compute the loss $J(\phi)$ for a training
episode is provided in Algorithm 1.
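The episode loop of Algorithm 1 can be sketched in a few lines of NumPy. This is an illustrative reading only: the identity map stands in for the learned embedding $f_\phi$, no gradient step is taken, and the toy dataset and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_loss(data, f, n_c, n_s, n_q):
    """One training episode in the spirit of Algorithm 1: sample n_c classes,
    split each class's examples into support/query sets, and average the
    query points' negative log-probability of their true class."""
    classes = rng.choice(len(data), size=n_c, replace=False)
    protos, queries = [], []
    for k in classes:
        idx = rng.permutation(len(data[k]))
        queries.append(data[k][idx[n_s:n_s + n_q]])
        protos.append(f(data[k][idx[:n_s]]).mean(axis=0))  # prototype, Eq. (1)
    protos = np.stack(protos)
    loss = 0.0
    for j, q in enumerate(queries):
        d = ((f(q)[:, None, :] - protos[None]) ** 2).sum(axis=-1)
        # d(f(x), c_j) + log sum_k' exp(-d(f(x), c_k')), summed over queries
        loss += (d[:, j] + np.log(np.exp(-d).sum(axis=1))).sum()
    return loss / (n_c * n_q)

# toy data: 5 well-separated classes of 10 two-dimensional points each;
# the identity map stands in for the learned embedding f_phi
data = [rng.normal(loc=3.0 * k, scale=0.1, size=(10, 2)) for k in range(5)]
loss = episode_loss(data, lambda x: x, n_c=3, n_s=2, n_q=4)
```

Because the toy classes are well separated, the episode loss is close to zero; in actual training, the gradient of this loss would be used to update $\phi$.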
2.3
Prototypical Networks as Mixture Density Estimation
For a particular class of distance functions, known as regular Bregman divergences [4], the Prototypical Networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence $d_\varphi$ is defined as:
$$d_\varphi(z, z') = \varphi(z) - \varphi(z') - (z - z')^T \nabla \varphi(z'), \tag{3}$$
where $\varphi$ is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance $\|z - z'\|^2$ and Mahalanobis distance.
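A quick numerical check of Equation (3), not from the original paper: taking $\varphi(z) = \|z\|^2$, which is strictly convex with gradient $2z$, the resulting Bregman divergence recovers squared Euclidean distance. The helper below is an illustrative sketch.

```python
import numpy as np

def bregman(phi, grad_phi, z, zp):
    """Eq. (3): d_phi(z, z') = phi(z) - phi(z') - (z - z')^T grad_phi(z')."""
    return phi(z) - phi(zp) - (z - zp) @ grad_phi(zp)

# phi(z) = ||z||^2 has gradient 2z; its Bregman divergence should be ||z - z'||^2
phi = lambda z: z @ z
grad_phi = lambda z: 2.0 * z

z = np.array([1.0, 2.0])
zp = np.array([3.0, -1.0])
d = bregman(phi, grad_phi, z, zp)    # ||z - zp||^2 = 4 + 9 = 13
```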
Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster
per class and each support point assigned to its corresponding class cluster. It has been shown [4]
for Bregman divergences that the cluster representative achieving minimal distance to its assigned
points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster
representatives given the support set labels when a Bregman divergence is used.
Moreover, any regular exponential family distribution $p_\psi(z \mid \theta)$ with parameters $\theta$ and cumulant function $\psi$ can be written in terms of a uniquely determined regular Bregman divergence [4]:
$$p_\psi(z \mid \theta) = \exp\{z^T \theta - \psi(\theta) - g_\psi(z)\} = \exp\{-d_\varphi(z, \mu(\theta)) - g_\varphi(z)\} \tag{4}$$
Consider now a regular exponential family mixture model with parameters $\Gamma = \{\theta_k, \pi_k\}_{k=1}^{K}$:
$$p(z \mid \Gamma) = \sum_{k=1}^{K} \pi_k\, p_\psi(z \mid \theta_k) = \sum_{k=1}^{K} \pi_k \exp(-d_\varphi(z, \mu(\theta_k)) - g_\varphi(z)) \tag{5}$$
Given $\Gamma$, inference of the cluster assignment $y$ for an unlabeled point $z$ becomes:
$$p(y = k \mid z) = \frac{\pi_k \exp(-d_\varphi(z, \mu(\theta_k)))}{\sum_{k'} \pi_{k'} \exp(-d_\varphi(z, \mu(\theta_{k'})))} \tag{6}$$
For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with $f_\phi(x) = z$ and $c_k = \mu(\theta_k)$. In this case, Prototypical Networks are effectively performing mixture density estimation with an exponential family distribution determined by $d_\varphi$. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space.
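The reduction from (6) to (2) in the equally-weighted case can be verified numerically. The sketch below, an illustration rather than part of the paper, assumes squared Euclidean distance and equal weights $\pi_k = 1/K$; the shared term $g_\varphi(z)$ cancels in the posterior and is omitted.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

protos = np.array([[0.0, 1.0], [4.0, 1.0], [2.0, 5.0]])
z = np.array([1.0, 2.0])
d = ((z - protos) ** 2).sum(axis=1)     # squared Euclidean to each prototype

# Eq. (2): prototypical prediction, a softmax over negative distances
p_proto = softmax(-d)

# Eq. (6): cluster posterior of an equal-weight mixture; the weights pi_k = 1/K
# multiply every term and cancel, so the posterior matches the prediction above
pi = np.full(len(protos), 1.0 / len(protos))
p_mix = pi * np.exp(-d)
p_mix = p_mix / p_mix.sum()
```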
2.4 Reinterpretation as a Linear Model
A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance $d(z, z') = \|z - z'\|^2$, the model in Equation (2) is equivalent to a linear model with a particular parameterization [21]. To see this, expand the term in the exponent:
$$-\|f_\phi(x) - c_k\|^2 = -f_\phi(x)^T f_\phi(x) + 2 c_k^T f_\phi(x) - c_k^T c_k \tag{7}$$
The first term in Equation (7) is constant with respect to the class $k$, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows:
$$2 c_k^T f_\phi(x) - c_k^T c_k = w_k^T f_\phi(x) + b_k, \quad \text{where } w_k = 2 c_k \text{ and } b_k = -c_k^T c_k \tag{8}$$
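The equivalence in (7)-(8) is easy to check numerically: the negative squared distances and the linear scores differ only by the class-independent constant $-f_\phi(x)^T f_\phi(x)$, so they induce identical softmax probabilities. The values below are an illustrative toy example, not from the paper.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

protos = np.array([[0.0, 1.0], [4.0, 1.0], [2.0, 5.0]])   # class prototypes c_k
f = np.array([1.0, 2.0])                                   # embedded query f_phi(x)

neg_sq = -((f - protos) ** 2).sum(axis=1)   # left side of Eq. (7): -||f - c_k||^2

# Eq. (8): w_k = 2 c_k and b_k = -c_k^T c_k
w = 2.0 * protos
b = -(protos ** 2).sum(axis=1)
linear = w @ f + b
# neg_sq and linear differ only by the class-independent term -f^T f,
# so they produce identical softmax probabilities
```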
We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in
this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence
to a linear model. We hypothesize this is because all of the required non-linearity can be learned
within the embedding function. Indeed, this is the approach that modern neural network classification
systems currently use, e.g., [16, 31].
2.5 Comparison to Matching Networks
Prototypical Networks differ from Matching Networks in the few-shot case with equivalence in the
one-shot scenario. Matching Networks [32] produce a weighted nearest neighbor classifier given the
support set, while Prototypical Networks produce a linear classifier when squared Euclidean distance
is used. In the case of one-shot learning, ck = xk since there is only one support point per class, and
Matching Networks and Prototypical Networks become equivalent.
A natural question is whether it makes sense to use multiple prototypes per class instead of just one.
If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning
scheme to further cluster the support points within a class. This has been proposed in Mensink
et al. [21] and Rippel et al. [27]; however both methods require a separate partitioning phase that is
decoupled from the weight updates, while our approach is simple to learn with ordinary gradient
descent methods.
Vinyals et al. [32] propose a number of extensions, including decoupling the embedding functions of
the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes
into account specific points in each episode. These could likewise be incorporated into Prototypical
Networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary
ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to
achieve the same level of performance using simple design choices, which we outline next.
2.6 Design Choices
Distance metric Vinyals et al. [32] and Ravi and Larochelle [24] apply Matching Networks using
cosine distance. However for both Prototypical Networks and Matching Networks any distance is
permissible, and we found that using squared Euclidean distance can greatly improve results for both.
For Prototypical Networks, we conjecture this is primarily due to cosine distance not being a Bregman
divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not
hold.
Episode composition A straightforward way to construct episodes, used in Vinyals et al. [32] and
Ravi and Larochelle [24], is to choose $N_C$ classes and $N_S$ support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 5-way classification and 1-shot learning, then training episodes could be composed of $N_C = 5$, $N_S = 1$. We have found, however, that it can be extremely beneficial to train with a higher $N_C$, or "way", than will be used at test-time. In our experiments, we tune the training $N_C$ on a held-out validation set. Another consideration is whether to match $N_S$, or "shot", at train and test-time. For Prototypical Networks, we found that it is usually best to train and test with the same "shot" number.
2.7 Zero-Shot Learning
Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector $v_k$ for each class. These could be determined in advance, or they could be learned from, e.g., raw text [8]. Modifying Prototypical Networks to deal with the zero-shot case is straightforward: we simply define $c_k = g_\vartheta(v_k)$ to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for Prototypical Networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding $g$ to have unit length; however, we do not constrain the query embedding $f$.
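A minimal sketch of the zero-shot construction above: prototypes are embeddings of class meta-data, normalized to unit length. The linear map and the random meta-data vectors here are hypothetical stand-ins for a learned embedding $g$.

```python
import numpy as np

def zero_shot_prototypes(meta, g):
    """c_k = g(v_k), constrained to unit length as described in Section 2.7."""
    c = np.stack([g(v) for v in meta])
    return c / np.linalg.norm(c, axis=1, keepdims=True)

# toy example: 5 class meta-data vectors and a hypothetical linear embedding g
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))            # maps 4-d meta-data into a 3-d space
meta = [rng.normal(size=4) for _ in range(5)]
protos = zero_shot_prototypes(meta, lambda v: W @ v)
```

Query points would then be classified exactly as in the few-shot case, via a softmax over distances from $f(x)$ to these prototypes.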
3 Experiments
For few-shot learning, we performed experiments on Omniglot [18] and the miniImageNet version
of ILSVRC-2012 [28] with the splits proposed by Ravi and Larochelle [24]. We perform zero-shot
experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) [34].
3.1 Omniglot Few-shot Classification
Omniglot [18] is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20
examples associated with each character, where each example is drawn by a different human subject.
We follow the procedure of Vinyals et al. [32] by resizing the grayscale images to 28 × 28 and
augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters
plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for
test. Our embedding architecture mirrors that used by Vinyals et al. [32] and is composed of four
convolutional blocks. Each block comprises a 64-filter 3 × 3 convolution, batch normalization layer
[12], a ReLU nonlinearity and a 2 × 2 max-pooling layer. When applied to the 28 × 28 Omniglot
images this architecture results in a 64-dimensional output space. We use the same encoder for
embedding both support and query points. All of our models were trained via SGD with Adam [13].
We used an initial learning rate of 10⁻³ and cut the learning rate in half every 2000 episodes. No
regularization was used other than batch normalization.
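The 64-dimensional output can be verified with simple shape arithmetic, assuming the 3 × 3 convolutions are 'same'-padded (an assumption; the text only says the blocks mirror Vinyals et al. [32]): each 2 × 2 max-pool halves the spatial side, flooring.

```python
def embedding_dim(side, blocks=4, channels=64):
    """Flattened output size: each block's 3x3 'same'-padded conv keeps
    the spatial side, and its 2x2 max-pool halves it (flooring)."""
    for _ in range(blocks):
        side //= 2
    return side * side * channels

dim = embedding_dim(28)  # 28 -> 14 -> 7 -> 3 -> 1, times 64 channels -> 64
```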
We trained Prototypical Networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes
and 5 query points per class. We found that it
is advantageous to match the training-shot with
the test-shot, and to use more classes (higher
"way") per training episode rather than fewer.
We compare against various baselines, including
the Neural Statistician [7], Meta-Learner LSTM
[24], MAML [9], and both the fine-tuned and
non-fine-tuned versions of Matching Networks
[32]. We computed classification accuracy for
our models averaged over 1,000 randomly generated episodes from the test set. The results
are shown in Table 1 and to our knowledge are
competitive with state-of-the-art on this dataset.
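A hedged sketch of how per-episode test accuracies are typically aggregated into a mean (and, when reported, a normal-approximation 95% confidence interval). The simulated accuracies below are placeholders, not results from the paper.

```python
import math
import random

def mean_and_ci95(values):
    """Mean and half-width of a normal-approximation 95% confidence
    interval over per-episode accuracies."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

# Simulated per-episode accuracies over 1,000 test episodes.
random.seed(0)
episode_accs = [0.95 + random.uniform(-0.03, 0.03) for _ in range(1000)]
mean_acc, half_width = mean_and_ci95(episode_accs)
```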
Figure 2 shows a sample t-SNE visualization
[20] of the embeddings learned by Prototypical
Networks. We visualize a subset of test characters from the same alphabet in order to gain
better insight, despite the fact that classes in
actual test episodes are likely to come from different alphabets. Even though the visualized
characters are minor variations of each other,
the network is able to cluster the hand-drawn
characters closely around the class prototypes.
Figure 2: A t-SNE visualization of the embeddings
learned by Prototypical networks on the Omniglot
dataset. A subset of the Tengwar script is shown
(an alphabet in the test set). Class prototypes are
indicated in black. Several misclassified characters
are highlighted in red along with arrows pointing
to the correct prototype.
Table 1: Few-shot classification accuracies on Omniglot. †Uses non-standard train/test splits.

Model                        | Dist.   | Fine Tune | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot
-----------------------------|---------|-----------|--------------|--------------|---------------|--------------
Matching Networks [32]       | Cosine  | N         | 98.1%        | 98.9%        | 93.8%         | 98.5%
Matching Networks [32]       | Cosine  | Y         | 97.9%        | 98.7%        | 93.5%         | 98.7%
Neural Statistician [7]      | -       | N         | 98.1%        | 99.5%        | 93.2%         | 98.1%
MAML [9]†                    | -       | N         | 98.7%        | 99.9%        | 95.8%         | 98.9%
Prototypical Networks (Ours) | Euclid. | N         | 98.8%        | 99.7%        | 96.0%         | 98.9%
Table 2: Few-shot classification accuracies on miniImageNet. All accuracy results are averaged over
600 test episodes and are reported with 95% confidence intervals. †Results reported by [24].

Model                        | Dist.   | Fine Tune | 5-way 1-shot  | 5-way 5-shot
-----------------------------|---------|-----------|---------------|--------------
Baseline Nearest Neighbors†  | Cosine  | N         | 28.86 ± 0.54% | 49.79 ± 0.79%
Matching Networks [32]†      | Cosine  | N         | 43.40 ± 0.78% | 51.09 ± 0.71%
Matching Networks FCE [32]†  | Cosine  | N         | 43.56 ± 0.84% | 55.31 ± 0.73%
Meta-Learner LSTM [24]†      | -       | N         | 43.44 ± 0.77% | 60.60 ± 0.71%
MAML [9]                     | -       | N         | 48.70 ± 1.84% | 63.15 ± 0.91%
Prototypical Networks (Ours) | Euclid. | N         | 49.42 ± 0.78% | 68.20 ± 0.66%
3.2 miniImageNet Few-shot Classification
The miniImageNet dataset, originally proposed by Vinyals et al. [32], is derived from the larger
ILSVRC-12 dataset [28]. The splits used by Vinyals et al. [32] consist of 60,000 color images of size
84 × 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits
introduced by Ravi and Larochelle [24] in order to directly compare with state-of-the-art algorithms
for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16
validation, and 20 test classes. We follow their procedure by training on the 64 training classes and
using the 16 validation classes for monitoring generalization performance only.
We use the same four-block embedding architecture as in our Omniglot experiments, though here
it results in a 1,600-dimensional output space due to the increased size of the images. We also
use the same learning rate schedule as in our Omniglot experiments and train until validation loss
stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for
5-shot classification. We match train shot to test shot and each class contains 15 query points per
episode. We compare to the baselines as reported by Ravi and Larochelle [24], which include a simple
nearest neighbor approach on top of features learned by a classification network on the 64 training
classes. The other baselines are two non-fine-tuned variants of Matching Networks (both ordinary and
FCE) and the Meta-Learner LSTM. We compare in the non-fine-tuned setting because the fine-tuning
procedure as proposed by Vinyals et al. [32] is not fully described. As can be seen in Table 2, Prototypical
Networks achieves state-of-the-art by a wide margin on 5-shot accuracy.
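The 1,600-dimensional figure follows from the same shape arithmetic as on Omniglot (again assuming the 3 × 3 convolutions are 'same'-padded): four pooling halvings take 84 down to 5, over 64 channels.

```python
def embedding_dim(side, blocks=4, channels=64):
    # 3x3 'same'-padded conv keeps the side; 2x2 max-pool halves it.
    for _ in range(blocks):
        side //= 2
    return side * side * channels

dim = embedding_dim(84)  # 84 -> 42 -> 21 -> 10 -> 5, so 5*5*64 = 1600
```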
We conducted further analysis, to determine the effect of distance metric and the number of training
classes per episode on the performance of Prototypical Networks and Matching Networks. To make
the methods comparable, we use our own implementation of Matching Networks that utilizes the
same embedding architecture as our Prototypical Networks. In Figure 3 we compare cosine vs.
Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with
15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way
and conjecture that the increased difficulty of 20-way classification helps the network to generalize
better, because it forces the model to make more fine-grained decisions in the embedding space. Also,
using Euclidean distance improves performance substantially over cosine distance. This effect is even
more pronounced for Prototypical Networks, in which computing the class prototype as the mean of
embedded support points is more naturally suited to Euclidean distances since cosine distance is not
a Bregman divergence.
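The Bregman-divergence argument can be checked numerically: for squared Euclidean distance, the class mean minimizes the total divergence to the embedded support points, so no other candidate center does better. The random toy points below are arbitrary.

```python
import random

def total_sq_dist(center, points):
    """Sum of squared Euclidean distances from `center` to `points`."""
    return sum(sum((c - p) ** 2 for c, p in zip(center, pt)) for pt in points)

random.seed(1)
points = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
mean = [sum(col) / len(points) for col in zip(*points)]
mean_cost = total_sq_dist(mean, points)

# 200 random candidate centers; none should beat the mean.
best_other = min(
    total_sq_dist([random.uniform(-1, 1) for _ in range(3)], points)
    for _ in range(200)
)
```

This optimality of the mean does not hold for cosine distance, which is why the prototype-as-mean construction pairs naturally with squared Euclidean distance.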
[Figure 3: two grouped bar charts ("1-shot Accuracy (5-way)" and "5-shot Accuracy (5-way)", y-axis from 20% to 80%) comparing Matching Nets and Proto. Nets across the training configurations 5-way/20-way × Cosine/Euclid.]
Figure 3: Comparison showing the effect of distance metric and number of classes per training
episode on 5-way classification accuracy for both Matching Networks and Prototypical Networks
on miniImageNet. The x-axis indicates configuration of the training episodes (way, distance, and
shot), and the y-axis indicates 5-way test accuracy for the corresponding shot. Error bars indicate
95% confidence intervals as computed over 600 test episodes. Note that Matching Networks and
Prototypical Networks are identical in the 1-shot case.
Table 3: Zero-shot classification accuracies on CUB-200.

Model                        | Image Features | 50-way 0-shot Acc.
-----------------------------|----------------|-------------------
ALE [1]                      | Fisher         | 26.9%
SJE [2]                      | AlexNet        | 40.3%
Sample Clustering [19]       | AlexNet        | 44.3%
SJE [2]                      | GoogLeNet      | 50.1%
DS-SJE [25]                  | GoogLeNet      | 50.4%
DA-SJE [25]                  | GoogLeNet      | 50.9%
Synthesized Classifiers [6]  | GoogLeNet      | 54.7%
Prototypical Networks (Ours) | GoogLeNet      | 54.8%
Zhang and Saligrama [36]     | VGG-19         | 55.3% ± 0.8
3.3 CUB Zero-shot Classification
In order to assess the suitability of our approach for zero-shot learning, we also run experiments on
the Caltech-UCSD Birds (CUB) 200-2011 dataset [34]. The CUB dataset contains 11,788 images of
200 bird species. We closely follow the procedure of Reed et al. [25] in preparing the data. We use
their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet [31] to middle, upper left, upper right, lower
left, and lower right crops of the original and horizontally-flipped image. At test time we use only
the middle crop of the original image. For class meta-data we use the 312-dimensional continuous
attribute vectors provided with the CUB dataset. These attributes encode various characteristics of
the bird species such as their color, shape, and feather patterns.
We learned a simple linear mapping on top of both the 1024-dimensional image features and the
312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we
found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length,
since the attribute vectors come from a different domain than the images. Training episodes were
constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD
with Adam at a fixed learning rate of 10⁻⁴ and weight decay of 10⁻⁵. Early stopping on validation
loss was used to determine the optimal number of epochs for retraining on the training plus validation
set.
Table 3 shows that we achieve state-of-the-art results when compared to methods utilizing attributes
as class meta-data. We compare our method to variety of zero-shot learning methods, including other
embedding approaches such as ALE [1], SJE [2], and DS-SJE/DA-SJE [25]. We also compare to a
recent clustering approach [19] which trains an SVM on a learned feature space obtained by
fine-tuning AlexNet [16]. (Features downloaded from https://github.com/reedscot/cvpr2016.)
The Synthesized Classifiers approach of [6] is a manifold learning technique
that aligns the class meta-data space with the visual model space, and the method of Zhang and
Saligrama [36] is a structured prediction approach trained on top of VGG-19 features [30]. Since
Zhang and Saligrama [36] is a randomized method, we include their reported error bars in Table 3.
Our Prototypical Networks outperform Synthesized Classifiers and are within error bars of Zhang and
Saligrama [36], while being a much simpler approach than either.
We also ran an additional set of zero-shot experiments with stronger class meta-data. We extracted
1,024-dimensional meta-data vectors for each CUB-200 class using the pretrained Char CNN-RNN
model of [25], then trained zero-shot Prototypical Networks using the same procedure described
above except we used a 512-dimensional output embedding, as chosen via validation accuracy. We
obtained test accuracy of 58.3%, compared to the 54.0% accuracy obtained by DS-SJE [25] with
a Char CNN-RNN model. Moreover, our result exceeds the 56.8% accuracy attained by DS-SJE
with even stronger Word CNN-RNN class-metadata representations. Taken together, these zero-shot
classification results demonstrate that our approach is general enough to be applied even when the
data points (images) are from a different domain relative to the classes (attributes).
4 Related Work
The literature on metric learning is vast [17, 5]; we summarize here the work most relevant to
our proposed method. Neighborhood Components Analysis (NCA) [10] learns a Mahalanobis
distance to maximize K-nearest-neighbor's (KNN) leave-one-out accuracy in the transformed space.
Salakhutdinov and Hinton [29] extend NCA by using a neural network to perform the transformation.
Large margin nearest neighbor (LMNN) classification [33] also attempts to optimize KNN accuracy
but does so using a hinge loss that encourages the local neighborhood of a point to contain other
points with the same label. The DNet-KNN [23] is another margin-based method that improves
upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear
transformation. Of these, our method is most similar to the non-linear extension of NCA [29] because
we use a neural network to perform the embedding and we optimize a softmax based on Euclidean
distances in the transformed space, as opposed to a margin loss. A key distinction between our
approach and non-linear NCA is that we form a softmax directly over classes, rather than individual
points, computed from distances to each class's prototype representation. This allows each class to
have a concise representation independent of the number of data points and obviates the need to store
the entire support set to make predictions.
Our approach is also similar to the nearest class mean approach [21], where each class is represented
by the mean of its examples. This approach was developed to rapidly incorporate new classes into
a classifier without retraining, however it relies on a linear embedding and was designed to handle
the case where the novel classes come with a large number of examples. In contrast, our approach
utilizes neural networks to non-linearly embed points and we couple this with episodic training in
order to handle the few-shot scenario. Mensink et al. [21] attempt to extend their approach to also
perform non-linear classification, but they do so by allowing classes to have multiple prototypes.
They find these prototypes in a pre-processing step by using k-means on the input space and then
perform a multi-modal variant of their linear embedding. Prototypical Networks, on the other hand,
learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a
non-linear classifier that still only requires one prototype per class. In addition, our approach naturally
generalizes to other distance functions, particularly Bregman divergences.
The center loss proposed by Wen et al. [35] for face recognition is similar to ours but has two main
differences. First, they learn the centers for each class as parameters of the model whereas we
compute prototypes as a function of the labeled examples within each episode. Second, they combine
the center loss with a softmax loss in order to prevent representations collapsing to zero, whereas we
construct a softmax loss from our prototypes which naturally prevents such collapse. Moreover, our
approach is designed for the few-shot scenario rather than face recognition.
A relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle
[24]. The key insight here is that LSTM dynamics and gradient descent can be written in effectively
the same way. An LSTM can then be trained to itself train a model from a given episode, with the
performance goal of generalizing well on the query points. MAML [9] is another meta-learning
approach to few-shot learning. It seeks to learn a representation that is easily fit to new data with few
steps of gradient descent. Matching Networks and Prototypical Networks can also be seen as forms
of meta-learning, in the sense that they produce simple classifiers dynamically from new training
episodes; however the core embeddings they rely on are fixed after training. The FCE extension to
Matching Networks involves a secondary embedding that depends on the support set. However, in
the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well,
without the need to learn a custom embedding for each episode.
Prototypical Networks are also related to the Neural Statistician [7] from the generative modeling
literature, which extends the variational autoencoder [14, 26] to learn generative models of datasets
rather than individual points. One component of the Neural Statistician is the "statistic network"
which summarizes a set of data points into a statistic vector. It does this by encoding each point within
a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate
posterior over the statistic vector. Edwards and Storkey [7] test their model for one-shot classification
on the Omniglot dataset by considering each character to be a separate dataset and making predictions
based on the class whose approximate posterior over the statistic vector has minimal KL-divergence
from the posterior inferred by the test point. Like the Neural Statistician, we also produce a summary
statistic for each class. However, ours is a discriminative model, as befits our discriminative task of
few-shot classification.
With respect to zero-shot learning, the use of embedded meta-data in Prototypical Networks resembles
the method of [3] in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE
approach of [25] also learns deep multimodal embedding functions for images and class meta-data.
Unlike ours, they learn using an empirical risk loss. Neither [3] nor [25] uses episodic training, which
we found helps to speed up training and regularize the model.
5 Conclusion
We have proposed a simple method called Prototypical Networks for few-shot learning based on the
idea that we can represent each class by the mean of its examples in a representation space learned
by a neural network. We train these networks to specifically perform well in the few-shot setting by
using episodic training. The approach is far simpler and more efficient than recent meta-learning
approaches, and produces state-of-the-art results even without sophisticated extensions developed for
Matching Networks (although these can be applied to Prototypical Networks as well). We show how
performance can be greatly improved by carefully considering the chosen distance metric, and by
modifying the episodic learning procedure. We further demonstrate how to generalize Prototypical
Networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A
natural direction for future work is to utilize Bregman divergences other than squared Euclidean
distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted
preliminary explorations of this, including learning a variance per dimension for each class. This did
not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its
own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness
of Prototypical Networks makes it a promising approach for few-shot learning.
Acknowledgements
We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals
for helpful discussions. This work was supported by the Samsung GRP project and the Canadian
Institute for Advanced Research.
References
[1] Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for attribute-based classification. In IEEE Computer Vision and Pattern Recognition, pages 819–826, 2013.
[2] Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In IEEE Computer Vision and Pattern Recognition, 2015.
[3] Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In International Conference on Computer Vision, pages 4247–4255, 2015.
[4] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6(Oct):1705–1749, 2005.
[5] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
[6] Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha. Synthesized classifiers for zero-shot learning. In IEEE Computer Vision and Pattern Recognition, pages 5327–5336, 2016.
[7] Harrison Edwards and Amos Storkey. Towards a neural statistician. International Conference on Learning Representations, 2017.
[8] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In International Conference on Computer Vision, pages 2584–2591, 2013.
[9] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. International Conference on Machine Learning, 2017.
[10] Jacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pages 513–520, 2004.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[14] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[15] Gregory Koch. Siamese neural networks for one-shot image recognition. Master's thesis, University of Toronto, 2015.
[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[17] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012.
[18] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.
[19] Renjie Liao, Alexander Schwing, Richard Zemel, and Raquel Urtasun. Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016.
[20] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
[21] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624–2637, 2013.
[22] Erik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In IEEE Computer Vision and Pattern Recognition, volume 1, pages 464–471, 2000.
[23] Renqiang Min, David A Stanley, Zineng Yuan, Anthony Bonner, and Zhaolei Zhang. A deep non-linear feature mapping for large-margin knn classification. In IEEE International Conference on Data Mining, pages 357–366, 2009.
[24] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations, 2017.
[25] Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of fine-grained visual descriptions. In IEEE Computer Vision and Pattern Recognition, 2016.
[26] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[27] Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. International Conference on Learning Representations, 2016.
[28] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[29] Ruslan Salakhutdinov and Geoffrey E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, pages 412–419, 2007.
[30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[31] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Computer Vision and Pattern Recognition, pages 1–9, 2015.
[32] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
[33] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473–1480, 2005.
[34] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[35] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision, pages 499–515. Springer, 2016.
[36] Ziming Zhang and Venkatesh Saligrama. Zero-shot recognition via structured prediction. In European Conference on Computer Vision, pages 533–548. Springer, 2016.
Triple Generative Adversarial Nets
Chongxuan Li, Kun Xu, Jun Zhu?, Bo Zhang
Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab of Intell. Tech. & Sys.,
Center for Bio-Inspired Computing Research, Tsinghua University, Beijing, 100084, China
{licx14, xu-k16}@mails.tsinghua.edu.cn, {dcszj, dcszb}@mail.tsinghua.edu.cn
Abstract
Generative Adversarial Nets (GANs) have shown promise in image generation
and semi-supervised learning (SSL). However, existing GANs in SSL have two
problems: (1) the generator and the discriminator (i.e. the classifier) may not
be optimal at the same time; and (2) the generator cannot control the semantics
of the generated samples. The problems essentially arise from the two-player
formulation, where a single discriminator shares incompatible roles of identifying
fake samples and predicting labels and it only estimates the data without considering
the labels. To address the problems, we present triple generative adversarial
net (Triple-GAN), which consists of three players?a generator, a discriminator
and a classifier. The generator and the classifier characterize the conditional
distributions between images and labels, and the discriminator solely focuses on
identifying fake image-label pairs. We design compatible utilities to ensure that
the distributions characterized by the classifier and the generator both converge to
the data distribution. Our results on various datasets demonstrate that Triple-GAN
as a unified model can simultaneously (1) achieve the state-of-the-art classification
results among deep generative models, and (2) disentangle the classes and styles
of the input and transfer smoothly in the data space via interpolation in the latent
space class-conditionally.
1 Introduction
Deep generative models (DGMs) can capture the underlying distributions of the data and synthesize
new samples. Recently, significant progress has been made on generating realistic images based on
Generative Adversarial Nets (GANs) [7, 3, 22]. GAN is formulated as a two-player game, where the
generator G takes a random noise z as input and produces a sample G(z) in the data space while the
discriminator D identifies whether a certain sample comes from the true data distribution p(x) or the
generator. Both G and D are parameterized as deep neural networks and the training procedure is to
solve a minimax problem:
$$\min_G \max_D U(D, G) = \mathbb{E}_{x\sim p(x)}[\log(D(x))] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))],$$
where pz(z) is a simple distribution (e.g., uniform or normal) and U(·) denotes the utilities. Given a
generator and the defined distribution pg, the optimal discriminator is D(x) = p(x)/(pg(x) + p(x))
in the nonparametric setting, and the global equilibrium of this game is achieved if and only if
pg (x) = p(x) [7], which is desired in terms of image generation.
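This nonparametric optimum can be sanity-checked numerically on a toy discrete sample space. The distributions and helper names below (`value`, `D_star`) are made up for illustration and are not from the paper:

```python
import math

# Toy discrete sample space; p and pg are hypothetical distributions
# chosen only to illustrate the optimal-discriminator formula.
p  = {0: 0.5, 1: 0.3, 2: 0.2}   # true data distribution p(x)
pg = {0: 0.2, 1: 0.3, 2: 0.5}   # generator distribution pg(x)

def value(D):
    """U(D, G) = E_{x~p}[log D(x)] + E_{x~pg}[log(1 - D(x))]."""
    return (sum(p[x] * math.log(D[x]) for x in p) +
            sum(pg[x] * math.log(1.0 - D[x]) for x in pg))

# Optimal discriminator from the nonparametric analysis: p / (p + pg).
D_star = {x: p[x] / (p[x] + pg[x]) for x in p}
```

Perturbing `D_star` in either direction can only decrease `value`, since each pointwise term p log D + pg log(1 − D) is strictly concave with its maximum at p/(p + pg).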
GANs and DGMs in general have also proven effective in semi-supervised learning (SSL) [11],
while retaining the generative capability. Under the same two-player game framework, Cat-GAN [26]
generalizes GANs with a categorical discriminative network and an objective function that minimizes
the conditional entropy of the predictions given the real data while maximizes the conditional entropy
?
J. Zhu is the corresponding author.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: An illustration of Triple-GAN (best view in color). The utilities of D, C and G are colored
in blue, green and yellow respectively, with "R" denoting rejection, "A" denoting acceptance and
"CE" denoting the cross entropy loss for supervised learning. "A"s and "R"s are the adversarial losses
and "CE"s are unbiased regularizations that ensure the consistency between pg, pc and p, which are
the distributions defined by the generator, classifier and true data generating process, respectively.
of the predictions given the generated samples. Odena [20] and Salimans et al. [25] augment the
categorical discriminator with one more class, corresponding to the fake data generated by the
generator. There are two main problems in existing GANs for SSL: (1) the generator and the
discriminator (i.e. the classifier) may not be optimal at the same time [25]; and (2) the generator
cannot control the semantics of the generated samples.
For the first problem, as an instance, Salimans et al. [25] propose two alternative training objectives
that work well for either classification or image generation in SSL, but not both. The objective of
feature matching works well in classification but fails to generate indistinguishable samples (See
Sec.5.2 for examples), while the other objective of minibatch discrimination is good at realistic image
generation but cannot predict labels accurately. The phenomena are not analyzed deeply in [25] and
here we argue that they essentially arise from the two-player formulation, where a single discriminator
has to play two incompatible roles: identifying fake samples and predicting labels. Specifically,
assume that G is optimal, i.e. p(x) = pg(x), and consider a sample x ∼ pg(x). On one hand, as a
discriminator, the optimal D should identify x as a fake sample with non-zero probability (See [7] for
the proof). On the other hand, as a classifier, the optimal D should always predict the correct class
of x confidently since x ? p(x). It conflicts as D has two incompatible convergence points, which
indicates that G and D may not be optimal at the same time. Moreover, the issue remains even given
imperfect G, as long as pg(x) and p(x) overlap, as in most real cases. Given a sample from
the overlapped area, the two roles of D still compete by treating the sample differently, leading to
a poor classifier2 . Namely, the learning capacity of existing two-player models is restricted, which
should be addressed to advance current SSL results.
For the second problem, disentangling meaningful physical factors like the object category from the
latent representations with limited supervision is of general interest [30, 2]. However, to our best
knowledge, none of the existing GANs can learn the disentangled representations in SSL, though
some work [22, 5, 21] can learn such representations given full labels. Again, we believe that the
problem is caused by their two-player formulation. Specifically, the discriminators in [26, 25] take
a single data instead of a data-label pair as input and the label information is totally ignored when
justifying whether a sample is real or fake. Therefore, the generators will not receive any learning
signal regarding the label information from the discriminators and hence such models cannot control
the semantics of the generated samples, which is not satisfactory.
To address these problems, we present Triple-GAN, a flexible game-theoretical framework for both
classification and class-conditional image generation in SSL, where we have a partially labeled
dataset. We introduce two conditional networks: a classifier and a generator to generate pseudo labels
given real data and pseudo data given real labels, respectively. To jointly justify the quality of the
samples from the conditional networks, we define a single discriminator network which has the sole
role of distinguishing whether a data-label pair is from the real labeled dataset or not. The resulting
model is called Triple-GAN because not only are there three networks, but we consider three joint
distributions, i.e. the true data-label distribution and the distributions defined by the conditional
networks (See Figure 1 for the illustration of Triple-GAN). Directly motivated by the desirable
equilibrium that both the classifier and the conditional generator are optimal, we carefully design
(Footnote 2: The results of the minibatch discrimination approach in [25] well support our analysis.)
compatible utilities including adversarial losses and unbiased regularizations (See Sec. 3), which lead
to an effective solution to the challenging SSL task, justified both in theory and practice.
In particular, theoretically, instead of competing as stated in the first problem, a good classifier will
result in a good generator and vice versa in Triple-GAN (See Sec. 3.2 for the proof). Furthermore, the
discriminator can access the label information of the unlabeled data from the classifier and then force
the generator to generate correct image-label pairs, which addresses the second problem. Empirically,
we evaluate our model on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets.
The results (See Sec. 5) demonstrate that Triple-GAN can simultaneously learn a good classifier and
a conditional generator, which agrees with our motivation and theoretical results.
Overall, our main contributions are two folded: (1) we analyze the problems in existing SSL
GANs [26, 25] and propose a novel game-theoretical Triple-GAN framework to address them with
carefully designed compatible objectives; and (2) we show that on the three datasets with incomplete
labels, Triple-GAN can advance the state-of-the-art classification results of DGMs substantially and,
at the same time, disentangle classes and styles and perform class-conditional interpolation.
2 Related Work
Recently, various approaches have been developed to learn directed DGMs, including Variational
Autoencoders (VAEs) [10, 24], Generative Moment Matching Networks (GMMNs) [16, 6] and
Generative Adversarial Nets (GANs) [7]. These criteria are systematically compared in [28].
One primal goal of DGMs is to generate realistic samples, for which GANs have proven effective.
Specifically, LAP-GAN [3] leverages a series of GANs to upscale the generated samples to high
resolution images through the Laplacian pyramid framework [1]. DCGAN [22] adopts (fractionally)
strided convolution layers and batch normalization [8] in GANs and generates realistic natural images.
Recent work has introduced inference networks in GANs. For instance, InfoGAN [2] learns explainable latent codes from unlabeled data by regularizing the original GANs via variational mutual
information maximization. In ALI [5, 4], the inference network approximates the posterior distribution of latent variables given true data in an unsupervised manner. Triple-GAN also has an inference
network (classifier) as in ALI but there exist two important differences in the global equilibria and
utilities between them: (1) Triple-GAN matches both the distributions defined by the generator
and classifier to true data distribution while ALI only ensures that the distributions defined by the
generator and inference network to be the same; (2) the discriminator will reject the samples from
the classifier in Triple-GAN while the discriminator will accept the samples from the inference
network in ALI, which leads to different update rules for the discriminator and inference network.
These differences naturally arise because Triple-GAN is proposed to solve the existing problems
in SSL GANs as stated in the introduction. Indeed, ALI [5] uses the same approach as [25] to deal
with partially labeled data and hence it still suffers from the problems. In addition, Triple-GAN
outperforms ALI significantly in the semi-supervised classification task (See comparison in Table. 1).
To handle partially labeled data, the conditional VAE [11] treats the missing labels as latent variables
and infers them for unlabeled data. ADGM [17] introduces auxiliary variables to build a more
expressive variational distribution and improve the predictive performance. The Ladder Network [23]
employs lateral connections between a variation of denoising autoencoders and obtains excellent SSL
results. Cat-GAN [26] generalizes GANs with a categorical discriminator and an objective function.
Salimans et al. [25] propose empirical techniques to stabilize the training of GANs and improve the
performance on SSL and image generation under incompatible learning criteria. Triple-GAN differs
significantly from these methods, as stated in the introduction.
3 Method
We consider learning DGMs in the semi-supervised setting,3 where we have a partially labeled dataset
with x denoting the input data and y denoting the output label. The goal is to predict the labels y
for unlabeled data as well as to generate new samples x conditioned on y. This is different from the
unsupervised setting for pure generation, where the only goal is to sample data x from a generator
to fool a discriminator; thus a two-player game is sufficient to describe the process as in GANs.
(Footnote 3: Supervised learning is an extreme case, where the training set is fully labeled.)
In our setting, as the label information y is incomplete (thus uncertain), our density model should
characterize the uncertainty of both x and y, therefore a joint distribution p(x, y) of input-label pairs.
A straightforward application of the two-player GAN is infeasible because of the missing values on
y. Unlike the previous work [26, 25], which is restricted to the two-player framework and can lead
to incompatible objectives, we build our game-theoretic objective based on the insight that the joint
distribution can be factorized in two ways, namely, p(x, y) = p(x)p(y|x) and p(x, y) = p(y)p(x|y),
and that the conditional distributions p(y|x) and p(x|y) are of interest for classification and class-conditional generation, respectively. To jointly estimate these conditional distributions, which are
characterized by a classifier network and a class-conditional generator network, we define a single
discriminator network which has the sole role of distinguishing whether a sample is from the true data
distribution or the models. Hence, we naturally extend GANs to Triple-GAN, a three-player game to
characterize the process of classification and class-conditional generation in SSL, as detailed below.
3.1 A Game with Three Players
Triple-GAN consists of three components: (1) a classifier C that (approximately) characterizes the
conditional distribution pc(y|x) ≈ p(y|x); (2) a class-conditional generator G that (approximately)
characterizes the conditional distribution in the other direction pg(x|y) ≈ p(x|y); and (3) a discriminator D that distinguishes whether a pair of data (x, y) comes from the true distribution p(x, y).
All the components are parameterized as neural networks. Our desired equilibrium is that the joint
distributions defined by the classifier and the generator both converge to the true data distribution. To
this end, we design a game with compatible utilities for the three players as follows.
We make the mild assumption that the samples from both p(x) and p(y) can be easily obtained.4
In the game, after a sample x is drawn from p(x), C produces a pseudo label y given x following
the conditional distribution pc (y|x). Hence, the pseudo input-label pair is a sample from the joint
distribution pc (x, y) = p(x)pc (y|x). Similarly, a pseudo input-label pair can be sampled from
G by first drawing y ∼ p(y) and then drawing x|y ∼ pg(x|y); hence from the joint distribution
pg (x, y) = p(y)pg (x|y). For pg (x|y), we assume that x is transformed by the latent style variables z
given the label y, namely, x = G(y, z), z ∼ pz(z), where pz(z) is a simple distribution (e.g., uniform
or standard normal). Then, the pseudo input-label pairs (x, y) generated by both C and G are sent to
the single discriminator D for judgement. D can also access the input-label pairs from the true data
distribution as positive samples. We refer the utilities in the process as adversarial losses, which can
be formulated as a minimax game:
$$\min_{C,G} \max_D U(C, G, D) = \mathbb{E}_{(x,y)\sim p(x,y)}[\log D(x, y)] + \alpha \mathbb{E}_{(x,y)\sim p_c(x,y)}[\log(1 - D(x, y))]$$
$$+ (1 - \alpha)\mathbb{E}_{(x,y)\sim p_g(x,y)}[\log(1 - D(G(y, z), y))], \quad (1)$$
where α ∈ (0, 1) is a constant that controls the relative importance of generation and classification
and we focus on the balance case by fixing it as 1/2 throughout the paper.
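To make the three-player objective concrete, the following sketch evaluates Eqn. (1) on a finite sample space. The toy joint distribution and the helper `utility` are our own illustration, not the paper's code; it checks that when pc = pg = p the optimal discriminator is 1/2 everywhere and the utility equals −2 log 2, mirroring the −log 4 equilibrium value of two-player GANs:

```python
import math

# Hypothetical toy joint distribution over (x, y) pairs, x in {0, 1}, y in {0, 1}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
alpha = 0.5  # the balance constant fixed to 1/2 in the paper

def utility(p_c, p_g, D):
    """U(C, G, D) of Eqn. (1) evaluated exactly on a finite sample space."""
    term_data = sum(p[s] * math.log(D[s]) for s in p)
    term_c = alpha * sum(p_c[s] * math.log(1.0 - D[s]) for s in p)
    term_g = (1.0 - alpha) * sum(p_g[s] * math.log(1.0 - D[s]) for s in p)
    return term_data + term_c + term_g

# At the desired equilibrium p_c = p_g = p, the mixture p_alpha equals p,
# so the optimal discriminator is 1/2 on every (x, y) pair.
D_half = {s: 0.5 for s in p}
u_eq = utility(p, p, D_half)
```

Any constant discriminator other than 1/2 yields a strictly lower utility at this equilibrium, which is a finite-space analogue of Lemma 3.1.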
The game defined in Eqn. (1) achieves its equilibrium if and only if p(x, y) = (1 − α)pg(x, y) + αpc(x, y) (See details in Sec. 3.2). The equilibrium indicates that if one of C and G tends to the
data distribution, the other will also go towards the data distribution, which addresses the competing
problem. However, unfortunately, it cannot guarantee that p(x, y) = pg (x, y) = pc (x, y) is the unique
global optimum, which is not desirable. To address this problem, we introduce the standard supervised
loss (i.e., cross-entropy loss) to C, RL = E(x,y)∼p(x,y)[− log pc(y|x)], which is equivalent to the
KL-divergence between pc (x, y) and p(x, y). Consequently, we define the game as:
$$\min_{C,G} \max_D \tilde{U}(C, G, D) = \mathbb{E}_{(x,y)\sim p(x,y)}[\log D(x, y)] + \alpha \mathbb{E}_{(x,y)\sim p_c(x,y)}[\log(1 - D(x, y))]$$
$$+ (1 - \alpha)\mathbb{E}_{(x,y)\sim p_g(x,y)}[\log(1 - D(G(y, z), y))] + R_L. \quad (2)$$
It will be proven that the game with utilities Ũ has the unique global optimum for C and G.
3.2 Theoretical Analysis and Pseudo Discriminative Loss
(Footnote 4: In semi-supervised learning, p(x) is the empirical distribution of inputs and p(y) is assumed to be the same as the distribution of labels on labeled data, which is uniform in our experiment.)
Algorithm 1 Minibatch stochastic gradient descent training of Triple-GAN in SSL.
for number of training iterations do
• Sample a batch of pairs (xg, yg) ∼ pg(x, y) of size mg, a batch of pairs (xc, yc) ∼ pc(x, y) of size mc and a batch of labeled data (xd, yd) ∼ p(x, y) of size md.
• Update D by ascending along its stochastic gradient:
$$\nabla_{\theta_d} \Big[ \frac{1}{m_d} \sum_{(x_d,y_d)} \log D(x_d, y_d) + \frac{\alpha}{m_c} \sum_{(x_c,y_c)} \log(1 - D(x_c, y_c)) + \frac{1-\alpha}{m_g} \sum_{(x_g,y_g)} \log(1 - D(x_g, y_g)) \Big].$$
• Compute the unbiased estimators R̃L and R̃P of RL and RP respectively.
• Update C by descending along its stochastic gradient:
$$\nabla_{\theta_c} \Big[ \frac{\alpha}{m_c} \sum_{(x_c,y_c)} p_c(y_c|x_c) \log(1 - D(x_c, y_c)) + \tilde{R}_L + \alpha_P \tilde{R}_P \Big].$$
• Update G by descending along its stochastic gradient:
$$\nabla_{\theta_g} \Big[ \frac{1}{m_g} \sum_{(x_g,y_g)} \log(1 - D(x_g, y_g)) \Big].$$
end for
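The control flow of Algorithm 1 can be sketched as follows. This is a structural sketch only, not the paper's implementation: the samplers and gradients are stubs (names such as `grad_d` are hypothetical), so it illustrates the sampling and ascent/descent order rather than real learning:

```python
import random

# Stand-ins for the three networks: plain parameter lists plus stub gradients.
random.seed(0)
theta_d, theta_c, theta_g = [0.0], [0.0], [0.0]
lr, alpha_p = 0.1, 0.03  # learning rate and pseudo-discriminative weight

def sample_batch(dist_name, size):
    # Placeholder sampler: real code would draw (x, y) pairs from
    # p_g, p_c, or the labeled data distribution respectively.
    return [(random.random(), random.randint(0, 9)) for _ in range(size)]

def grad_d(batch_d, batch_c, batch_g):  # ascent direction for D
    return [1.0]                         # stub gradient

def grad_c(batch_c):                     # descent direction for C
    return [1.0]                         # stub: adversarial term + R_L + alpha_p * R_P

def grad_g(batch_g):                     # descent direction for G
    return [1.0]                         # stub gradient

for _ in range(3):  # "for number of training iterations do"
    b_g = sample_batch("p_g", 4)
    b_c = sample_batch("p_c", 4)
    b_d = sample_batch("p_data", 4)
    theta_d = [t + lr * g for t, g in zip(theta_d, grad_d(b_d, b_c, b_g))]  # ascend
    theta_c = [t - lr * g for t, g in zip(theta_c, grad_c(b_c))]            # descend
    theta_g = [t - lr * g for t, g in zip(theta_g, grad_g(b_g))]            # descend
```

The key structural point is that D is updated by gradient ascent on the minimax utility while C and G are updated by descent, all three once per iteration.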
We now provide a formal theoretical analysis of Triple-GAN under nonparametric assumptions and
introduce the pseudo discriminative loss, which is an unbiased regularization motivated by the global
equilibrium. For clarity of the main text, we defer the proof details to Appendix A.
First, we can show that the optimal D balances between the true data distribution and the mixture
distribution defined by C and G, as summarized in Lemma 3.1.
Lemma 3.1 For any fixed C and G, the optimal D of the game defined by the utility function U(C, G, D) is:
$$D^*_{C,G}(x, y) = \frac{p(x, y)}{p(x, y) + p_\alpha(x, y)}, \quad (3)$$
where pα(x, y) := (1 − α)pg(x, y) + αpc(x, y) is a mixture distribution for α ∈ (0, 1).
Given D*C,G, we can omit D and reformulate the minimax game with value function U as: V(C, G) = maxD U(C, G, D), whose optimal point is summarized as in Lemma 3.2.
maxD U (C, G, D), whose optimal point is summarized as in Lemma 3.2.
Lemma 3.2 The global minimum of V (C, G) is achieved if and only if p(x, y) = p? (x, y).
We can further show that C and G can at least capture the marginal distributions of data, especially
for pg (x), even there may exist multiple global equilibria, as summarized in Corollary 3.2.1.
Corollary 3.2.1 Given p(x, y) = p? (x, y), the marginal distributions are the same for p, pc and pg ,
i.e. p(x) = pg (x) = pc (x) and p(y) = pg (y) = pc (y).
Given the above result that p(x, y) = p? (x, y), C and G do not compete as in the two-player based
formulation and it is easy to verify that p(x, y) = pc (x, y) = pg (x, y) is a global equilibrium
point. However, it may not be unique and we should minimize an additional objective to ensure the
uniqueness. In fact, this is true for the utility function Ũ(C, G, D) in problem (2), as stated below.
Theorem 3.3 The equilibrium of Ũ(C, G, D) is achieved if and only if p(x, y) = pg(x, y) = pc(x, y).
The conclusion essentially motivates our design of Triple-GAN, as we can ensure that both C and G
will converge to the true data distribution if the model has been trained to achieve the optimum.
We can further show another nice property of Ũ, which allows us to regularize our model for stable
and better convergence in practice without bias, as summarized below.
Corollary 3.3.1 Adding any divergence (e.g. the KL divergence) between any two of the joint distributions or the conditional distributions or the marginal distributions, to Ũ as the additional regularization to be minimized, will not change the global equilibrium of Ũ.
Because label information is extremely insufficient in SSL, we propose pseudo discriminative loss
RP = Epg[− log pc(y|x)], which optimizes C on the samples generated by G in the supervised
manner. Intuitively, a good G can provide meaningful labeled data beyond the training set as
extra side information for C, which will boost the predictive performance (See Sec. 5.1 for the
empirical evidence). Indeed, minimizing pseudo discriminative loss with respect to C is equivalent to
minimizing DKL (pg (x, y)||pc (x, y)) (See Appendix A for proof) and hence the global equilibrium
remains following Corollary 3.3.1. Also note that directly minimizing DKL (pg (x, y)||pc (x, y)) is
infeasible since its computation involves the unknown likelihood ratio pg (x, y)/pc (x, y). The pseudo
discriminative loss is weighted by a hyperparameter αP. See Algorithm 1 for the whole training
procedure, where θc, θd and θg are trainable parameters in C, D and G respectively.
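The claimed equivalence rests on the identity cross-entropy = entropy + KL: the entropy of pg is constant with respect to C, so minimizing RP over pc is the same as minimizing DKL(pg‖pc). A toy numerical check (the pg(y|x) values and helper names below are made up for illustration) shows this for a single fixed x:

```python
import math

# pg(y|x) for one fixed x, three classes (hypothetical numbers).
pg_y = [0.7, 0.2, 0.1]

def cross_entropy(q):
    """E_{y~pg(y|x)}[-log q(y)] -- the pseudo discriminative loss at this x."""
    return -sum(p * math.log(q_i) for p, q_i in zip(pg_y, q))

def kl(q):
    """KL(pg(.|x) || q)."""
    return sum(p * math.log(p / q_i) for p, q_i in zip(pg_y, q))

# Scan candidate classifiers pc(y|x) on a coarse simplex grid.
candidates = []
step = 0.05
n = int(round(1 / step))
for i in range(1, n):
    for j in range(1, n - i):
        q = (i * step, j * step, 1 - (i + j) * step)
        if q[2] > 0:
            candidates.append(q)
best = min(candidates, key=cross_entropy)
```

On the grid, the cross-entropy minimizer sits at pg itself, exactly where the KL divergence vanishes, and the two losses differ by the constant entropy of pg everywhere.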
4 Practical Techniques
In this section we introduce several practical techniques used in the implementation of Triple-GAN,
which may lead to a biased solution theoretically but work well for challenging SSL tasks empirically.
One crucial problem of SSL is the small size of the labeled data. In Triple-GAN, D may memorize
the empirical distribution of the labeled data, and reject other types of samples from the true data
distribution. Consequently, G may collapse to these modes. To this end, we generate pseudo labels
through C for some unlabeled data and use these pairs as positive samples of D. The cost is on
introducing some bias to the target distribution of D, which is a mixture of pc and p instead of the
pure p. However, this is acceptable as C converges quickly and pc and p are close (See results in
Sec.5).
Since properly leveraging the unlabeled data is key to success in SSL, it is necessary to regularize
C heuristically as in many existing methods [23, 26, 13, 15] to make more accurate predictions.
We consider two alternative losses on the unlabeled data. The confidence loss [26] minimizes
the conditional entropy of pc(y|x) and the cross entropy between p(y) and pc(y), weighted by
a hyperparameter αB, as RU = H(pc(y|x)) + αB Ep[− log pc(y)], which encourages C to make
predictions confidently and be balanced on the unlabeled data. The consistency loss [13] penalizes
the network if it predicts the same unlabeled data inconsistently given different noise ε, e.g., dropout
masks, as RU = Ex∼p(x) ‖pc(y|x, ε) − pc(y|x, ε′)‖², where ‖·‖² is the square of the l2-norm. We
use the confidence loss by default except on the CIFAR10 dataset (See details in Sec. 5).
Another consideration is to compute the gradients of Ex∼p(x),y∼pc(y|x)[log(1 − D(x, y))] with
respect to the parameters θc in C, which involves summation over the discrete random variable
y, i.e. the class label. On one hand, integrating out the class label is time consuming. On the
other hand, directly sampling one label to approximate the expectation via the Monte Carlo method
makes the feedback of the discriminator not differentiable with respect to θc. As the REINFORCE
algorithm [29] can deal with such cases with discrete variables, we use a variant of it for the end-to-end training of our classifier. The gradients in the original REINFORCE algorithm should be
Ex∼p(x) Ey∼pc(y|x)[∇θc log pc(y|x) log(1 − D(x, y))]. In our experiment, we find the best strategy
is to use the most probable y instead of sampling one to approximate the expectation over y. The bias is
small as the prediction of C is rather confident typically.
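The score-function (REINFORCE) identity underlying this estimator can be checked by exact enumeration on a two-class softmax. This is a toy sketch; the reward values standing in for log(1 − D(x, y)) are made up:

```python
import math

# Stand-in rewards f(y) for log(1 - D(x, y)); values are arbitrary.
f = {0: -0.3, 1: 1.7}

def probs(theta):
    """Two-class softmax with logits (theta, 0)."""
    z = [math.exp(theta), 1.0]
    s = sum(z)
    return [v / s for v in z]

def expected_f(theta):
    p = probs(theta)
    return sum(p[y] * f[y] for y in (0, 1))

def score_function_grad(theta):
    """E_y[(d/dtheta log p(y)) * f(y)], enumerated exactly over y."""
    p = probs(theta)
    # Softmax derivatives: d log p0/dtheta = 1 - p0, d log p1/dtheta = -p0.
    dlog = [1.0 - p[0], -p[0]]
    return sum(p[y] * dlog[y] * f[y] for y in (0, 1))
```

The score-function gradient matches a central finite difference of the expected reward, which is exactly the identity REINFORCE exploits for discrete y.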
5 Experiments
We now present results on the widely adopted MNIST [14], SVHN [19], and CIFAR10 [12] datasets.
MNIST consists of 50,000 training samples, 10,000 validation samples and 10,000 testing samples of
handwritten digits of size 28 × 28. SVHN consists of 73,257 training samples and 26,032 testing
samples and each is a colored image of size 32 × 32, containing a sequence of digits with various
backgrounds. CIFAR10 consists of colored images distributed across 10 general classes: airplane,
automobile, bird, cat, deer, dog, frog, horse, ship and truck. There are 50,000 training samples and
10,000 testing samples of size 32 × 32 in CIFAR10. We split 5,000 training data of SVHN and
Table 1: Error rates (%) on partially labeled MNIST, SVHN and CIFAR10 datasets, averaged by 10
runs. The results with * are trained with more than 500,000 extra unlabeled data on SVHN.

| Algorithm         | MNIST n = 100 | SVHN n = 1000  | CIFAR10 n = 4000 |
| M1+M2 [11]        | 3.33 (±0.14)  | 36.02 (±0.10)  |                  |
| VAT [18]          | 2.33          | 24.63          |                  |
| Ladder [23]       | 1.06 (±0.37)  |                | 20.40 (±0.47)    |
| Conv-Ladder [23]  | 0.89 (±0.50)  |                |                  |
| ADGM [17]         | 0.96 (±0.02)  | 22.86*         |                  |
| SDGM [17]         | 1.32 (±0.07)  | 16.61 (±0.24)* |                  |
| MMCVA [15]        | 1.24 (±0.54)  | 4.95 (±0.18)*  |                  |
| CatGAN [26]       | 1.39 (±0.28)  |                | 19.58 (±0.58)    |
| Improved-GAN [25] | 0.93 (±0.07)  | 8.11 (±1.3)    | 18.63 (±2.32)    |
| ALI [5]           |               | 7.3            | 18.3             |
| Triple-GAN (ours) | 0.91 (±0.58)  | 5.77 (±0.17)   | 16.99 (±0.36)    |
Table 2: Error rates (%) on MNIST with different number of labels, averaged by 10 runs.

| Algorithm         | n = 20        | n = 50       | n = 200      |
| Improved-GAN [25] | 16.77 (±4.52) | 2.21 (±1.36) | 0.90 (±0.04) |
| Triple-GAN (ours) | 4.81 (±4.95)  | 1.56 (±0.72) | 0.67 (±0.16) |
CIFAR10 for validation if needed. On CIFAR10, we follow [13] to perform ZCA for the input of C
but still generate and estimate the raw images using G and D.
We implement our method based on Theano [27] and here we briefly summarize our experimental
settings.5 Though we have an additional network, the generator and classifier of Triple-GAN have
comparable architectures to those of the baselines [26, 25] (See details in Appendix F). The pseudo
discriminative loss is not applied until the number of epochs reach a threshold that the generator could
generate meaningful data. We only search the threshold in {200, 300}, αP in {0.1, 0.03} and the
global learning rate in {0.0003, 0.001} based on the validation performance on each dataset. All of
the other hyperparameters including relative weights and parameters in Adam [9] are fixed according
to [25, 15] across all of the experiments. Further, in our experiments, we find that the training
techniques for the original two-player GANs [3, 25] are sufficient to stabilize the optimization of
Triple-GAN.
5.1 Classification
For fair comparison, all the results of the baselines are from the corresponding papers and we average
Triple-GAN over 10 runs with different random initialization and splits of the training data and report
the mean error rates with the standard deviations following [25].
Firstly, we compare our method with a large body of approaches in the widely used settings on MNIST,
SVHN and CIFAR10 datasets given 100, 1,000 and 4,000 labels6 , respectively. Table 1 summarizes
the quantitative results. On all of the three datasets, Triple-GAN achieves the state-of-the-art results
consistently and it substantially outperforms the strongest competitors (e.g., Improved-GAN) on more
challenging SVHN and CIFAR10 datasets, which demonstrate the benefit of compatible learning
objectives proposed in Triple-GAN. Note that for a fair comparison with previous GANs, we do not
leverage the extra unlabeled data on SVHN, while some baselines [17, 15] do.
Secondly, we evaluate our method with 20, 50 and 200 labeled samples on MNIST for a systematical
comparison with our main baseline Improved-GAN [25], as shown in Table 2. Triple-GAN consistently outperforms Improved-GAN by a substantial margin, which again demonstrates the benefit
of Triple-GAN. Besides, we can see that Triple-GAN achieves more significant improvement as the
number of labeled data decreases, suggesting the effectiveness of the pseudo discriminative loss.
Finally, we investigate the reasons for the outstanding performance of Triple-GAN. We train a single
C without G and D on SVHN as the baseline and get more than 10% error rate, which shows that G
is important for SSL even though C can leverage unlabeled data directly. On CIFAR10, the baseline
(Footnote 5: Our source code is available at https://github.com/zhenxuan00/triple-gan)
(Footnote 6: We use these amounts of labels as default settings throughout the paper if not specified.)
(a) Feature Matching
(b) Triple-GAN
(c) Automobile
(d) Horse
Figure 2: (a-b) Comparison between samples from Improved-GAN trained with feature matching
and Triple-GAN on SVHN. (c-d) Samples of Triple-GAN in specific classes on CIFAR10.
(a) SVHN data
(b) SVHN samples
(c) CIFAR10 data
(d) CIFAR10 samples
Figure 3: (a) and (c) are randomly selected labeled data. (b) and (d) are samples from Triple-GAN,
where each row shares the same label and each column shares the same latent variables.
(a) SVHN
(b) CIFAR10
Figure 4: Class-conditional latent space interpolation. We first sample two random vectors in the
latent space and interpolate linearly from one to another. Then, we map these vectors to the data
level given a fixed label for each class. Totally, 20 images are shown for each class. We select two
endpoints with clear semantics on CIFAR10 for better illustration.
(a simple version of the Π model [13]) achieves 17.7% error rate. The smaller improvement is reasonable
as CIFAR10 is more complex and hence G is not as good as in SVHN. In addition, we evaluate
Triple-GAN without the pseudo discriminative loss on SVHN and it achieves about 7.8% error rate,
which shows the advantages of compatible objectives (better than the 8.11% error rate of ImprovedGAN) and the importance of the pseudo discriminative loss (worse than the complete Triple-GAN by
2%). Furthermore, Triple-GAN has a comparable convergence speed with Improved-GAN [25], as
shown in Appendix E.
5.2 Generation
We demonstrate that Triple-GAN can learn good G and C simultaneously by generating samples in
various ways with the exact models used in Sec. 5.1. For fair comparison, the generative model and
the number of labels are the same to the previous method [25].
In Fig. 2 (a-b), we first compare the quality of images generated by Triple-GAN on SVHN and the
Improved-GAN with feature matching [25],7 which works well for semi-supervised classification.
We can see that Triple-GAN outperforms the baseline by generating fewer meaningless samples and
⁷ Though the Improved-GAN trained with minibatch discrimination [25] can generate good samples, it fails to predict labels accurately.
clearer digits. Further, the baseline generates the same strange sample four times, labeled with red rectangles in Fig. 2. The comparison on MNIST and CIFAR10 is presented in Appendix B. We also evaluate the samples on CIFAR10 quantitatively via the inception score following [25]. The value of Triple-GAN is 5.08 ± 0.09 while that of the Improved-GAN trained without minibatch discrimination [25] is 3.87 ± 0.03, which agrees with the visual comparison. We then illustrate images generated from two specific classes on CIFAR10 in Fig. 2 (c-d), with more in Appendix C.
In most cases, Triple-GAN is able to generate meaningful images with correct semantics.
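For reference, the inception score used above is exp(E_x[KL(p(y|x) ‖ p(y))]), where p(y) is the marginal class distribution over generated samples. The following is a minimal re-implementation on toy class-probability vectors; it illustrates the formula only, not the exact evaluation protocol of [25] (which uses a pretrained Inception network):

```python
import math

def inception_score(probs):
    """probs: list of per-sample class distributions p(y|x).
    Returns exp(mean_x KL(p(y|x) || p(y))), where p(y) is the marginal."""
    n, c = len(probs), len(probs[0])
    marginal = [sum(p[j] for p in probs) / n for j in range(c)]
    kl_mean = sum(
        sum(p[j] * math.log(p[j] / marginal[j]) for j in range(c) if p[j] > 0)
        for p in probs
    ) / n
    return math.exp(kl_mean)

# Confident and diverse predictions give a high score; uniform ones give 1.
sharp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
flat = [[1 / 3] * 3] * 3
print(inception_score(sharp), inception_score(flat))
```

Sharper, class-balanced predictions raise the score, which is why the score serves as a rough proxy for both sample quality and diversity.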
Further, we show the ability of Triple-GAN to disentangle classes and styles in Fig. 3. It can be
seen that Triple-GAN can generate realistic data in a specific class and the latent factors encode
meaningful physical factors like: scale, intensity, orientation, color and so on. Some GANs [22, 5, 21]
can generate data class-conditionally given full labels, while Triple-GAN can do a similar thing given
much less label information.
Finally, we demonstrate the generalization capability of our Triple-GAN on class-conditional latent
space interpolation as in Fig. 4. Triple-GAN can transit smoothly from one sample to another with
totally different visual factors without losing label semantics, which proves that Triple-GAN can
learn meaningful latent spaces class-conditionally instead of overfitting to the training data, especially
labeled data. See these results on MNIST in Appendix D.
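The interpolation procedure behind Fig. 4 is plain linear blending of two latent vectors under a fixed class label; a minimal sketch (the generator network itself is omitted, and the helper names are ours):

```python
def lerp(z1, z2, t):
    """Linear interpolation between two latent vectors at position t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(z1, z2)]

def interpolation_path(z1, z2, steps):
    """Evenly spaced points from z1 to z2, endpoints included."""
    return [lerp(z1, z2, i / (steps - 1)) for i in range(steps)]

z_start, z_end = [0.0, 1.0], [2.0, -1.0]
path = interpolation_path(z_start, z_end, steps=5)
# Each point on the path would be fed to G together with the same fixed label y,
# producing the rows of images shown in Fig. 4.
print(path[0], path[2], path[4])
```

If the generated images change smoothly along such a path while keeping the label's semantics, the latent space has been learned class-conditionally rather than memorized.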
Overall, these results confirm that Triple-GAN avoids the competition between C and G and can lead to a situation where both generation and classification are good in semi-supervised learning.
6 Conclusions
We present triple generative adversarial networks (Triple-GAN), a unified game-theoretical framework with three players: a generator, a discriminator and a classifier, to do semi-supervised learning with compatible utilities. With such utilities, Triple-GAN addresses two main problems of existing methods [26, 25]. Specifically, Triple-GAN ensures that both the classifier and the generator can achieve their own optima respectively from the perspective of game theory, and enables the generator to sample data in a specific class. Our empirical results on the MNIST, SVHN and CIFAR10 datasets demonstrate that, as a unified model, Triple-GAN can simultaneously achieve state-of-the-art classification results among deep generative models, disentangle styles and classes, and transfer smoothly on the data level via interpolation in the latent space.
Acknowledgments
The work is supported by the National NSF of China (Nos. 61620106010, 61621136008, 61332007),
the MIIT Grant of Int. Man. Comp. Stan (No. 2016ZXFB00001), the Youth Top-notch Talent Support
Program, Tsinghua Tiangong Institute for Intelligent Computing, the NVIDIA NVAIL Program and a
Project from Siemens.
References
[1] Peter Burt and Edward Adelson. The Laplacian pyramid as a compact image code. IEEE
Transactions on communications, 1983.
[2] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial
nets. In NIPS, 2016.
[3] Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using a
Laplacian pyramid of adversarial networks. In NIPS, 2015.
[4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier
Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint
arXiv:1606.00704, 2016.
[6] Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural
networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906,
2015.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[8] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[9] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[10] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114, 2013.
[11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semisupervised learning with deep generative models. In NIPS, 2014.
[12] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images.
Citeseer, 2009.
[13] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint
arXiv:1610.02242, 2016.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Chongxuan Li, Jun Zhu, and Bo Zhang. Max-margin deep generative models for (semi-)
supervised learning. arXiv preprint arXiv:1611.07119, 2016.
[16] Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In
ICML, 2015.
[17] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[18] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional
smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
[19] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng.
Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep
learning and unsupervised feature learning, 2011.
[20] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint
arXiv:1606.01583, 2016.
[21] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with
auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
[22] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with
deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[23] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semisupervised learning with ladder networks. In NIPS, 2015.
[24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation
and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[25] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training GANs. In NIPS, 2016.
[26] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[27] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
[28] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[29] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[30] Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In NIPS, 2015.
Efficient Sublinear-Regret Algorithms for Online
Sparse Linear Regression with Limited Observation
Shinji Ito
NEC Corporation
[email protected]
Hanna Sumita
National Institute of Informatics
[email protected]
Daisuke Hatano
National Institute of Informatics
[email protected]
Akihiro Yabe
NEC Corporation
[email protected]
Naonori Kakimura
Keio University
[email protected]
Takuro Fukunaga
JST, PRESTO
[email protected]
Ken-ichi Kawarabayashi
National Institute of Informatics
[email protected]
Abstract
Online sparse linear regression is the task of applying linear regression analysis
to examples arriving sequentially subject to a resource constraint that a limited
number of features of examples can be observed. Despite its importance in many
practical applications, it has been recently shown that there is no polynomialtime sublinear-regret algorithm unless NP?BPP, and only an exponential-time
sublinear-regret algorithm has been found. In this paper, we introduce mild assumptions to solve the problem. Under these assumptions, we present polynomialtime sublinear-regret algorithms for the online sparse linear regression. In addition, thorough experiments with publicly available data demonstrate that our algorithms outperform other known algorithms.
1 Introduction
In online regression, a learner receives examples one by one, and aims to make a good prediction
from the features of arriving examples, learning a model in the process. Online regression has
attracted attention recently in the research community in managing massive learning data. In real-world scenarios with resource constraints, however, it is desired to make a prediction with only a limited number of features per example. Such scenarios arise in the context of medical diagnosis of
a disease [3] and in generating a ranking of web pages in a search engine, in which it costs to obtain
features or only partial features are available in each round. In both these examples, predictions need
to be made sequentially because a patient or a search query arrives online.
To resolve the above issue of limited access to features, Kale [7] proposed online sparse regression.
In this problem, a learner makes a prediction for the labels of examples arriving sequentially over
a number of rounds. Each example has d features that can be potentially accessed by the learner.
However, in each round, the learner can acquire the values of at most k′ features out of the d features, where k′ is a parameter set in advance. The learner then makes a prediction for the label of the
example. After the prediction, the true label is revealed to the learner, and the learner suffers a
loss for making an incorrect prediction. The performance of the prediction is measured here by the
standard notion of regret, which is the difference between the total loss of the learner and the total
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Table 1: Computational complexity of online sparse linear regression.

    Assumptions                    Time complexity
    (1)   (2)   (a)   (b)
     X     X     -     -           Hard [5]
     X     -     X     -           Hard (Theorem 1)
     X     X     X     -           Polynomial time (Algorithms 1, 2)
     X     X     -     X           Polynomial time (Algorithm 3)
loss of the best predictor. In [7], the best predictor is defined as the best k-sparse linear predictor,
i.e., the label is defined as a linear combination of at most k features.
Online sparse regression is a natural online variant of sparse regression; however, its computational
complexity was not well known until recently, as Kale [7] raised a question of whether it is possible
to achieve sublinear regret in polynomial time for online sparse linear regression. Foster et al. [5]
answered the question by proving that no polynomial-time algorithm achieves sublinear regret unless
NP?BPP. Indeed, this hardness result holds even when observing ?(k log d) features per example.
On the positive side, they also proposed an exponential-time algorithm with sublinear regret, when
we can observe at least k + 2 features in each round. However, their algorithm
is not expected to
work efficiently in practice. In fact, the algorithm enumerates all the kd0 possibilities to determine
k 0 features in each round, which requires exponential time for any instance.
Our contributions. In this paper, we show that online sparse linear regression admits a
polynomial-time algorithm with sublinear regret, under mild practical assumptions. First, we assume that the features of examples arriving online are determined by a hidden distribution (Assumption (1)), and the labels of the examples are determined by a weighted average of k features, where
the weights are fixed through all rounds (Assumption (2)). These are natural assumptions in the
online linear regression. However, Foster et al. [5] showed that no polynomial-time algorithm can achieve sublinear regret unless NP ⊆ BPP even under these two assumptions.¹
Owing to this hardness, we introduce two types of conditions on the distribution of features, both
of which are closely related to the restricted isometry property (RIP) that has been studied in the
literature of sparse recovery. The first condition, which we call linear independence of features
(Assumption (a)), is stronger than RIP. This condition roughly says that all the features are linearly independent. The second condition, which we call compatibility (Assumption (b)), is weaker
than RIP. Thus, an instance having RIP always satisfies the compatibility condition. Under these
assumptions, we propose the following three algorithms. Here, T is the number of rounds.
• Algorithm 1: A polynomial-time algorithm that achieves O(√(d/(k′−k) · T)) regret, under Assumptions (1), (2), and (a), which requires at least k + 2 features to be observed per example.
• Algorithm 2: A polynomial-time algorithm that achieves O(√(dT) + d¹⁶/k¹⁶) regret, under Assumptions (1), (2), and (a), which requires at least k features to be observed per example.
• Algorithm 3: A polynomial-time algorithm that achieves O(√(dT) + d¹⁶/k¹⁶) regret, under Assumptions (1), (2), and (b), which requires at least k features to be observed per example.
We can also construct an algorithm achieving O(√(d/(k′−k) · T)) regret under Assumption (b) for the case where k′ ≥ k + 2, analogous to Algorithm 1, but we omit it due to space limitations.
Assumptions (1)+(2)+(a) or (1)+(2)+(b) seem to be minimal assumptions needed to achieve sublinear regret in polynomial time. Indeed, as listed in Table 1, the problem is hard if any one of the
assumptions is violated, where hard means that no polynomial-time algorithm can achieve sublinear
regret unless NP?BPP. Note that Assumption (a) is stronger than (b).
In addition to proving theoretical regret bounds of our algorithms, we perform thorough experiments to evaluate the algorithms. We verified that our algorithms outperform the exponential-time
algorithm [5] in terms of computational complexity as well as performance of the prediction. Our
algorithms also outperform (baseline) heuristic-based algorithms and algorithms proposed in [2, 6]
¹ Although the statement in [5] does not mention the assumptions, its proof indicates that the hardness holds even with these assumptions.
for online learning based on limited observation. Moreover, we observe that our algorithms perform
well even for a real dataset, which may not satisfy our assumptions (deciding whether the model
satisfies our assumptions is difficult; for example, the RIP parameter cannot be approximated within
any constant factor under a reasonable complexity assumption [9]). Thus, we can conclude that our
algorithm is applicable in practice.
Overview of our techniques. One naive strategy for choosing a limited number of features is to
choose "large-weight" features in terms of estimated ground-truth regression weights. This strategy,
however, does not achieve sublinear regret, as it ignores small-weight features. When we have
Assumption (a), we show that if we observe two more features chosen uniformly at random, together
with the largest k features, we can make a good prediction. More precisely, using the observed
features, we output the label that minimizes the least-square loss function, based on the technique
using an unbiased estimator of the gradient [2, 6] and the regularized dual averaging (RDA) method
(see, e.g., [11, 4]). This idea gives Algorithm 1, and the details are given in Section 4. The reason
why we use RDA is that it is efficient in terms of computational time and memory space as pointed
out in [11] and, more importantly, we will combine this with the ℓ1 regularization later. However,
this requires at least k + 2 features to be observed in each round.
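The unbiased-estimator idea mentioned above can be illustrated concretely: sampling a few coordinates uniformly at random and reweighting each observed coordinate by the inverse of its inclusion probability yields an unbiased estimate of the full feature vector, in the spirit of the unbiased gradient estimators of [2, 6]. The helper names below are ours, and the sketch shows only the estimator, not the full algorithm:

```python
import random

def partial_observe(x, k_prime, rng):
    """Pick k_prime coordinates of x uniformly at random and return an
    inverse-probability-weighted vector x_hat with E[x_hat] = x."""
    d = len(x)
    chosen = rng.sample(range(d), k_prime)
    p = k_prime / d  # inclusion probability of each coordinate
    x_hat = [0.0] * d
    for i in chosen:
        x_hat[i] = x[i] / p
    return x_hat

# Averaging many independent estimates recovers x (law of large numbers).
rng = random.Random(0)
x = [0.5, -0.2, 0.8, 0.1]
n = 20000
avg = [0.0] * len(x)
for _ in range(n):
    est = partial_observe(x, 2, rng)
    avg = [a + e / n for a, e in zip(avg, est)]
print(avg)  # each entry is close to the corresponding entry of x
```

Because the estimator is unbiased, gradients built from such partial observations are unbiased as well, which is what allows stochastic-gradient-style analyses to go through despite the limited observation.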
To avoid the requirement of two extra observations, the main idea is to employ Algorithm 1 with
a partial dataset. As a by-product of Algorithm 1, we can estimate the ground-truth regression
weight vector with high probability, even without observing extra features in each round. We use
the ground-truth weight vector estimated by Algorithm 1 to choose k features. Combining this idea
with RDA adapted for the sparse regression gives Algorithm 2 (Section 5.1) under Assumption (a).
The compatibility condition (Assumption (b)) is often used in LASSO (Least Absolute Shrinkage and Selection Operator), and it is known that minimization with an ℓ1 regularizer converges to the sparse solution under the compatibility condition [1]. We introduce ℓ1 regularization into Algorithm 1 to estimate the ground-truth regression weight vector when we have Assumption (b) instead
of Assumption (a). This gives Algorithm 3 (Section 5.2).
Related work. In the online learning problem, a learner aims to predict a model based on the
arriving examples. Specifically, in the linear function case, a learner predicts the coefficient w_t of a linear function w_t^⊤ x_t whenever an example with features x_t arrives in round t. The learner then suffers a loss ℓ_t(w_t) = (y_t − w_t^⊤ x_t)². The aim is to minimize the total loss Σ_{t=1}^T (ℓ_t(w_t) − ℓ_t(w)) for an arbitrary w. It is known that both the gradient descent method [12] and the dual averaging method [11] attain an O(√T) regret even for the more general convex function case. However, these methods require access to all features of the examples.
In linear regression with limited observation, the limited access to features in regression has been considered [2, 6]. In this problem, a learner can acquire only the values of at most k′ features among the d features. The purpose here is to estimate a good weight vector, e.g., to minimize the loss function ℓ(w) or the loss function with an ℓ1 regularizer, ℓ(w) + ‖w‖₁. Let us note that, even if we obtain a good weight vector w with small ℓ(w), we cannot always compute w^⊤ x_t from the limited observation of x_t and, hence, in our setting the prediction error might not be as small as ℓ(w). Thus, our setting uses a different loss function, defined in Section 2, to minimize the prediction error.
Another problem incorporating the limited access is proposed by Zolghadr et al. [13]. Here, instead of observing k′ features, one considers the situation where obtaining a feature has an associated cost.
In each round, one chooses a set of features to pay some amount of money, and the purpose is to
minimize the sum of the regret and the total cost. They designed an exponential-time algorithm for
the problem.
Online sparse linear regression has been studied in [5, 7], but only an exponential-time algorithm
has been proposed so far. In fact, Foster et al. [5] suggested designing an efficient algorithm for a
special class of the problem as future work. The present paper aims to follow this suggestion.
Recently, Kale et al. [8]² presented computationally efficient algorithms to achieve sublinear regret under the assumption that input features satisfy RIP. Though this study includes results similar to ours, there are some differences. Our paper considers the assumption of the compatibility condition without extra observation (i.e., the case of k′ = k), whereas Kale et al. [8] study a stronger assumption with extra observation (k′ ≥ k + 2) that yields a smaller regret bound than ours. They also study the agnostic (adversarial) setting.
² The paper [8] was published after our manuscript was submitted.
2 Problem setting
Online sparse linear regression. We suppose that there are T rounds, and an example arrives online in each round. Each example is represented by d features and is associated with a label, where features and labels are all real numbers. We denote the features of the example arriving in round t by x_t = (x_{t1}, . . . , x_{td})^⊤ ∈ {x ∈ R^d | ‖x‖ ≤ 1}, where the norm ‖·‖ without subscripts denotes the ℓ2 norm. The label of each example is denoted by y_t ∈ [−1, 1].
The purpose of the online sparse regression is to predict the label y_t ∈ R from a partial observation of x_t in each round t = 1, . . . , T. The prediction is made through the following four steps: (i) we choose a set S_t ⊆ [d] := {1, . . . , d} of features to observe, where |S_t| is restricted to be at most k′; (ii) observe the selected features {x_{ti}}_{i∈S_t}; (iii) on the basis of the observation {x_{ti}}_{i∈S_t}, estimate a predictor ŷ_t of y_t; and (iv) observe the true value of y_t.
From S_t, we define D_t ∈ R^{d×d} to be the diagonal matrix whose (i, i)th entries are 1 for i ∈ S_t and whose other entries are 0. Then, observing the selected features {x_{ti}}_{i∈S_t} in (ii) is equivalent to observing D_t x_t. The predictor ŷ_t is computed as ŷ_t = w_t^⊤ D_t x_t in (iii).
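One round of steps (i)–(iv) can be sketched in code as follows. The names are illustrative, and the mask matrix D_t is represented implicitly by the chosen set S_t, since multiplying by a diagonal 0/1 matrix amounts to reading only the selected coordinates:

```python
def one_round(x, y, w, S):
    """One round of limited-observation prediction.
    S: chosen feature set with |S| <= k'; only x[i] for i in S is read."""
    observed = {i: x[i] for i in S}             # step (ii): observe D_t x_t
    y_hat = sum(w[i] * observed[i] for i in S)  # step (iii): w_t^T D_t x_t
    loss = (y_hat - y) ** 2                     # step (iv): true label revealed
    return y_hat, loss

x = [0.2, -0.5, 0.7, 0.0]
w = [1.0, 0.0, 0.5, 0.0]
y_hat, loss = one_round(x, y=1.0, w=w, S={0, 2})
print(y_hat, loss)  # y_hat is 1.0*0.2 + 0.5*0.7 = 0.55
```

The point of the formulation is visible here: the prediction depends on w only through the observed coordinates, so the loss the learner actually incurs is ℓ_t(D_t w_t) rather than ℓ_t(w_t).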
Throughout the paper, we assume the following conditions, corresponding to Assumptions (1) and
(2) in Section 1, respectively.
Assumption (1): There exists a weight vector w* ∈ R^d such that ‖w*‖ ≤ 1 and y_t = w*^⊤ x_t + ε_t for all t = 1, . . . , T, where the ε_t ~ D_ε are independent and identically distributed (i.i.d.) with E[ε_t] = 0 and E[ε_t²] = σ². There exists a distribution D_x on R^d such that the x_t ~ D_x are i.i.d. and independent of {ε_t}.
Assumption (2): The true weight vector w* is k-sparse, i.e., S* = supp(w*) = {i ∈ [d] | w*_i ≠ 0} satisfies |S*| ≤ k.
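A data stream satisfying Assumptions (1) and (2) can be simulated as follows. The particular distributional choices (uniform features scaled to ‖x_t‖ ≤ 1, Gaussian noise) are our own illustration and are not prescribed by the assumptions:

```python
import random, math

def make_sparse_weights(d, k, rng):
    """k-sparse ground-truth weights with ||w*|| <= 1 (Assumption (2))."""
    support = rng.sample(range(d), k)
    w_star = [0.0] * d
    for i in support:
        w_star[i] = rng.uniform(-1.0, 1.0)
    norm = math.sqrt(sum(v * v for v in w_star))
    if norm > 1.0:
        w_star = [v / norm for v in w_star]
    return w_star

def draw_example(w_star, sigma, rng):
    """x_t ~ D_x with ||x_t|| <= 1 and y_t = w*^T x_t + eps (Assumption (1))."""
    d = len(w_star)
    x = [rng.uniform(-1.0, 1.0) / math.sqrt(d) for _ in range(d)]  # ||x|| <= 1
    eps = rng.gauss(0.0, sigma)
    y = sum(wi * xi for wi, xi in zip(w_star, x)) + eps
    return x, y

rng = random.Random(1)
w_star = make_sparse_weights(d=8, k=3, rng=rng)
x, y = draw_example(w_star, sigma=0.1, rng=rng)
```

Such a synthetic stream is convenient for checking an implementation before moving to real data, where the assumptions may hold only approximately.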
Regret. The performance of the prediction is evaluated based on the regret R_T(w) defined by

    R_T(w) = Σ_{t=1}^T (ŷ_t − y_t)² − Σ_{t=1}^T (w^⊤ x_t − y_t)².    (1)
Our goal is to achieve smaller regret R_T(w) for an arbitrary w ∈ R^d such that ‖w‖ ≤ 1 and ‖w‖₀ ≤ k. For random inputs and randomized algorithms, we consider the expected regret max_{w : ‖w‖₀ ≤ k, ‖w‖ ≤ 1} E[R_T(w)].
Define the loss function ℓ_t(w) = (w^⊤ x_t − y_t)². If we compute a predictor ŷ_t = w_t^⊤ D_t x_t using a weight vector w_t = (w_{t1}, . . . , w_{td})^⊤ ∈ R^d in each step, we can rewrite the regret R_T(w) in (1) using D_t and w_t as
    R_T(w) = Σ_{t=1}^T (ℓ_t(D_t w_t) − ℓ_t(w))    (2)
because (ŷ_t − y_t)² = (w_t^⊤ D_t x_t − y_t)² = ℓ_t(D_t w_t). It is worth noting that if our goal were only to construct w_t that minimizes the loss function ℓ_t(w_t), then the definition of the regret should be

    R′_T(w) = Σ_{t=1}^T (ℓ_t(w_t) − ℓ_t(w)).    (3)
However, the goal of online sparse regression involves predicting y_t from the limited observation.
Hence, we use (2) to evaluate the performance. In terms of the regret defined by (3), several algorithms based on limited observation have been developed. For example, the algorithms proposed by
Cesa-Bianchi et al. [3] and Hazan and Koren [6] achieve O(√T) regret of (3).
3 Extra assumptions on features of examples
Foster et al. [5] showed that Assumptions (1) and (2) are not sufficient to achieve sublinear regret.
Owing to this observation, we impose extra assumptions.
Let V := E[x_t x_t^T] ∈ R^{d×d} and let L be the Cholesky decomposition of V (i.e., V = L^T L). Denote
the largest and the smallest singular values of L by σ_1 and σ_d, respectively. Under Assumption (1)
in Section 2, we have σ_1 ≤ 1 because, for an arbitrary unit vector u ∈ R^d, it holds that u^T V u =
E[(u^T x_t)^2] ≤ 1. For a vector w ∈ R^d and S ⊆ [d], we let w_S denote the restriction of w onto S.
For S ⊆ [d], S^c denotes [d] \ S. We assume that either one of the following conditions holds.

(a) Linear independence of features: σ_d > 0.

(b) Compatibility: There exists a constant φ_0 > 0 that satisfies φ_0^2 ‖w_{S*}‖_1^2 ≤ k w^T V w for all
w ∈ R^d with ‖w_{(S*)^c}‖_1 ≤ 2 ‖w_{S*}‖_1.
We assume the linear independence of features in Sections 4 and 5.1, and the compatibility in Section 5.2, to develop efficient algorithms.
Note that condition (a) means that L is non-singular, and so is V. In other words, condition (a)
indicates that the features in x_t are linearly independent. This is the reason why we call condition
(a) the "linear independence of features" assumption. Note that the linear independence of features
does not imply the stochastic independence of features.
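Condition (a) can be checked numerically on sampled features. The sketch below is illustrative and not from the paper: it forms the empirical second-moment matrix, takes its Cholesky factor, and returns the smallest singular value, which estimates σ_d.

```python
import numpy as np

def smallest_singular_value(X):
    """Estimate sigma_d for condition (a): V_hat = X^T X / n approximates
    E[x_t x_t^T]; with V = L^T L (Cholesky), return the smallest singular
    value of L.  X holds one sampled feature vector per row."""
    V = X.T @ X / X.shape[0]
    L = np.linalg.cholesky(V).T   # numpy gives C with V = C C^T, so L = C^T
    return float(np.linalg.svd(L, compute_uv=False).min())
```

A near-duplicated feature drives the estimate toward zero, which is exactly the failure mode that motivates the weaker compatibility condition (b).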
Conditions (a) and (b) are closely related to the restricted isometry property (RIP). Indeed, condition (b) is a weaker assumption than
RIP, and RIP is weaker than condition (a), i.e., (a) linear independence of features ⇒ RIP ⇒
(b) compatibility (see, e.g., [1]). We now clarify how the above two assumptions are connected to
the regret. The expectation of the loss function ℓ_t(w) is equal to

    E_{x_t, y_t}[ℓ_t(w)] = E_{x_t∼D_x, ε_t∼D_ε}[(w^T x_t - w*^T x_t - ε_t)^2]
                         = E_{x_t∼D_x}[((w - w*)^T x_t)^2] + E_{ε_t∼D_ε}[ε_t^2]
                         = (w - w*)^T V (w - w*) + σ^2
for all t, where the second equality comes from E[ε_t] = 0 and the fact that x_t and ε_t are independent. Denote
this function by ℓ(w); then ℓ(w) is minimized when w = w*. If D_t and w_t are determined
independently of x_t and y_t, the expectation of the regret R_T(w) satisfies

    E[R_T(w)] = E[Σ_{t=1}^T (ℓ(D_t w_t) - ℓ(w))] ≤ E[Σ_{t=1}^T (ℓ(D_t w_t) - ℓ(w*))]
              = E[Σ_{t=1}^T (D_t w_t - w*)^T V (D_t w_t - w*)] = E[Σ_{t=1}^T ‖L(D_t w_t - w*)‖^2].    (4)

We bound (4) in the analysis.
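The identity E[ℓ_t(w)] = (w - w*)^T V (w - w*) + σ^2 used above can be sanity-checked by Monte Carlo. All numbers in the sketch below (dimensions, noise level, the two weight vectors) are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative Monte Carlo check of E[l_t(w)] = (w - w*)^T V (w - w*) + sigma^2.
rng = np.random.default_rng(0)
d, n, sigma = 5, 200000, 0.3
w_star = np.array([0.6, -0.3, 0.0, 0.0, 0.2])   # arbitrary sparse ground truth
w = np.array([0.1, 0.2, -0.4, 0.0, 0.3])        # arbitrary comparator
X = rng.uniform(-1, 1, (n, d)) / np.sqrt(d)     # rows ~ D_x with ||x|| <= 1
y = X @ w_star + sigma * rng.standard_normal(n)
empirical = np.mean((X @ w - y) ** 2)           # Monte Carlo estimate of E[l_t(w)]
V = X.T @ X / n                                 # empirical second-moment matrix
closed_form = (w - w_star) @ V @ (w - w_star) + sigma ** 2
```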
Hardness result. Similarly to [5], we can show that the problem remains hard under Assumptions (1), (2),
and (a). Refer to Appendix A for the proof.

Theorem 1. Let D be any positive constant, and let c_D ∈ (0, 1) be a constant dependent on D.
Suppose that Assumptions (1) and (2) hold with k = O(d^{c_D}) and k' = ⌊kD ln d⌋. If an algorithm
for the online sparse regression problem runs in poly(d, T) time per iteration and achieves a regret
of at most poly(d, 1/σ_d) T^{1-δ} in expectation for some constant δ > 0, then NP ⊆ BPP.
4 Algorithm with extra observations and linear independence of features
In this section, we present Algorithm 1. Here we assume k' ≥ k + 2, in addition to the linear
independence of features (Assumption (a)). The additional assumption will be removed in Section 5.
As noted in Section 2, our algorithm first computes a weight vector w_t, chooses a set S_t of k'
features to be observed, and computes a label ŷ_t by ŷ_t = w_t^T D_t x_t in each round t. In addition,
our algorithm constructs an unbiased estimator ĝ_t of the gradient g_t of the loss function ℓ_t(w) at
w = w_t, i.e., g_t = ∇_w ℓ_t(w_t) = 2 x_t (x_t^T w_t - y_t), at the end of the round. In the following, we
describe how to compute w_t, S_t, and ĝ_t in round t, assuming that w_{t'}, S_{t'}, and ĝ_{t'} are
computed in the previous rounds t' = 1, ..., t - 1. The entire algorithm is described in Algorithm 1.
Algorithm 1
Input: {x_t, y_t} ⊆ R^d × R, {β_t} ⊆ R_{>0}, k' ≥ 2 and k_1 ≥ 0 such that k_1 ≤ k' - 2.
1: Set ĥ_0 = 0.
2: for t = 1, ..., T do
3:   Define w_t by (5) and define S_t by Observe(w_t, k', k_1).
4:   Observe D_t x_t and output ŷ_t := w_t^T D_t x_t.
5:   Observe y_t, define ĝ_t by (6), and set ĥ_t = ĥ_{t-1} + ĝ_t.
6: end for
Computing w_t. We use ĝ_1, ..., ĝ_{t-1} to compute w_t by the dual averaging method as follows.
Define ĥ_{t-1} = Σ_{j=1}^{t-1} ĝ_j, the sum of all gradient estimators computed in the previous
rounds. Moreover, let (β_1, ..., β_T) be a monotonically non-decreasing sequence of positive
numbers. From these, we define w_t by

    w_t = argmin_{w ∈ R^d, ‖w‖ ≤ 1} { ĥ_{t-1}^T w + (β_t/2) ‖w‖^2 } = - ĥ_{t-1} / max{β_t, ‖ĥ_{t-1}‖}.    (5)
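The closed form in (5) follows because the unconstrained minimizer -ĥ_{t-1}/β_t is kept when it lies in the unit ball and is otherwise projected to the boundary. A minimal sketch with a numerical optimality check (function name is ours):

```python
import numpy as np

def dual_averaging_step(h_prev, beta):
    """Closed-form solution of update (5): the minimizer over ||w|| <= 1 of
    h^T w + (beta/2) ||w||^2 is  -h / max(beta, ||h||)."""
    return -h_prev / max(beta, np.linalg.norm(h_prev))
```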
Computing S_t. Let k_1 be an integer such that k_1 ≤ k' - 2. We define U_t ⊆ [d] as the set of the k_1
largest features with respect to w_t, i.e., we choose U_t so that |U_t| = k_1 and all i ∈ U_t and j ∈ [d] \ U_t
satisfy |w_ti| ≥ |w_tj|. Let V_t be the set of (k' - k_1) elements chosen from [d] \ U_t uniformly at
random. Then our algorithm observes the set S_t = U_t ∪ V_t of the k' features. We call this procedure
to obtain S_t Observe(w_t, k', k_1).

Observation 1. We observe that U_t ⊆ S_t and Prob[i, j ∈ S_t] ≥ (k' - k_1)(k' - k_1 - 1) / (d(d - 1)) =: C_{d,k',k_1}.
Thus, Prob[i, j ∈ S_t] > 0 for all i, j ∈ [d] if k' ≥ k_1 + 2.

For simplicity, we use the notation p_i^{(t)} = Prob[i ∈ S_t] and p_ij^{(t)} = Prob[i, j ∈ S_t] for i, j ∈ [d].
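The Observe procedure can be sketched as follows (variable names are ours; the Monte Carlo check in the usage illustrates the lower bound C_{d,k',k_1} from Observation 1):

```python
import numpy as np

def observe(w, k_prime, k1, rng):
    """Observe(w, k', k1): the k1 indices with largest |w_i|, plus
    k' - k1 indices drawn uniformly without replacement from the rest."""
    d = len(w)
    order = np.argsort(-np.abs(w))               # decreasing |w_i|
    U = set(int(i) for i in order[:k1])
    rest = [i for i in range(d) if i not in U]
    V = rng.choice(rest, size=k_prime - k1, replace=False)
    return U | set(int(i) for i in V)
```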
Computing ĝ_t. Define X̂_t = (x̂_tij) ∈ R^{d×d} by X̂_t = D_t x_t x_t^T D_t, and let X_t ∈ R^{d×d} be the matrix
whose (i, j)-th entry is x̂_tij / p_ij^{(t)}. It follows that X_t is an unbiased estimator of x_t x_t^T. Similarly,
defining z_t = (z_ti) ∈ R^d by z_ti = x_ti / p_i^{(t)} for i ∈ S_t and z_ti = 0 for i ∉ S_t, we see that z_t is an
unbiased estimator of x_t. Using X_t and z_t, we define ĝ_t to be

    ĝ_t = 2 X_t w_t - 2 y_t z_t.    (6)
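The estimator (6) can be sketched as follows. The helper `inclusion_probs`, which we introduce for illustration, computes the exact p_i^{(t)} and p_ij^{(t)} implied by the Observe scheme; the test verifies unbiasedness exactly by enumerating every possible random draw.

```python
import numpy as np
from itertools import combinations

def inclusion_probs(U, d, k_prime, k1):
    """Exact p_i = Pr[i in S_t] and p_ij = Pr[i, j in S_t] under Observe:
    U is always kept; k'-k1 further indices form a uniform subset of the rest."""
    m, r = k_prime - k1, d - k1
    p = np.array([1.0 if i in U else m / r for i in range(d)])
    P = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            if i in U and j in U:
                P[i, j] = 1.0
            elif i in U or j in U or i == j:
                P[i, j] = m / r
            else:
                P[i, j] = m * (m - 1) / (r * (r - 1))
    return p, P

def grad_estimator(x, y, w, S, p, P):
    """g_hat = 2 X_t w - 2 y z_t from (6), built only from observed entries of x."""
    mask = np.zeros(len(x))
    mask[list(S)] = 1.0
    Xt = np.outer(mask * x, mask * x) / P   # D x x^T D, divided entrywise by p_ij
    z = np.where(mask > 0, x / p, 0.0)      # unbiased estimator of x
    return 2 * Xt @ w - 2 * y * z
```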
Regret bound of Algorithm 1. Let us show that the regret achieved by Algorithm 1 is
O((d/(k' - k))√T) in expectation.

Theorem 2. Suppose that the linear independence of features is satisfied and k ≤ k' - 2. Let k_1
be an arbitrary integer such that k ≤ k_1 ≤ k' - 2. Then, for arbitrary w ∈ R^d with ‖w‖ ≤ 1,
Algorithm 1 achieves

    E[R_T(w)] ≤ (3/σ_d^2) ( Σ_{t=1}^T 16/(C_{d,k',k_1} β_t) + β_{T+1}/2 ).

By setting β_t = 8 √(t / C_{d,k',k_1}) for each t = 1, ..., T, we obtain

    E[R_T(w)] ≤ (24/σ_d^2) √( d(d-1) / ((k'-k_1)(k'-k_1-1)) ) √(T+1).    (7)
The rest of this section is devoted to proving Theorem 2. By (4), it suffices to evaluate
E[Σ_{t=1}^T ‖L(D_t w_t - w*)‖^2] instead of E[R_T(w)]. The following lemma asserts that each term
of (4) can be bounded, assuming the linear independence of features. Proofs of all lemmas are given
in the supplementary material.

Lemma 3. Suppose that the linear independence of features is satisfied. If S_t ⊇ U_t, then

    ‖L(D_t w_t - w*)‖^2 ≤ (3σ_1^2/σ_d^2) ‖L(w_t - w*)‖^2.    (8)
Proof. We have

    ‖L(D_t w_t - w*)‖^2 ≤ σ_1^2 ‖D_t w_t - w*‖^2
        = σ_1^2 ( Σ_{i∈S*∩S_t} (w_ti - w_i*)^2 + Σ_{i∈S*\S_t} w_i*^2 + Σ_{i∈S_t\S*} w_ti^2 )
        ≤ σ_1^2 ( ‖w_t - w*‖^2 + Σ_{i∈S*\S_t} w_i*^2 ),    (9)

where the second inequality holds since w_i* = 0 for i ∈ [d] \ S*. It holds that

    Σ_{i∈S*\S_t} w_i*^2 ≤ Σ_{i∈S*\U_t} w_i*^2 ≤ Σ_{i∈S*\U_t} ( 2 w_ti^2 + 2 (w_ti - w_i*)^2 )
        ≤ 2 Σ_{i∈U_t\S*} w_ti^2 + 2 Σ_{i∈S*\U_t} (w_ti - w_i*)^2 ≤ 2 ‖w_t - w*‖^2.    (10)

The first and third inequalities come from U_t ⊆ S_t and the definition of U_t. Putting (10) into (9),
we have

    ‖L(D_t w_t - w*)‖^2 ≤ 3σ_1^2 ‖w_t - w*‖^2 ≤ (3σ_1^2/σ_d^2) ‖L(w_t - w*)‖^2.
It follows from the above lemma that, if w_t converges to w*, then D_t w_t converges to w*, and hence S_t
eventually includes the support of w*. Moreover, it holds that Σ_{t=1}^T E[‖L(w_t - w*)‖^2] = E[Σ_{t=1}^T (ℓ_t(w_t) -
ℓ_t(w*))] = E[R_T'(w*)], since w_t is independent of x_t and y_t. Thus, to bound Σ_{t=1}^T E[‖L(w_t -
w*)‖^2], we shall evaluate E[R_T'(w*)].
Lemma 4 ([11]). Suppose that w_t is defined by (5) for each t = 1, ..., T, and w ∈ R^d satisfies
‖w‖ ≤ 1. Let G_t = E[‖ĝ_t‖^2] for t = 1, ..., T. Then,

    E[R_T'(w)] ≤ Σ_{t=1}^T G_t/β_t + β_{T+1}/2.    (11)

If G_t = O(1) and β_t = Θ(√t), the right-hand side of (11) is O(√T). The following lemma shows
that this is the case if p_ij^{(t)} = Ω(1).

Lemma 5. Suppose that the linear independence of features is satisfied. Let t ∈ [T], and let q be a
positive number such that q ≤ min{p_i^{(t)}, p_ij^{(t)}} for all i, j ∈ [d]. Then we have G_t ≤ 16/q.
We are now ready to prove Theorem 2.

Proof of Theorem 2. The expectation E[R_T(w)] of the regret is bounded as

    E[R_T(w)] ≤ Σ_{t=1}^T E[‖L(D_t w_t - w*)‖^2] ≤ (3σ_1^2/σ_d^2) Σ_{t=1}^T E[‖L(w_t - w*)‖^2]
              = (3σ_1^2/σ_d^2) E[R_T'(w*)] ≤ (3/σ_d^2) E[R_T'(w*)],

where the first inequality comes from (4), the second comes from Lemma 3, and the last uses σ_1 ≤ 1.
From Lemma 4, E[R_T'(w*)] is bounded by E[R_T'(w*)] ≤ H_T := Σ_{t=1}^T G_t/β_t + β_{T+1}/2.
Lemma 5 and Observation 1 yield G_t ≤ 16/C_{d,k',k_1}. Hence, for β_t = 8 √(t/C_{d,k',k_1}), H_T satisfies

    H_T ≤ Σ_{t=1}^T 16/(C_{d,k',k_1} β_t) + β_{T+1}/2
        = Σ_{t=1}^T 2/√(C_{d,k',k_1} t) + (4/√C_{d,k',k_1}) √(T+1) ≤ (8/√C_{d,k',k_1}) √(T+1).

Combining the above three inequalities, we obtain (7).
5 Algorithms without extra observations

5.1 Algorithm 2: Assuming (a) the linear independence of features
In Section 4, Lemma 3 showed a connection between R_T and R_T': E[R_T(w)] ≤ (3σ_1^2/σ_d^2) E[R_T'(w*)]
under U_t ⊆ S_t. Then, Lemmas 4 and 5 gave an upper bound on E[R_T'(w*)]: E[R_T'(w*)] = O(√T)
under p_ij^{(t)} = Ω(1). In the case of k' = k, however, the conditions U_t ⊆ S_t and p_ij^{(t)} = Ω(1) may
not be satisfied simultaneously, since, if U_t ⊆ S_t and |S_t| = k' = k ≤ k_1 = |U_t|, then we have
U_t = S_t, which means p_ij^{(t)} = 0 for i ∉ U_t or j ∉ U_t. Thus, we cannot use both relationships in the
analysis. In Algorithm 2, we bound R_T(w) without bounding R_T'(w).
Let us describe the idea of Algorithm 2. To achieve the claimed regret, we first define a subset J
of {1, 2, ..., T} as the set of squares, i.e., J = {s^2 | s = 1, ..., ⌊√T⌋}. Let t_s denote the s-th
smallest number in J for each s = 1, ..., |J|. In each round t, the algorithm computes S_t, a weight
vector w̃_t, and a vector D_t g̃_t, where g̃_t is the gradient of ℓ_t(w) at w = D_t w̃_t. In addition, if t = t_s,
the algorithm computes further weight vectors w_s and w̄_s := (1/s) Σ_{j=1}^s w_j, and an unbiased estimator
ĝ_s of the gradient of the loss function ℓ_t(w) at w_s.

At the beginning of round t, if t = t_s, the algorithm first computes w_s, and w̄_s is defined as the
average of w_1, ..., w_s. Roughly speaking, w_s is the weight vector computed with Algorithm 1
applied to the examples (x_{t_1}, y_{t_1}), ..., (x_{t_s}, y_{t_s}), setting k_1 to be at most k - 2. Then, we can
show that w̄_s is a consistent estimator of w*. This step is only performed if t ∈ J. Then S_t is
defined from w̄_s, where s is the largest number such that t_s ≤ t. Thus, S_t does not change for any
t ∈ [t_s, t_{s+1} - 1]. After this, the algorithm computes w̃_t from D_1 g̃_1, ..., D_{t-1} g̃_{t-1}, and predicts
the label of x_t as ŷ_t := w̃_t^T D_t x_t. At the end of the round, the true label y_t is observed, and D_t g̃_t
is computed from w̃_t and (D_t x_t, y_t). In addition, if t = t_s, ĝ_s is computed as in Algorithm 1. We
need ĝ_s for computing w_{s'} with s' > s in the subsequent rounds t_{s'}.
The following theorem bounds the regret of Algorithm 2. See the supplementary material for details
of the algorithm and the proof of the theorem.

Theorem 6. Suppose that (a), the linear independence of features, is satisfied and k ≤ k'. Then,
there exists a polynomial-time algorithm such that E[R_T(w)] is at most

    8(1 + √d)√(T+1) + 12T Σ_{i∈S*} |w_i*| exp( - C_{d,k',0} (T^{1/4} - 1) |w_i*|^2 σ_d^2 / 18432 )
        + 4 Σ_{i∈S*} |w_i*| ( 4096 / (C_{d,k',0}^2 w_i*^4 σ_d^4) + 1 )^2,

for arbitrary w ∈ R^d with ‖w‖ ≤ 1, where C_{d,k',0} = k'(k'-1)/(d(d-1)) = O(k'^2/d^2).
5.2 Algorithm 3: Assuming (b) the compatibility condition

Algorithm 3 adopts the same strategy as Algorithm 2 except for the procedure for determining w_s
and w̄_s. In the analysis of Algorithm 2, we show that, to achieve the claimed regret, it suffices to
generate {S_t} that satisfies Σ_{t=1}^T Prob[i ∉ S_t] = O(√T) for i ∈ S*. The condition was satisfied
by defining S_t as the set of k largest features with respect to the weight vector w̄_s = Σ_{j=1}^s w_j / s.
The linear independence of features guarantees that the w̄_s computed in Algorithm 2 converges to w*,
and hence {S_t} defined as above possesses the required property. Unfortunately, if the assumption
of the linear independence of features is not satisfied, e.g., if we have almost identical features, then w̄_s does
not converge to w*. However, if we introduce an ℓ1-regularization to the minimization problem in
the definition of w_s and change the definition of w̄_s to a weighted average of the modified vectors
w_1, ..., w_s, then we can generate a required set {S_t} under the compatibility assumption. See the
supplementary material for details and the proof of the following theorem.
Theorem 7. Suppose that (b), the compatibility assumption, is satisfied and k ≤ k'. Then, there
exists a polynomial-time algorithm such that E[R_T(w)] is at most

    8(1 + √d)√(T+1) + 12T Σ_{i∈S*} |w_i*| exp( - √(C_{d,k',0}) (T^{1/4} - 1) |w_i*|^2 φ_0^2 / (5832 k) )
        + 4 Σ_{i∈S*} |w_i*| ( 64 · 36^4 k^2 / (C_{d,k',0} w_i*^4 φ_0^4) + 1 )^2,

for arbitrary w ∈ R^d with ‖w‖ ≤ 1, where C_{d,k',0} = k'(k'-1)/(d(d-1)) = O(k'^2/d^2) and φ_0 is
the constant appearing in Assumption (b) in Section 3.

The asymptotic regret bounds mentioned in Section 1 can be obtained by bounding the second term
with the aid of the identity max_{T≥0} T exp(-αT^β) = (αβ)^{-1/β} exp(-1/β) for arbitrary α > 0, β > 0.
6 Experiments
In this section, we compare our algorithms with the following four baseline algorithms: (i) a greedy
method that chooses the k' largest features with respect to w_t computed as in Algorithm 1; (ii)
a uniform-random method that chooses k' features uniformly at random; (iii) the algorithm of [6]
(called AELR); and (iv) the algorithm of [5] (called FKK). Owing to space limitations, we only
present typical results here. Other results and detailed descriptions of the experiment settings are
provided in the supplementary material.

Synthetic data. First we show results on two kinds of synthetic datasets: instances with (d, k, k')
and instances with (d, k_1, k). We set k_1 = k in the setting of (d, k, k') and k' = k in the setting of
(d, k_1, k). The instances with (d, k, k') assume that Algorithm 1 can use the ground truth k, while
Algorithm 1 cannot use k in the instances with (d, k_1, k). For each (d, k, k') and (d, k_1, k), we
executed all algorithms on five instances with T = 5000 and computed the averages of regrets and
run time, respectively. When (d, k, k') = (20, 5, 7), FKK spent 1176 s on average, while AELR
spent 6 s, and the others spent at most 1 s.
Figures 1 and 2 plot the regrets given by (1) over the number of rounds on a typical instance with
(d, k, k') = (20, 5, 7). Tables 2 and 3 summarize the average regrets at T = 5000, where A1, A2,
A3, G, and U denote Algorithms 1, 2, and 3, greedy, and uniform random, respectively. We observe that
Algorithm 1 achieves the smallest regrets in the setting of (d, k, k'), whereas Algorithms 2 and 3 are
better than Algorithm 1 in the setting of (d, k_1, k). The results match our theoretical results.
Figure 1: Plot of regrets with (d, k, k') = (20, 5, 7).

Figure 2: Plot of regrets with (d, k_1, k) = (20, 5, 7).

Figure 3: Cumulative square loss Σ_t (ŷ_t - y_t)^2 on the CT-slice dataset.
Table 2: Values of R_T/10^2 when changing (d, k, k').

  (d, k, k')   A1    A2    A3    G      U      AELR   FKK
  (10,2,4)     1.53  2.38  3.60  33.28  25.73  60.76  24.05

Table 3: Values of R_T/10^2 when changing (d, k_1, k).

  (d, k_1, k)  A1     A2     A3     G      U      AELR   FKK
  (10,2,4)     26.88  20.59  17.19  43.03  60.02  64.75  58.71
Real data. We next conducted experiments using a CT-slice dataset, which is available online [10].
Each record consists of 384 features retrieved from 53500 CT images, associated with a label that
denotes the relative position of the image on the axial axis.

We executed all algorithms except FKK, which was infeasible to run due to its expensive run time. Since
we do not know the ground-truth regression weights, we measure the performance by the first term
of (1), i.e., the square loss of predictions. Figure 3 plots the losses over the number of rounds. The
parameters are k_1 = 60 and k' = 70. For this instance, the run times of Algorithms 1 and 2, greedy,
uniform random, and AELR were 195, 35, 147, 382, and 477 s, respectively.

We observe that Algorithms 2 and 3 are superior to the others, which implies that Algorithms 2 and 3
are suitable for instances where the ground truth k is not known, such as real-data-based instances.
Acknowledgement
This work was supported by JST ERATO Grant Number JPMJER1201, Japan.
References
[1] P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
[2] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Some impossibility results for budgeted learning. In Joint ICML-COLT Workshop on Budgeted Learning, 2010.
[3] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Efficient learning with partially observed attributes. Journal of Machine Learning Research, 12:2857–2878, 2011.
[4] X. Chen, Q. Lin, and J. Peña. Optimal regularized dual averaging methods for stochastic optimization. In Advances in Neural Information Processing Systems, pages 395–403, 2012.
[5] D. Foster, S. Kale, and H. Karloff. Online sparse linear regression. In 29th Annual Conference on Learning Theory, pages 960–970, 2016.
[6] E. Hazan and T. Koren. Linear regression with limited observation. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 807–814, 2012.
[7] S. Kale. Open problem: Efficient online sparse regression. In Proceedings of the 27th Conference on Learning Theory, pages 1299–1301, 2014.
[8] S. Kale, Z. Karnin, T. Liang, and D. Pál. Adaptive feature selection: Computationally efficient online sparse linear regression under RIP. In Proceedings of the 34th International Conference on Machine Learning (ICML-17), pages 1780–1788, 2017.
[9] P. Koiran and A. Zouzias. Hidden cliques and the certification of the restricted isometry property. IEEE Transactions on Information Theory, 60(8):4999–5006, 2014.
[10] M. Lichman. UCI Machine Learning Repository, 2013.
[11] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
[12] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928–936, 2003.
[13] N. Zolghadr, G. Bartók, R. Greiner, A. György, and C. Szepesvári. Online learning with costly features and labels. In Advances in Neural Information Processing Systems, pages 1241–1249, 2013.
among brain networks
Mali Sundaresan
Centre for Neuroscience
Indian Institute of Science
Bangalore, India 560 012
[email protected]
Arshed Nabeel
Centre for Neuroscience
Indian Institute of Science
Bangalore, India 560 012
[email protected]
Devarajan Sridharan*
Centre for Neuroscience
Indian Institute of Science
Bangalore, India 560 012
[email protected]
Abstract
Brain processes occur at various timescales, ranging from milliseconds (neurons)
to minutes and hours (behavior). Characterizing functional coupling among brain
regions at these diverse timescales is key to understanding how the brain produces
behavior. Here, we apply instantaneous and lag-based measures of conditional
linear dependence, based on Granger-Geweke causality (GC), to infer network
connections at distinct timescales from functional magnetic resonance imaging
(fMRI) data. Due to the slow sampling rate of fMRI, it is widely held that GC
produces spurious and unreliable estimates of functional connectivity when applied
to fMRI data. We challenge this claim with simulations and a novel machine
learning approach. First, we show, with simulated fMRI data, that instantaneous
and lag-based GC identify distinct timescales and complementary patterns of functional connectivity. Next, we analyze fMRI scans from 500 subjects and show
that a linear classifier trained on either instantaneous or lag-based GC connectivity
reliably distinguishes task versus rest brain states, with ≈80–85% cross-validation
accuracy. Importantly, instantaneous and lag-based GC exploit markedly different spatial and temporal patterns of connectivity to achieve robust classification.
Our approach enables identifying functionally connected networks that operate at
distinct timescales in the brain.
1 Introduction
Processes in the brain occur at various timescales. These range from the timescales of milliseconds for
extremely rapid processes (e.g. neuron spikes), to timescales of tens to hundreds of milliseconds for
processes coordinated across local populations of neurons (e.g. synchronized neural oscillations), to
timescales of seconds for processes that are coordinated across diverse brain networks (e.g. language)
and even up to minutes, hours or days for processes that involve large-scale neuroplastic changes
(e.g. learning a new skill). Coordinated activity among brain regions that mediate each of these
cognitive processes would manifest in the form of functional connections among these regions at
the corresponding timescales. Characterizing patterns of functional connectivity that occur at these
different timescales is, hence, essential for understanding how the brain produces behavior.
Measures of linear dependence and feedback, based on Granger-Geweke causality (GC) [10, 11],
have been used to estimate instantaneous and lagged functional connectivity in recordings of brain
activity made with electroencephalography (EEG) [6] and electrocorticography (ECoG) [3]. However, the application of GC measures to brain recordings made with functional magnetic resonance
imaging (fMRI) remains controversial [22][20][2]. Because the hemodynamic response is produced
and sampled at a timescale (seconds) several orders of magnitude slower than the underlying neural
* Corresponding author
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
processes (milliseconds), previous studies have argued that GC measures, particularly lag-based GC,
produce spurious and unreliable estimates of functional connectivity from fMRI data [22][20].
Three primary confounds have been reported with applying lag-based GC to fMRI data. First,
systematic hemodynamic lags: a slower hemodynamic response in one region, as compared to
another could produce a spurious directed GC connection from the second to the first [22] [4].
Second, in simulations, measurement noise added to the signal during fMRI acquisition was shown to
produce significant degradation in GC functional connectivity estimates [20]. Finally, downsampling
recordings to the typical fMRI sampling rate (seconds), three orders of magnitude slower than the
timescale of neural spiking (milliseconds), was shown to effectively eliminate all traces of functional
connectivity inferred by GC [20]. Hence, a previous, widely cited study argued that same-time
correlation based measures of functional connectivity, such as partial correlations, fare much better
than GC for estimating functional connectivity from fMRI data [22].
The controversy over the application of GC measures to fMRI data remains unresolved to date,
primarily because of the lack of access to "ground truth". On the one hand, claims regarding
the efficacy of GC estimates based on simulations are only as valid as the underlying model of
hemodynamic responses. Because the precise mechanism by which neural responses generate
hemodynamic responses is an active area of research [7], strong conclusions cannot be drawn based
on simulated fMRI data alone. On the other hand, establishing "ground truth" validity for connections
estimated by GC on fMRI data requires concurrent, brain-wide invasive neurophysiological recordings
during fMRI scans, a prohibitive enterprise.
Here, we seek to resolve this controversy by introducing a novel application of machine learning that
works around these criticisms. We estimate instantaneous and lag-based GC connectivity, first, with
simulated fMRI time series under different model network configurations and, next, from real fMRI
time series (from 500 human subjects) recorded under different task conditions. Based on the GC
connectivity matrices, we train a linear classifier to discriminate model network configurations or
subject task conditions, and assess classifier accuracy with cross validation. Our results show that
instantaneous and lag-based GC connectivity estimated from empirical fMRI data can distinguish
task conditions with over 80% cross-validation accuracies. To permit such accurate classification, GC
estimates of functional connectivity must be robustly consistent within each model configuration (or
task condition) and reliably different across configurations (or task conditions). In addition, drawing
inspiration from simulations, we show that GC estimated on real fMRI data downsampled to 3x-7x
the original sampling rate provides novel insights into functional brain networks that operate at
distinct timescales.
2 Simulations and Theory

2.1 Instantaneous and lag-based measures of conditional linear dependence
The linear relationship among two multivariate signals x and y conditioned on a third multivariate
signal z can be measured as the sum of linear feedback from x to y (Fx→y), linear feedback
from y to x (Fy→x), and instantaneous linear feedback (Fx·y) [11][16]. To quantify these linear
relationships, we model the future of each time series in terms of their past values with a
well-established multivariate autoregressive (MVAR) model (detailed in Supplementary Material,
Section S1).
Briefly, Fx→y is a measure of the improvement in the ability to predict the future values of y given
the past values of x, over and above what can be predicted from the past values of z and y itself (and
vice versa for Fy→x). Fx·y, on the other hand, measures the instantaneous influence between x and
y conditioned on z (see Supplementary Material, Section S1). We refer to Fx·y as instantaneous GC
(iGC), and to Fx→y and Fy→x as lag-based GC or directed GC (dGC), with the direction of the
influence (x to y or vice versa) being indicated by the arrow. The "full" measure of linear dependence
and feedback Fx,y is given by:
Fx,y = Fx→y + Fy→x + Fx·y    (1)
Fx,y measures the complete conditional linear dependence between two time series. If, at a given
instant, no aspect of one time series can be explained by a linear model containing all the values (past
and present) of the other, Fx,y will evaluate to zero [16]. These measures are firmly grounded in
information theory and statistical inferential frameworks [9].
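These quantities can be sketched numerically. The snippet below is a minimal illustration only (the paper uses the MVGC toolbox, not this code), assuming scalar x and y, no conditioning signal z, and a zero-mean VAR(1) fit by ordinary least squares:

```python
import numpy as np

def var1_residual_cov(data):
    """Fit a zero-mean VAR(1) by least squares; return residual covariance."""
    past, future = data[:-1], data[1:]
    coef, *_ = np.linalg.lstsq(past, future, rcond=None)
    resid = future - past @ coef
    return np.atleast_2d(np.cov(resid.T))

def geweke_gc(x, y):
    """Return (F_x->y, F_y->x, F_x.y) for scalar time series x and y."""
    sigma = var1_residual_cov(np.column_stack([x, y]))  # full-model residuals
    sx = var1_residual_cov(x[:, None])[0, 0]  # x predicted from its own past only
    sy = var1_residual_cov(y[:, None])[0, 0]  # y predicted from its own past only
    f_xy = np.log(sy / sigma[1, 1])           # dGC: does x's past help predict y?
    f_yx = np.log(sx / sigma[0, 0])           # dGC: does y's past help predict x?
    f_inst = np.log(sigma[0, 0] * sigma[1, 1] / np.linalg.det(sigma))  # iGC
    return f_xy, f_yx, f_inst
```

By construction, f_xy + f_yx + f_inst equals ln(sx·sy/|Σ|), the full measure Fx,y of equation (1).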
Figure 1: Network simulations. (A) Network configuration H. (Left) Connectivity matrix. Red vs.
blue: Excitatory vs. inhibitory connections. Deeper hues: Higher connection strengths. Non-zero
value at (i, j) corresponds to a connection from node j to node i (column to row). Sub-network A-B-C
operates at a fast timescale (50 ms) whereas D-E-F operates at a slow timescale (2 s). (Right) Network
schematic showing the connectivity matrix as a graph. (B) Network configuration J. Conventions are
the same as in A. (C) The eigenspectra of networks H (left) and J (right). (D) Simulated time series
in network configuration J with fast (top panel) and slow (bottom panel) dynamics, corresponding to
nodes A-B and E-F, respectively. Within each panel, the top plot is the simulated neural time series,
and the bottom plot is the simulated fMRI time series.
2.2 Simulating functional interactions at different timescales
To test the ability of GC measures to reliably recover functional interactions at different timescales,
we simulated fMRI time series for model networks with two configurations of directed connectivity.
Simulated fMRI time series were generated using a two-stage model (2): the first stage involved
a latent variable model that described neural dynamics, and the second stage that convolved these
dynamics with the hemodynamic response function (HRF) to obtain the simulated fMRI time series.
y = H ∗ x,    ẋ = Ax + ε    (2)
where A is the neural ("ground truth") connectivity matrix, x is the neural time series, ẋ is dx/dt, H
is the canonical hemodynamic response function (HRF; simulated with spm_hrf in SPM8 software),
∗ is the convolution operation, y is the simulated BOLD time series, and ε is i.i.d. Gaussian noise.
Other than the noise ε, no other kinds of external input were included in these simulations. Similar
models have been employed widely for simulating fMRI time series data previously [22][2][20].
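As a concrete sketch of this two-stage model, the snippet below integrates the latent linear dynamics with a 1 ms Euler-Maruyama step, convolves with a gamma-shaped stand-in for the canonical HRF (the paper uses spm_hrf from SPM8), and samples every TR. The connectivity matrix and parameter values are illustrative, not those of networks H or J:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import gamma

def simulate_bold(A, T=60.0, dt=0.001, tr=2.0, noise_sd=1.0, seed=0):
    """Two-stage model of equation (2): Euler-Maruyama integration of
    x' = Ax + eps, convolution with an HRF, then sampling every TR."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.zeros((n_steps, A.shape[0]))
    for t in range(1, n_steps):
        eps = noise_sd * np.sqrt(dt) * rng.standard_normal(A.shape[0])
        x[t] = x[t - 1] + dt * (A @ x[t - 1]) + eps
    # gamma-shaped HRF peaking near 5 s, a stand-in for SPM's spm_hrf
    hrf = gamma.pdf(np.arange(0.0, 32.0, dt), a=6.0)
    y = np.apply_along_axis(lambda s: fftconvolve(s, hrf)[:n_steps], 0, x)
    return y[::int(tr / dt)]  # downsample to one sample per TR

# e.g. a two-node feedforward chain with 50 ms node decay (illustrative):
A = np.array([[-20.0, 0.0],
              [8.0, -20.0]])
bold = simulate_bold(A)
```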
First, we sought to demonstrate the complementary nature of connections estimated by iGC and dGC.
For this, we used network configuration H, shown in Fig. 1A. Note that this corresponds to two
non-interacting sub-networks, each operating at distinctly different timescales (50 ms and 2000 ms
node decay times, respectively) as revealed by the eigenspectrum of the connectivity matrix (Fig. 1C).
For convenience, we term these two timescales as "fast" and "slow". Moreover, each sub-network
operated with a distinct pattern of connectivity, either purely feedforward, or with feedback (E-I).
Dynamics were simulated with a 1 ms integration step (Euler scheme), convolved with the HRF and
then downsampled to 0.5 Hz resolution (interval of 2 s) to match the sampling rate (repeat time, TR)
of typical fMRI recordings.
Second, we sought to demonstrate the ability of dGC to recover functional interactions at distinct
timescales. For this, we simulated a different network configuration J, whose connectivity matrix
Figure 2: Connectivity estimated from simulated data. (A) iGC and dGC values estimated from
simulated fMRI time series, network H. (Leftmost) Ground truth connectivity used in simulations.
(Top) Estimated iGC connectivity matrix (left) and significant connections (right, p<0.05) estimated
by a bootstrap procedure using 1000 phase-scrambled surrogates [18]. (Bottom) Same as top panel, but
for dGC. (B) dGC estimates from simulated fMRI time series, network J, sampled at three different
sampling intervals: 50 ms (left), 500 ms (middle) and 5 s (right). In each case the estimated dGC
matrix and significant connections are shown, with the same conventions as in panel (A).
is shown in Fig. 1B. This network comprised three non-interacting sub-networks operating at three
distinct timescales (50 ms, 0.5 s, and 5 s node decay times; eigenspectrum in Fig. 1C). As before,
simulated dynamics were downsampled at various rates (20 Hz, 2 Hz, and 0.2 Hz), corresponding to
sampling intervals of 50 ms, 0.5 s, and 5 s, respectively. The middle interval (0.5 s) is closest to the
repeat time (TR=0.7 s) of the experimental fMRI data used in our analyses; the first and last intervals
were chosen to be one order of magnitude faster and slower, respectively.
Sufficiently long (3000 s) simulated fMRI timeseries were generated for each network configuration
(H and J). Sample time series from a subset of these simulations before and after hemodynamic
convolution and downsampling are shown in Fig. 1D.
2.3 Instantaneous and lag-based GC identify complementary connectivity patterns
Our goal was to test if the ground truth neural connectivity matrix (A in equation 2) could be estimated
by applying iGC and dGC to the fMRI time series y. dGC was estimated from the time series with
the MVGC toolbox (GCCA mode) [1][19] and iGC was estimated from the MVAR residuals [16].
For simulations with network configuration H, iGC and dGC identified connectivity patterns that
differed in two key respects (Fig. 2A). First, iGC identified feedforward interactions at both fast and
slow timescales whereas dGC was able to estimate only the slow interactions, which occurred at a
timescale comparable to the sampling rate of the measurement. Second, dGC was able to identify
the presence of the E-I feedback connection at the slow timescale, whereas iGC entirely failed to
estimate this connection. In the Supplementary Material (Section S2), we show theoretically why
iGC can identify mutually excitatory or mutually inhibitory feedback connections, but fails to identify
the presence of reciprocal excitatory-inhibitory (E-I) feedback connections, particularly when the
connection strengths are balanced.
For simulations with network configuration J, dGC identified distinct connections depending on the
sampling rate. At the highest sampling rate (20 Hz), connections at the fastest timescales (50 ms)
were estimated most effectively, whereas at the slowest sampling rates (0.2 Hz), only the slowest
timescale connections (5 s) were estimated; intermediate sampling rates (2 Hz) estimated connections
at intermediate timescales (0.5 s). Thus, dGC estimated robustly those connections whose process
timescale was closest to the sampling rate of the data.
The first finding, that connections at fast timescales (50 ms) could not be estimated from data
sampled at much lower rates (0.2 Hz), is expected, and in line with previous findings. However, the
converse finding, that the slowest timescale connections (5 s) could not be detected at the fastest
sampling rates (20 Hz), was indeed surprising. To better understand these puzzling findings, we
performed simulations over a wide range of sampling rates for each of these connection timescales; the
results are shown in Supplementary Figure S1. dGC values (both with and without convolution with
the hemodynamic response function) systematically increased from baseline, peaked at a sampling
rate corresponding to the process timescale and decreased rapidly at higher sampling rates, matching
recent analytical findings [2]. Thus, dGC for connections at a particular timescale was highest
when the data were sampled at a rate that closely matched that timescale.
Two key conclusions emerged from these simulations. First, functional connections estimated by
dGC can be distinct from and complementary to connections identified by iGC, both spatially and
temporally. Second, connections that operate at distinct timescales can be detected by estimating
dGC on data sampled at distinct rates that match the timescales of the underlying processes.
3 Experimental Validation
We demonstrated the success of instantaneous and lag-based GC to accurately estimate functional
connectivity with simulated fMRI data. Nevertheless, application of GC measures to real fMRI data
is fraught with significant caveats, associated with hemodynamic confounds and measurement noise,
as described above. We asked whether, despite these confounds, iGC and dGC would be able to
produce reliable estimates of connectivity in real fMRI data. Moreover, as with simulated data, would
iGC and dGC reveal complementary patterns of connectivity that varied reliably with different task
conditions?
3.1 Machine learning, cross-validation and recursive feature elimination
We analyzed minimally preprocessed brain scans of 500 subjects, drawn from the Human Connectome
Project (HCP) database [12]. We analyzed data from resting state and seven other task conditions (total
of 4000 scans; Supplementary Table S1). In the main text we present results for classifying the resting
state from the language task; the other classifications are reported in the Supplementary Material.
The language task involves subjects listening to short segments of stories and evaluating semantic
content in the stories. This task is expected to robustly engage a network of language processing
regions in the brain. The resting state scans served as a "task-free" baseline, for comparison.
Brain volumes were parcellated with a 14-network atlas [21] (see Supplementary Material Section
S3; Supplementary Table S2). Network time series were computed by averaging time series across all
voxels in a given network using Matlab and SPM8. These multivariate network time series were then
fit with an MVAR model (Supplementary Material Section S1). Model order was determined with
the Akaike Information Criterion for each subject, was typically 1, and did not change with further
downsampling of the data (see next section). The MVAR model fit was then used to estimate both an
instantaneous connectivity matrix using iGC (Fx?y ) and a lag-based connectivity matrix using dGC
(Fx?y ).
The connection strengths in these matrices were used as feature vectors in a linear classifier based on
support vector machines (SVMs) for high dimensional predictor data. We used Matlab's fitclinear
function, optimizing hyperparameters using a 5-fold approach: by estimating hyperparameters with
five sets of 100 subjects in turn, and measuring classification accuracies with the remaining 400
subjects; the only exception was for the classification analysis with averaging GC matrices (Fig. 3B)
for which classification was run with default hyperparameters (regularization strength = 1/(cardinality
of training-set), ridge penalty). The number of features for iGC-based classification was 91 (upper
triangular portion of the symmetric 14?14 iGC matrix) and for dGC-based classification was 182
(all entries of the 14?14 dGC matrix, barring self-connections on the main diagonal). Based on
these functional connectivity features, we asked if we could reliably predict the task condition (e.g.
language versus resting). Classification performance was tested with leave-one-out and k-fold cross-validation. We also assessed the significance of the classification accuracy with permutation testing
[14] (Supplementary Material, Section S4).
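A minimal version of this classification step can be sketched with scikit-learn as a stand-in for Matlab's fitclinear; the feature extraction mirrors the counts quoted above (91 iGC and 182 dGC features for 14 nodes), and the synthetic data in any usage are illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def igc_features(mats):
    """Upper triangle of each symmetric matrix: 91 features for 14 nodes."""
    iu = np.triu_indices(mats.shape[1], k=1)
    return mats[:, iu[0], iu[1]]

def dgc_features(mats):
    """All off-diagonal entries: 182 features for 14 nodes."""
    mask = ~np.eye(mats.shape[1], dtype=bool)
    return mats[:, mask]

def cv_accuracy(features, labels, k=10):
    """Mean k-fold cross-validation accuracy of a linear SVM."""
    clf = LinearSVC(C=1.0, dual=False, max_iter=5000)
    return cross_val_score(clf, features, labels, cv=k).mean()
```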
Finally, we wished to identify a key set of connections that permitted accurately classifying task
from resting states. To accomplish this, we applied a two-stage recursive feature elimination (RFE)
algorithm [5], which identified a minimal set of features that provided maximal cross validation
accuracy (generalization performance). Details are provided in the Supplementary Material (Section
S5, Supplementary Figs. S2-S3).
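The feature-selection step can be approximated with scikit-learn's RFECV, a single-stage stand-in for the two-stage RFE procedure of [5]: the weakest SVM features are dropped recursively, and the feature count with the best cross-validated accuracy is retained:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFECV

def select_connections(features, labels, k=5):
    """Recursively eliminate the weakest SVM features; keep the subset
    with the best k-fold cross-validated accuracy."""
    clf = LinearSVC(C=1.0, dual=False, max_iter=5000)
    rfe = RFECV(clf, step=1, cv=k, scoring="accuracy")
    rfe.fit(features, labels)
    return rfe.support_, rfe.ranking_  # boolean mask of kept features, ranks
```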
Figure 3: Classification based on GC connectivity estimates in real data. (A) Leave-one-out
classification accuracies for different GC measures for the 14-network parcellation (left) and the
90-node parcellation (right). Within each group, the first two bars represent the classification accuracy
with dGC and iGC respectively. The third bar is the classification accuracy with fGC (see equation 1).
Chance: 50% (two-way classification). Error-bars: Clopper-Pearson binomial confidence intervals.
(B) Classification accuracy when the classifier is tested on average GC matrices, as a function of
number of subjects being averaged (see text for details).
3.2 Instantaneous and lag-based GC reliably distinguish task from rest
Both iGC and dGC connectivity were able to distinguish task from resting state significantly above
chance (Fig. 3A). Average leave-one-out cross validation accuracy was 80.0% with iGC and 83.4%
with dGC (Fig. 3A, left). Both iGC and dGC classification exhibited high precision and recall at
identifying language task (precision= 0.81, recall= 0.78 for iGC and precision= 0.85, recall= 0.81 for
dGC). k-fold (k=10) cross-validation accuracy was also similar for both the GC measures (79.4% for
iGC and 83.7% for dGC).
dGC and iGC are complementary measures of linear dependence, by their definition. We asked if
combining them would produce better classification performance. We combined dGC and iGC in two
ways. First, we performed classification after pooling features (connectivity matrices) across both
dGC and iGC ("iGC ∪ dGC"). Second, we estimated the full GC measure (Fx,y), which is a direct
sum of dGC and iGC estimates (see equation 1). Both of these approaches yielded marginally higher
classification accuracies (88.2% for iGC ∪ dGC and 84.6% for fGC) than dGC or iGC alone.
Next, we asked if classification would be more accurate if we averaged the GC measures across a
few subjects, to remove uncorrelated noise (e.g. measurement noise) in connectivity estimates. For
this, the data were partitioned into two groups of 250 subjects: a training (T) group and a test (S)
group. The classifier was trained on group T and the classifier prediction was tested by averaging GC
matrices across several folds of S, each fold containing a few (m = 2, 4, 5, 10, or 25) subjects. Prediction
accuracy for both dGC and iGC reached ~90% with averaging as few as two subjects' GC matrices,
and reached ~100% with averaging 10 subjects' matrices (Fig. 3B).
We also tested if these classification accuracies were brain atlas or cognitive task specific. First, we
tested an alternative atlas with 90 functional nodes based on a finer regional parcellation of the 14
functional networks [21]. Classification accuracies for iGC and fGC improved (87.9% and 90.8%,
respectively), and for dGC remained comparable (81.4%), to the 14 network case (Fig. 3A, right).
Second, we performed the same GC-based classification analysis for six other tasks drawn from the
HCP database (Supplementary Table S1). We discovered that all of the remaining six tasks could be
classified from the resting state with accuracy comparable to the language versus resting classification
(Supplementary Fig. S4).
Finally, we asked how iGC and dGC classification accuracies would compare to those of other
functional connectivity estimators. For example, partial correlations (PC) have been proposed
as a robust measure of functional connectivity in previous studies [22]. Classification accuracies
for PC varied between 81-96% across tasks (Supplementary Fig. S5B). PC?s better performance
is expected: estimators based on same-time covariance are less susceptible to noise than those
based on lagged covariance, a result we derive analytically in the Supplementary Material (Section
S6). Also, when classifying language task versus rest, PC and iGC relied on largely overlapping
connections (~60% overlap) whereas PC and dGC relied on largely non-overlapping connections
(~25% overlap; Supplementary Fig. S5C). These results highlight the complementary nature of PC
and dGC connectivity. Moreover, we demonstrate, both with simulations and with real-data, that
Figure 4: Maximally discriminative connections identified with RFE (A) (Top) iGC connections
that were maximally discriminative between the language task and resting state, identified using
recursive feature elimination (RFE). Darker gray shades denote more discriminative connections
(higher beta weights) (Bottom) RFE curves, with classification accuracy plotted as a function of the
number of remaining features. The dots mark the elbow-points of the RFE curves, corresponding
to the optimal number of discriminative connections. (B) Same as in (A), except that RFE was
performed on dGC connectivity matrices with data sampled at 1x, 3x, 5x, and 7x of the original
sampling interval (TR=0.72 s). Non-zero value at (i, j) corresponds to a connection from node j to
node i (column to row).
classification accuracy with GC typically increased with more scan timepoints, consistent with GC
being an information theoretic measure (Supplementary Fig. S6).
These superior classification accuracies show that, despite conventional caveats for estimating GC
with fMRI data, both iGC and dGC yield functional connectivity estimates that are reliable across
subjects. Moreover, dGC's lag-based functional connectivity provides a robust feature space for
classifying brain states into task or rest. In addition, we found that dGC connectivity can be used to
predict task versus rest brain states with near-perfect (>95-97%) accuracy, by averaging connectivity
estimates across as few as 10 subjects, further confirming the robustness of these estimates.
3.3 Characterizing brain functional networks at distinct timescales
Recent studies have shown that brain regions, across a range of species, operate at diverse timescales.
For example, a recent calcium imaging study demonstrated the occurrence of fast (~100 ms) and
slow (~1 s) functional interactions in mouse cortex [17]. In non-human primates, cortical brain
regions operate at a hierarchy of intrinsic timescales, with the sensory cortex operating at faster
timescales compared to prefrontal cortex [13]. In the resting human brain, cortical regions organize
into a hierarchy of functionally-coupled networks characterized by distinct timescales [24]. It is
likely that these characteristic timescales of brain networks are also modulated by task demands. We
asked if the framework presented in our study could characterize brain networks operating at distinct
timescales across different tasks (and rest) from fMRI data.
We had already observed, in simulations, that instantaneous and lag-based GC measures identified
functional connections that operate at different timescales (Fig. 2A). We asked if these measures
could identify connections at fast versus slow timescales (compared to TR=0.72s) that were specific
to task versus rest, from fMRI recordings. To identify these task-specific connections, we performed
recursive feature elimination (described in Supplementary Material, Section S5) with the language
task and resting state scans, separately with iGC and dGC features (connections). Prior to analysis
of real data, we validated RFE by applying it to estimate key differences in two simulated networks
(Supplementary Material Fig. S2 and Fig. S3). RFE accurately identified connections that differed in
simulation "ground truth": specifically, differences in fast timescale connections were identified by
iGC, and in slow timescale connections by dGC.
When applied to the language task versus resting state fMRI data, RFE identified a small subset of
18(/91) connections based on iGC (Fig. 4A), and an overlapping but non-identical set of 17(/182)
connections based on dGC (Fig. 4B); these connections were key to distinguishing task (language)
from resting brain states. Specifically, the highest iGC beta weights, corresponding to the most
discriminative iGC connections, occurred among various cognitive control networks, including the
anterior and posterior salience networks, the precuneus and the visuospatial network (Fig. 5A). Some
of these connections were also detected by dGC. Nevertheless, the highest dGC beta weights occurred
for connections to and from the language network, for example from the language network to dorsal
default mode network and from the precuneus to the language network (Fig. 5B). Notably, these
latter connections were important for classification based on dGC, but not based on iGC. Moreover,
iGC identified a connection between the language network and the basal ganglia whereas dGC, in
addition, identified the directionality of the connection, as being from the language network to the
basal ganglia. In summary, dGC and iGC identified several complementary connections, but dGC
alone identified many connections with the language network, indicating that slow processes in this
network significantly distinguished language from resting states.
Next, we tested whether estimating dGC after systematically downsampling the fMRI time series
would permit identifying maximally discriminative connections at progressively slower timescales.
To avoid degradation of GC estimates because of fewer numbers of samples with downsampling
(by decimation), we concatenated the different downsampled time series to maintain an identical
total number of samples. RFE was applied to GC estimates based on data sampled at different rates:
1.4 Hz, 0.5 Hz, 0.3 Hz and 0.2 Hz corresponding to 1x, 3x, 5x, and 7x of TR (sampling period of
0.72 s, 2.16 s, 3.6 s and 5.04 s), respectively. RFE with dGC identified 17(/182) key connections
at each of these timescales (Fig. 4B). Interestingly, some connections manifested in dGC estimates
across all sampling rates. For instance, the connection from the precuneus to the language network
was important for classification across all sampling rates (Fig. 5C). On the other hand, connections
between the language network and various other networks manifested at specific sampling rates only.
For instance an outgoing connection from the language network to the basal ganglia manifested only
at the 1.4 Hz sampling rate, to the visuospatial network and default mode networks only at 0.5 Hz, to
the higher-visual network only at 0.2-0.3 Hz, and an incoming connection from the anterior salience
only at 0.2 Hz. None of these connections were identified by the iGC classifier (compare Fig. 5A
and 5C). Similar timescale generic and timescale specific connections were observed in other tasks
as well (Supplementary Fig. S7). Despite downsampling, RFE accuracies were significantly above
chance, although accuracies decreased at lower sampling rates (Fig. 4 lower panels) [20]. Thus, dGC
identified distinct connectivity profiles for data sampled at different timescales, without significantly
compromising classification performance.
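The downsample-and-concatenate step above can be sketched as follows; using all m interleaved decimation phases is one reading of "concatenated the different downsampled time series", and the seams between concatenated segments introduce small discontinuities into the subsequent MVAR fit:

```python
import numpy as np

def downsample_concat(ts, m):
    """ts: (time, nodes) array. Return the m phase-shifted decimations
    (every m-th sample, offsets 0..m-1) concatenated along time, so the
    total sample count is preserved at 1/m the sampling rate."""
    n = (ts.shape[0] // m) * m  # drop the ragged tail
    return np.concatenate([ts[off:n:m] for off in range(m)], axis=0)
```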
Finally, we sought to provide independent evidence to confirm whether these network connections
operated at different timescales. For this, we estimated the average cross coherence (Supplementary
Material, Section S7) between the fMRI time series of two connections from the language network
that were identified by RFE exclusively at 0.2-0.3 Hz (language to higher visual) and 0.5 Hz (language
to visuospatial) sampling rates, respectively (Fig. 5C). Each connection exhibited an extremum in the
coherence plot at a frequency which closely matched the respective connection's timescale (Fig. 5D).
These findings, from experimental data, provide empirical validation to our simulation results, which
indicate that estimating dGC on downsampled data is a tenable approach for identifying functional
connections that operate at specific timescales.
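The cross-coherence check can be sketched with scipy's Welch-based coherence estimator (a stand-in for the procedure of Supplementary Section S7): the frequency at which the magnitude-squared coherence between two network time series peaks should match the connection's timescale:

```python
import numpy as np
from scipy.signal import coherence

def peak_coherence_freq(x, y, fs, nperseg=256):
    """Frequency (Hz) at which the magnitude-squared coherence between
    x and y peaks, skipping the DC bin."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return f[np.argmax(cxy[1:]) + 1]
```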
4 Conclusions
These results contain three novel insights. First, we show that two measures of conditional linear
dependence, instantaneous and directed Granger-Geweke causality, provide robust measures of
functional connectivity in the brain, resolving over a decade of controversy in the field [23][22].
Second, functional connections identified by iGC and dGC carry complementary information, both
in simulated and in real fMRI recordings. In particular, dGC is a powerful approach for identifying
reciprocal excitatory-inhibitory connections, which are easily missed by iGC and other same-time
correlation based metrics like partial correlations [22]. Third, when processes at multiple timescales
exist in the data, our results show that downsampling the time series to different extents provides an
effective method for recovering connections at these distinct timescales.
Our simulations highlight the importance of capturing emergent timescales in simulations of neural
data. For instance, a widely-cited study [22] employed purely feedforward connectivity matrices with
a 50 ms neural timescale in their simulations, and argued that functional connections are not reliably
inferred with GC on fMRI data. However, such connectivity matrices preclude the occurrence of
Figure 5: Connectivity at different timescales. (A-B) Discriminative connections identified
exclusively by iGC (teal), exclusively by dGC (blue), or by both (yellow). Each connection is
represented as a band going from a source node on the left to a destination node on the right. (C)
(Top) Discriminative connections identified by dGC, exclusively at different sampling intervals (1x,
3x, 5x, 7x TR). (D) (Left) Directed connection between language network and visuospatial network
identified by dGC with fMRI data sampled at 0.5 Hz (sampling interval, 3x TR). (Right) Directed
connection between language network and higher visual network identified by dGC with fMRI data
sampled at 0.3 Hz (sampling interval, 5x TR). (Lower plots) Cross coherence between respective
network time series. Shaded area: Frequencies from Fs /2 to Fs , where Fs is the sampling rate of the
fMRI timeseries from which dGC was estimated.
slower, behaviorally relevant timescales of seconds, which readily emerge in the presence of feedback
connections, both in simulations [8][15] and in the brain [17][24]. Our simulations explicitly
incorporated these slow timescales to show that connections at these timescales could be robustly
estimated with GC on simulated fMRI data. Moreover, we show that such slow interactions also occur
in human brain networks. Our approach is particularly relevant for studies that seek to investigate
dynamic functional connectivity with slow sampling techniques, such as fMRI or calcium imaging.
Our empirical validation of the robustness of GC measures, by applying machine learning to fMRI
data from 500 subjects (and 4000 functional scans), is widely relevant for studies that seek to
apply GC to estimate directed functional networks from fMRI data. Although scanner noise or
hemodynamic confounds can influence GC estimates in fMRI data [20][4], our results demonstrate
that dGC contains enough directed connectivity information for robust prediction, reaching over
95% validation accuracy with averaging even as few as 10 subjects' connectivity matrices (Fig. 3B).
These results strongly indicate the existence of slow information flow networks in the brain that
can be meaningfully inferred from fMRI data. Future work will test if these functional networks
influence behavior at distinct timescales.
Acknowledgments. This research was supported by a Wellcome Trust DBT-India Alliance
Intermediate Fellowship, a SERB Early Career Research award, a Pratiksha Trust Young Investigator
award, a DBT-IISc Partnership program grant, and a Tata Trusts grant (all to DS). We would like to
thank Hritik Jain for help with data analysis.
References
[1] L. Barnett and A. K. Seth. The MVGC multivariate Granger causality toolbox: A new approach to Granger-causal inference. Journal of Neuroscience Methods, 223:50-68, 2014.
[2] L. Barnett and A. K. Seth. Detectability of Granger causality for subsampled continuous-time neurophysiological processes. Journal of Neuroscience Methods, 275:93-121, 2017.
[3] A. M. Bastos, J. Vezoli, C. A. Bosman, J.-M. Schoffelen, R. Oostenveld, J. R. Dowdall, P. De Weerd,
H. Kennedy, and P. Fries. Visual areas exert feedforward and feedback influences through distinct frequency
channels. Neuron, 85(2):390?401, 2015.
[4] C. Chang, M. E. Thomason, and G. H. Glover. Mapping and correction of vascular hemodynamic latency in the BOLD signal. NeuroImage, 43(1):90-102, 2008.
[5] F. De Martino, G. Valente, N. Staeren, J. Ashburner, R. Goebel, and E. Formisano. Combining multivariate
voxel selection and support vector machines for mapping and classification of fmri spatial patterns.
Neuroimage, 43(1):44?58, 2008.
[6] M. Dhamala, G. Rangarajan, and M. Ding. Analyzing information flow in brain networks with nonparametric Granger causality. NeuroImage, 41(2):354-362, 2008.
[7] K. J. Friston, A. Mechelli, R. Turner, and C. J. Price. Nonlinear responses in fmri: the balloon model,
volterra kernels, and other hemodynamics. NeuroImage, 12(4):466?477, 2000.
[8] S. Ganguli, J. W. Bisley, J. D. Roitman, M. N. Shadlen, M. E. Goldberg, and K. D. Miller. One-dimensional
dynamics of attention and decision making in lip. Neuron, 58(1):15?25, 2008.
[9] I. M. Gel'fand and A. M. Yaglom. Calculation of the amount of information about a random function contained in another such function. American Mathematical Society Translations, 12(1):199-246, 1959.
[10] J. Geweke. Measurement of linear dependence and feedback between multiple time series. Journal of the
American statistical association, 77(378):304?313, 1982.
[11] J. F. Geweke. Measures of conditional linear dependence and feedback between time series. Journal of the
American Statistical Association, 79(388):907?915, 1984.
[12] M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi,
M. Webster, J. R. Polimeni, et al. The minimal preprocessing pipelines for the human connectome project.
Neuroimage, 80:105?124, 2013.
[13] J. D. Murray, A. Bernacchia, D. J. Freedman, R. Romo, J. D. Wallis, X. Cai, C. Padoa-Schioppa, T. Pasternak, H. Seo, D. Lee, et al. A hierarchy of intrinsic timescales across primate cortex. Nature neuroscience,
17(12):1661?1663, 2014.
[14] M. Ojala and G. C. Garriga. Permutation tests for studying classifier performance. Journal of Machine
Learning Research, 11(Jun):1833?1863, 2010.
[15] K. Rajan and L. Abbott. Eigenvalue spectra of random matrices for neural networks. Physical review
letters, 97(18):188104, 2006.
[16] A. Roebroeck, E. Formisano, and R. Goebel. Mapping directed influence over the brain using granger
causality and fmri. Neuroimage, 25(1):230?242, 2005.
[17] C. A. Runyan, E. Piasini, S. Panzeri, and C. D. Harvey. Distinct timescales of population coding across
cortex. Nature, 548(7665):92?96, 2017.
[18] S. Ryali, K. Supekar, T. Chen, and V. Menon. Multivariate dynamical systems models for estimating causal
interactions in fmri. Neuroimage, 54(2):807?823, 2011.
[19] A. K. Seth. A matlab toolbox for granger causal connectivity analysis. Journal of Neuroscience Methods,
186(2):262?273, 2010.
[20] A. K. Seth, P. Chorley, and L. C. Barnett. Granger causality analysis of fmri bold signals is invariant to
hemodynamic convolution but not downsampling. Neuroimage, 65:540?555, 2013.
[21] W. Shirer, S. Ryali, E. Rykhlevskaia, V. Menon, and M. Greicius. Decoding subject-driven cognitive states
with whole-brain connectivity patterns. Cerebral cortex, 22(1):158?165, 2012.
[22] S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey,
and M. W. Woolrich. Network modelling methods for fmri. Neuroimage, 54(2):875?891, 2011.
[23] D. Sridharan, D. J. Levitin, and V. Menon. A critical role for the right fronto-insular cortex in switching
between central-executive and default-mode networks. Proceedings of the National Academy of Sciences,
105(34):12569?12574, 2008.
[24] D. Vidaurre, S. M. Smith, and M. W. Woolrich. Brain network dynamics are hierarchically organized in
time. Proceedings of the National Academy of Sciences, page 201705120, 2017.
10
| 6999 |@word oostenveld:1 middle:2 briefly:1 seek:3 simulation:22 covariance:2 tr:8 carry:1 configuration:13 series:29 efficacy:1 exclusively:4 contains:1 hemodynamic:13 interestingly:1 past:4 ramsey:1 com:1 anterior:2 surprising:1 lang:9 gmail:1 dx:1 must:1 readily:1 confirming:1 enables:1 webster:2 remove:1 plot:4 atlas:3 progressively:1 v:2 alone:3 mvar:4 prohibitive:1 fewer:1 reciprocal:2 smith:2 short:1 precuneus:3 caveat:2 provides:3 node:13 five:1 glover:1 mathematical:1 enterprise:1 direct:1 beta:3 theoretically:1 notably:1 indeed:1 expected:3 rapid:1 behavior:4 brain:35 resolve:1 preclude:1 electroencephalography:1 cardinality:1 elbow:1 iisc:3 estimating:7 underlying:3 moreover:6 panel:6 matched:2 project:2 provided:2 what:1 kind:1 finding:6 extremum:1 temporal:1 classifier:9 control:1 converse:1 grant:2 organize:1 before:2 local:1 switching:1 despite:3 analyzing:1 insular:1 establishing:1 exert:1 minimally:1 shaded:1 fastest:2 greicius:1 range:3 averaged:2 directed:9 acknowledgment:1 testing:1 recursive:4 bootstrap:1 procedure:1 dmn:12 area:3 empirical:3 significantly:4 inferential:1 matching:1 confidence:1 downsampled:5 dbt:2 cannot:1 convenience:1 selection:1 runyan:1 applying:4 influence:6 conventional:1 demonstrated:2 romo:1 polimeni:1 attention:1 resolution:1 identifying:5 insight:2 estimator:2 importantly:1 s6:2 coalson:1 population:2 fx:15 hierarchy:3 engage:1 akaike:1 distinguishing:1 goldberg:1 decimation:1 particularly:3 database:2 bottom:4 observed:2 role:1 ding:1 region:8 connected:1 balloon:1 highest:4 jbabdi:1 balanced:1 sal:13 piasini:1 asked:7 electrocorticography:1 dynamic:8 controversy:3 trained:2 sotiropoulos:1 segment:1 purely:2 easily:1 seth:4 emergent:1 various:5 represented:1 train:1 distinct:20 fast:11 effective:1 jain:1 detected:3 pearson:1 whose:2 lag:17 widely:5 supplementary:22 emerged:1 drawing:1 triangular:1 ability:3 timescale:18 itself:1 eigenvalue:1 analytical:1 cai:1 interaction:11 maximal:1 unresolved:1 relevant:3 
combining:2 date:1 rapidly:1 achieve:1 academy:2 crossvalidation:1 rfe:13 rangarajan:1 produce:8 perfect:1 leave:3 help:1 coupling:1 depending:1 ac:2 derive:1 measured:1 wished:1 strong:1 recovering:1 predicted:1 involves:1 indicate:2 salimi:1 synchronized:1 quantify:1 direction:1 convention:2 aud:4 closely:2 compromising:1 human:6 material:12 elimination:4 argued:3 require:1 generalization:1 khorshidi:1 im:2 ecog:1 correction:1 scanner:1 around:1 sufficiently:1 ground:8 vidaurre:1 panzeri:1 mapping:4 predict:3 claim:2 sought:3 early:1 seo:1 concurrent:1 vice:2 behaviorally:1 gaussian:1 reaching:1 avoid:1 wilson:1 ax:1 validated:1 improvement:1 martino:1 modelling:1 slowest:3 garriga:1 criticism:1 baseline:2 inference:1 ganguli:1 eliminate:1 typically:2 spurious:3 going:1 among:6 classification:34 schoffelen:1 resonance:2 spatial:2 integration:1 field:1 barring:1 beach:1 sampling:31 barnett:3 identical:2 peaked:1 fmri:56 future:3 bangalore:3 primarily:1 distinguishes:1 few:5 national:2 subsampled:1 phase:1 maintain:1 investigate:1 analyzed:2 operated:2 pc:6 held:1 accurate:2 partial:3 respective:2 re:2 plotted:1 alliance:1 causal:2 fronto:1 minimal:2 increased:2 column:2 instance:3 measuring:1 introducing:1 subset:2 euler:1 entry:1 hundred:1 comprised:1 predictor:1 characterize:1 reported:2 accomplish:1 combined:1 st:1 cited:2 systematic:1 destination:3 lee:1 decoding:1 connectome:2 mouse:1 connectivity:56 central:1 recorded:1 woolrich:2 containing:2 prefrontal:1 cognitive:4 external:1 american:3 de:2 bold:3 coding:1 coordinated:3 explicitly:1 bg:6 vi:10 performed:5 analyze:1 weerd:1 red:1 portion:1 recover:2 reached:2 relied:2 ass:1 accuracy:28 largely:2 characteristic:1 miller:2 yield:1 identify:8 confounds:4 yellow:1 accurately:3 produced:1 marginally:1 none:1 served:1 kennedy:1 finer:1 cation:2 classified:1 ashburner:1 definition:1 acquisition:1 frequency:3 involved:1 invasive:1 associated:1 sampled:10 manifest:1 recall:3 geweke:5 organized:1 amplitude:1 
higher:7 dt:1 day:1 permitted:1 response:9 improved:1 maximally:3 strongly:1 stage:4 correlation:5 d:1 hand:4 trust:3 nonlinear:1 overlapping:3 lack:1 mode:4 indicated:1 reveal:1 gray:1 menon:3 usa:1 roitman:1 validity:1 contain:1 nichols:1 hence:2 inspiration:1 regularization:1 spatially:1 symmetric:1 analytically:1 semantic:1 freq:2 during:2 self:1 m:18 leftmost:1 criterion:1 yaglom:1 complete:1 demonstrate:4 ridge:1 theoretic:1 ranging:1 instantaneous:17 novel:4 superior:1 functional:39 spiking:1 physical:1 volume:1 cerebral:1 association:2 fare:1 occurred:3 resting:14 functionally:2 measurement:5 significant:4 refer:1 versa:2 s5:2 goebel:2 centre:3 language:27 had:1 dot:1 access:1 cortex:7 operating:4 multivariate:7 closest:2 recent:3 posterior:1 optimizing:1 fgc:4 driven:1 manifested:3 harvey:1 success:1 employed:2 period:1 signal:5 resolving:1 full:2 multiple:2 sundaresan:1 infer:1 match:2 faster:2 characterized:1 cross:9 long:2 calculation:1 mali:1 award:2 schematic:1 prediction:3 devarajan:1 dgc:73 metric:1 grounded:1 represent:1 kernel:1 addition:3 whereas:6 separately:1 fellowship:1 interval:11 decreased:2 source:3 rest:7 operate:7 exhibited:2 regional:1 markedly:1 eigenspectra:1 subject:20 recording:7 hz:31 pooling:1 meaningfully:1 flow:2 sridharan:2 near:1 presence:3 intermediate:4 revealed:1 feedforward:4 clopper:1 enough:1 fit:2 identified:24 regarding:1 listening:1 whether:3 six:2 glasser:1 s7:2 bastos:1 penalty:1 vascular:1 f:3 matlab:3 latency:1 detailed:1 involve:1 amount:1 s4:2 nonparametric:1 hue:1 ten:1 band:1 wellestablished:1 svms:1 generate:1 exist:1 canonical:1 millisecond:5 inhibitory:4 s3:3 visuospatial:4 neuroscience:7 estimated:22 blue:2 diverse:3 fischl:1 detectability:1 levitin:1 rajan:1 group:5 key:7 basal:3 nevertheless:2 drawn:3 preprocessed:1 abbott:1 tenable:1 imaging:4 graph:1 sum:2 run:1 letter:1 powerful:1 missed:1 oscillation:1 coherence:5 decision:1 comparable:3 entirely:1 capturing:1 hi:8 distinguish:3 fold:5 yielded:1 
activity:2 strength:5 occur:4 software:1 aspect:1 extremely:1 across:16 partitioned:1 primate:2 s1:6 making:1 explained:1 invariant:1 pr:2 wellcome:1 pipeline:1 equation:3 mutually:2 remains:2 previously:1 turn:1 granger:9 mechanism:1 teal:1 studying:1 operation:1 permit:2 apply:2 prec:8 generic:1 magnetic:2 simulating:2 pasternak:1 occurrence:2 robustly:4 distinguished:1 alternative:1 robustness:2 fry:1 slower:6 convolved:2 existence:1 original:2 top:6 remaining:3 binomial:1 igc:51 instant:1 exploit:1 parcellation:4 concatenated:1 murray:1 society:1 added:1 already:1 spike:1 mechelli:1 volterra:1 primary:1 dependence:9 diagonal:1 surrogate:1 thank:1 simulated:23 seven:1 tata:1 fy:4 eigenspectrum:2 extent:1 relationship:2 gel:1 beckmann:1 downsampling:8 susceptible:1 trace:1 lagged:2 reliably:7 calcium:2 upper:1 neuron:5 convolution:4 timeseries:2 incorporated:1 precise:1 gc:51 interacting:2 varied:2 discovered:1 bisley:1 inferred:3 toolbox:3 connection:81 hour:2 nip:1 able:4 bar:3 dynamical:1 pattern:9 challenge:1 program:1 reliable:2 including:1 hemodynamics:1 overlap:2 critical:1 friston:1 residual:1 turner:1 scheme:2 firmly:1 temporally:1 jun:1 coupled:1 text:2 prior:1 understanding:2 voxels:1 review:1 permutation:2 highlight:2 versus:7 validation:11 executive:1 controversial:1 consistent:2 shadlen:1 story:2 systematically:2 classifying:4 uncorrelated:1 translation:1 row:2 excitatory:4 spm8:2 summary:1 repeat:2 last:1 free:1 supported:1 salience:2 deeper:1 understand:1 india:4 institute:3 wide:2 characterizing:3 formisano:2 emerge:1 distinctly:1 feedback:13 default:4 curve:2 valid:1 evaluating:1 cortical:2 autoregressive:1 sensory:1 author:1 made:2 preprocessing:1 voxel:1 skill:1 unreliable:2 confirm:1 active:1 incoming:1 discriminative:8 ryali:2 spectrum:1 scrambled:1 continuous:1 latent:1 decade:1 why:1 table:3 lip:1 nature:4 channel:1 robust:5 ca:1 career:1 eeg:1 fraught:1 did:1 roebroeck:1 hierarchically:1 significance:1 timescales:50 main:2 whole:1 arrow:1 
s2:4 noise:8 parcellated:1 mediate:1 sridhar:1 hyperparameters:3 profile:1 complementary:9 freedman:1 xu:1 causality:8 fig:31 differed:2 slow:18 darker:1 precision:3 sub:4 fails:1 neuroimage:9 timepoints:1 hrf:3 third:3 s5c:1 young:1 supekar:1 minute:2 remained:1 shade:1 specific:6 showing:1 bosman:1 decay:2 evidence:1 essential:1 intrinsic:2 effectively:2 importance:1 magnitude:3 conditioned:2 demand:1 chen:1 likely:1 ganglion:3 neurophysiological:2 visual:4 failed:1 hcp:2 contained:1 chang:1 corresponds:3 truth:8 chance:3 conditional:5 goal:1 price:1 content:1 change:2 directionality:1 included:1 typical:2 determined:1 operates:2 except:1 averaging:7 specifically:2 classi:2 degradation:2 total:2 specie:1 discriminate:1 andersson:1 experimental:3 wallis:1 exception:1 indicating:1 puzzling:1 ojala:1 support:2 mark:1 scan:8 assessed:1 modulated:1 dorsal:1 indian:3 latter:1 investigator:1 partnership:1 evaluate:1 outgoing:1 tested:6 |
6,631 | 7 | 377
EXPERIMENTAL DEMONSTRATIONS OF
OPTICAL NEURAL COMPUTERS
Ken Hsu, David Brady, and Demetri Psaltis
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
We describe two experiments in optical neural computing. In the first
a closed optical feedback loop is used to implement auto-associative image
recall. In the second a perceptron-like learning algorithm is implemented with
photorefractive holography.
INTRODUCTION
The hardware needs of many neural computing systems are well matched
with the capabilities of optical systems [1,2,3]. The high interconnectivity
required by neural computers can be simply implemented in optics because
channels for optical signals may be superimposed in three dimensions with
little or no cross coupling. Since these channels may be formed holographically,
optical neural systems can be designed to create and maintain interconnections
very simply.
Thus the optical system designer can to a large extent
avoid the analytical and topological problems of determining individual
interconnections for a given neural system and constructing physical paths
for these interconnections.
An archetypical design for a single layer of an optical neural computer is
shown in Fig. 1. Nonlinear thresholding elements, neurons, are arranged on
two dimensional planes which are interconnected via the third dimension by
holographic elements. The key concerns in implementing this design involve
the need for suitable nonlinearities for the neural planes and high capacity,
easily modifiable holographic elements. While it is possible to implement the
neural function using entirely optical nonlinearities, for example using etalon
arrays [4], optoelectronic two dimensional spatial light modulators (2D SLMs)
suitable for this purpose are more readily available. and their properties,
i.e. speed and resolution, are well matched with the requirements of neural
computation and the limitations imposed on the system by the holographic
interconnections [5,6]. Just as the main advantage of optics in connectionist
machines is the fact that an optical system is generally linear and thus
allows the superposition of connections, the main disadvantage of optics is
that good optical nonlinearities are hard to obtain. Thus most SLMs are
optoelectronic with a non-linearity mediated by electronic effects. The need for
optical nonlinearities arises again when we consider the formation of modifiable
optical interconnections, which must be an all optical process. In selecting
© American Institute of Physics 1988
a holographic material for a neural computing application we would like to
have the capability of real-time recording and slow erasure. Materials such
as photographic film can provide this only with an impractical fixing process.
Photorefractive crystals are nonlinear optical materials that promise to have
a relatively fast recording response and long term memory [4,5,6,7,8].
[Figure 1 schematic: planes of neurons interconnected through a Fourier lens, a holographic medium, and a second Fourier lens.]

Figure 1. Optical neural computer architecture.
In this paper we describe two experimental implementations of optical
neural computers which demonstrate how currently available optical devices
may be used in this application. The first experiment we describe involves an
optical associative loop which uses feedback through a neural plane in the form
of a pinhole array and a separate thresholding plane to implement associative
regeneration of stored patterns from correlated inputs. This experiment
demonstrates the input-output dynamics of an optical neural computer similar
to that shown in Fig. 1, implemented using the Hughes Liquid Crystal Light
Valve. The second experiment we describe is a single neuron optical perceptron
implemented with a photorefractive crystal. This experiment demonstrates
how the learning dynamics of long term memory may be controlled optically.
By combining these two experiments we should eventually be able to construct
high capacity adaptive optical neural computers.
OPTICAL ASSOCIATIVE LOOP
A schematic diagram of the optical associative memory loop is shown in
Fig. 2. It is comprised of two cascaded Vander Lugt correlators [9]. The input
section of the system from the threshold device P1 through the first hologram
P2 to the pinhole array P3 forms the first correlator. The feedback section
from P3 through the second hologram P4 back to the threshold device P1
forms the second correlator. An array of pinholes sits on the back focal plane
of L2, which coincides with the front focal plane of L3. The purpose of the
pinholes is to link the first and the second (reversed) correlator to form a closed
optical feedback loop [10].
There are two phases in operating this optical loop, the learning phase
and the recal phase. In the learning phase, the images to be stored are
spatially multiplexed and entered simultaneously on the threshold device. The
thresholded images are Fourier transformed by the lens Ll. The Fourier
spectrum and a plane wave reference beam interfere at the plane P2 and
record a Fourier transform hologram. This hologram is moved to plane P4
as our stored memory. We then reconstruct the images from the memory to
form a new input to make a second Fourier transform hologram that will stay
at plane P2.
This completes the learning phase.

In the recalling phase an input is imaged on the threshold device. This image is correlated with the reference images in the hologram at P2. If the correlation between the input and one of the stored images is high, a bright peak appears at one of the pinholes. This peak is sampled by the pinhole to reconstruct the stored image from the hologram at P4. The reconstructed beam is then imaged back to the threshold device to form a closed loop. If the overall optical gain in the loop exceeds the loss, the loop signal will grow until the threshold device is saturated. In this case, we can cut off the external input image and the optical loop will be latched at the stable memory.

Figure 2. All-optical associative loop. The threshold device is an LCLV, and the holograms are thermoplastic plates.
The key elements in this optical loop are the holograms, the pinhole array,
and the threshold device. If we put a mirror [10] or a phase conjugate mirror [7,11]
at the pinhole plane P3 to reflect the correlation signal back through the
system then we only need one hologram to form a closed loop. The use of two
holograms, however, improves system performance. We make the hologram at
P2 with a high pass characteristic so that the input section of the loop has
high spectral discrimination. On the other hand we want the images to be
reconstructed with high fidelity to the original images. Thus the hologram at
plane P4 must have broadband characteristics. We use a diffuser to achieve
this when making this hologram. Fig. 3a shows the original images. Fig. 3b
and Fig. 3c are the images reconstructed from first and second holograms,
respectively. As desired, Fig. 3b is a high pass version of the stored image
while Fig. 3c is broadband .
Each of the pinholes at the correlation plane P3 has a diameter of 60
μm. The separations between the pinholes correspond to the separations of
the input images at plane P1. If one of the stored images appears at P1 there
will be a bright spot at the corresponding pinhole on plane P3. If the input
image shifts to the position of another image the correlation peak will also
Figure 3. (a) The original images. (b) The reconstructed images from the high-pass hologram P2. (c) The reconstructed images from the band-pass hologram P4.
shift to another pinhole. But if the shift is not an exact image spacing the
correlation peak can not pass the pinhole and we lose the feedback signal.
Therefore this is a loop with "discrete" shift invariance. Without the pinholes
the cross-correlation noise and the auto-correlation peak will be fed back to
the loop together and the reconstructed images won't be recognizable. There
is a compromise between the pinhole size and the loop performance. Small
pinholes allow good memory discrimination and sharp reconstructed images,
but can cut the signal to below the level that can be detected by the threshold
device and reduce the tolerance of the system to shifts in the input. The
function of the pinhole array in this system might also be met by a nonlinear
spatial light modulator, in which case we can achieve full shift invariance [12].
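A one-dimensional toy correlator (illustrative only; the pattern and spacing are made up) shows why the invariance is "discrete": sampling the correlation plane only at multiples of the image spacing passes a peak when the input shift is an exact multiple of that spacing, and blocks it otherwise.

```python
# 1-D sketch of the pinhole plane: cross-correlate a probe against a stored
# image, then sample the correlation only at multiples of the image spacing.

def cross_correlate(probe, stored):
    n = len(probe)
    return [sum(probe[(i + s) % n] * stored[i] for i in range(n))
            for s in range(n)]

def pinhole_peaks(corr, spacing):
    # Keep only the correlation values that line up with a pinhole.
    return [corr[s] for s in range(0, len(corr), spacing)]

stored = [0, 1, 1, 0, 0, 0, 0, 0]      # one stored "image"
spacing = 4                            # pinhole (image) spacing
probe_ok = stored[-spacing:] + stored[:-spacing]   # shifted by one spacing
probe_bad = stored[-1:] + stored[:-1]              # shifted by a fraction

print(pinhole_peaks(cross_correlate(probe_ok, stored), spacing))   # -> [0, 2]
print(pinhole_peaks(cross_correlate(probe_bad, stored), spacing))  # -> [1, 0]
```

For the fractionally shifted probe the true correlation peak (value 2) falls between pinholes, so no feedback signal survives.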
The threshold device at plane P1 is a Hughes Liquid Crystal Light Valve.
The device has a resolution of 16 lp/mm and a uniform aperture of 1 inch
diameter. This gives us about 160,000 neurons at P1. In order to compensate
for the optical loss in the loop, which is on the order of 10⁻⁵, we need the
neurons to provide gain on the order of 10⁵. In our system this is achieved
by placing a Hamamatsu image intensifier at the write side of the LCLV.
Since the microchannel plate of the image intensifier can give gains of 10⁴, the
combination of the LCLV and the image intensifier can give gains of 10⁶ with
sensitivity down to nW/cm². The optical gain in the loop can be adjusted by
changing the gain of the image intensifier.
Since the activity of neurons and the dynamics of the memory loop is
a continuously evolving phenomenon, we need to have a real time device to
monitor and record this behavior. We do this by using a prism beam splitter
to take part of the read out beam from the LCLV and image it onto a CCD
camera. The output is displayed on a CRT monitor and also recorded on a
video tape recorder. Unfortunately, in a paper we can only show static pictures
taken from the screen. We put a window at the CCD plane so that each time
we can pick up one of the stored images. Fig. 4a shows the read out image
Figure 4. (a) The external input to the optical loop. (b) The feedback image
superimposed with the input image. (c) The latched loop image.
from the LCLV which comes from the external input shifted away from its
stored position. This shift moves its correlation peak so that it does not match
the position of the pinhole. Thus there is no feedback signal going through
the loop. If we cut off the input image the read out image will die out with a
characteristic time on the order of 50 to 100 ms, corresponding to the response
time of the LCLV. Now we shift the input image around trying to search for
the correct position. Once the input image comes close enough to the correct
position the correlation peak passes through the right pinhole, giving a strong
feedback signal superimposed with the external input on the neurons. The
total signal then goes through the feedback loop and is amplified continuously
until the neurons are saturated. Depending on the optical gain of the neurons
the time required for the loop to reach a stable state is between 100 ms and
several seconds. Fig. 4b shows the superimposed images of the external input
and the loop images. While the feedback signal is shifted somewhat with
respect to the input, there is sufficient correlation to induce recall. If the
neurons have enough gain then we can cut off the input and the loop stays in
its stable state. Otherwise we have to increase the neuron gain until the loop
can sustain itself. Fig. 4c shows the image in the loop with the input removed
and the memory latched. If we enter another image into the system, again
we have to shift the input within the window to search the memory until we
are close enough to the correct position. Then the loop will evolve to another
stable state and give a correct output.
The input images do not need to match exactly with the memory. Since
the neurons can sense and amplify the feedback signal produced by a partial
match between the input and a stored image, the stored memory can grow
in the loop. Thus the loop has the capability to recall the complete memory
from a partial input. Fig. 5a shows the image of a half face input into the
system. Fig. 5b shows the overlap of the input with the complete face from
the memory. Fig. 5c shows the stable state of the loop after we cut off the
external input. In order to have this associative behavior the input must have
enough correlation with the stored memory to yield a strong feedback signal.
For instance, the loop does not respond to the the presentation of a picture of
Figure 5. (a) Partial face used as the external input. (b) The superimposed
images of the partial input with the complete face recalled by the loop. (c)
The complete face latched in the loop.
Figure 6. (a) Rotated image used as the external input. (b) The superimposed
images of the input with the recalled image from the loop. (c) The image
latched in the optical loop.
a person not stored in memory.
Another way to demonstrate the associative behavior of the loop is to use
a rotated image as the input. Experiments show that for a small rotation the
loop can recognize the image very quickly. As the input is rotated more, it
takes longer for the loop to reach a stable state. If it is rotated too much,
depending on the neuron gain, the input won't be recognizable. Fig. 6a shows
the rotated input. Fig. 6b shows the overlap of loop image with input after
we turn on the loop for several seconds. Fig. 6c shows the correct memory
recalled from the loop after we cut the input. There is a trade-off between the
degree of distortion at the input that the system can tolerate and its ability
to discriminate against patterns it has not seen before. In this system the
feedback gain (which can be adjusted through the image intensifier) controls
this trade-off.
PHOTOREFRACTIVE PERCEPTRON
Holograms are recorded in photorefractive crystals via the electrooptic
modulation of the index of refraction by space charge fields created by
the migration of photogenerated charge [13,14]. Photorefractive crystals are
attractive for optical neural applications because they may be used to store
long term interactions between a very large number of neurons. While
photorefractive recording does not require a development step, the fact that
the response is not instantaneous allows the crystal to store long term traces
of the learning process. Since the photorefractive effect arises from the
reversible redistribution of a fixed pool of charge among a fixed set of optically
addressable trapping sites, the photorefractive response of a crystal does not
deteriorate with exposure. Finally, the fact that photorefractive holograms
may extend over the entire volume of the crystal has previously been shown to
imply that as many as 10¹⁰ interconnections may be stored in a single crystal
with the independence of each interconnection guaranteed by an appropriate
spatial arrangement of the interconnected neurons [5,6].
In this section we consider a rudimentary optical neural system which uses
the dynamics of photorefractive crystals to implement perceptron-like learning.
The architecture of this system is shown schematically in Fig. 7. The input
to the system, x, corresponds to a two dimensional pattern recorded from a
video monitor onto a liquid crystal light valve. The light valve transfers this
pattern on a laser beam. This beam is split into two paths which cross in a
photorefractive crystal. The light propagating along each path is focused such
that an image of the input pattern is formed on the crystal. The images along
both paths are of the same size and are superposed on the crystal, which is
assumed to be thinner than the depth of focus of the images. The intensity
diffracted from one of the two paths onto the other by a hologram stored in
the crystal is isolated by a polarizer and spatially integrated by a single output
detector. The thresholded output of this detector corresponds to the output
of a neuron in a perceptron.
[Figure 7 schematic: laser, polarizing beam splitter, LCLV driven by a TV monitor, imaging lenses, photorefractive crystal, piezoelectric mirror, polarizer, and detector under computer control.]

Figure 7. Photorefractive perceptron. PB is a polarizing beam splitter. L1
and L2 are imaging lenses. WP is a quarter waveplate. PM is a piezoelectric
mirror. P is a polarizer. D is a detector. Solid lines show electronic control.
Dashed lines show the optical path.
The ith component of the input to this system corresponds to the intensity
in the ith pixel of the input pattern. The interconnection strength, w_i, between
the ith input and the output neuron corresponds to the diffraction efficiency
of the hologram taking one path into the other at the ith pixel of the image
plane. While the dynamics of w_i can be quite complex in some geometries
and crystals, it is possible to show from the band transport model for the
photorefractive effect that under certain circumstances the time development
of w_i may be modeled by

    √w_i(t) = (√w_max / τ) ∫_{-∞}^{t} e^(-(t-s)/τ) m(s) e^(iφ(s)) ds        (1)

where m(s) and φ(s) are the modulation depth and phase, respectively, of the
interference pattern formed in the crystal between the light in the two paths [15].
τ is a characteristic time constant for the crystal; τ is inversely proportional to
the intensity incident on the ith pixel of the crystal. Using Eqn. (1) it is possible
to make w_i(t) take any value between 0 and w_max by properly exposing the
ith pixel of the crystal to an appropriate modulation depth and intensity. The
modulation depth between two optical beams can be adjusted by a variety of
simple mechanisms. In Fig. 7 we choose to control m(t) using a mirror mounted
on a piezoelectric crystal. By varying the frequency and the amplitude of
oscillations in the piezoelectric crystal we can electronically set both m(t) and
φ(t) over a continuous range without changing the intensity in the optical
beams or interrupting readout of the system. With this control over m(t) it
is possible via the dynamics described in Eqn. (1) to implement any learning
algorithm for which w_i can be limited to the range (0, w_max).
The architecture of Fig. 7 classifies input patterns into two classes
according to the thresholded output of the detector. The goal of a learning
algorithm for this system is to correctly classify a set of training patterns. The
perceptron learning algorithm involves simply testing each training vector and
adding training vectors which yield too Iowan output to the weight vector
and subtracting training vectors which yield too high an output from the
weight vector until all training vectors are correctly classified 16. This training
algorithm is described by the equation L\wi = aXj where alpha is positive
(negative) if the output for x is too low (high). An optical analog of this
method is implemented by testing each training pattern and exposing the
crystal with each incorrectly classified pattern. Training vectors that yield
a high output when a low output is desired are exposed at zero modulation
depth . Training vectors that yield a low output when high output is desired
are exposed at a modulation depth of one.
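As a software analogue of this procedure, the sketch below trains a perceptron whose weights are clipped to [0, w_max], mirroring the constraint that a diffraction efficiency lies between 0 and w_max. The exposures are idealized here as additive ±αx_i updates rather than the exponential exposure dynamics, and the patterns, learning rate, and threshold are illustrative, not the experimental values.

```python
# Perceptron with non-negative, bounded weights, mimicking the constraint
# that each weight is a diffraction efficiency in [0, W_MAX].

W_MAX = 1.0

def output(w, x, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def train(patterns, labels, theta=1.0, alpha=0.2, epochs=50):
    w = [0.0] * len(patterns[0])
    for _ in range(epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            err = y - output(w, x, theta)
            if err != 0:
                errors += 1
                # "Record" (err > 0) or "erase" (err < 0), staying in bounds.
                w = [min(W_MAX, max(0.0, wi + alpha * err * xi))
                     for wi, xi in zip(w, x)]
        if errors == 0:
            break
    return w

patterns = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]]
labels = [1, 1, 0, 0]          # made-up classes, loosely like grouping 1-2 vs 3-4
w = train(patterns, labels)
print([output(w, x, 1.0) for x in patterns])   # -> [1, 1, 0, 0]
```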
The weight vector for the (k+1)th iteration when erasure occurs in the kth
iteration is given by

    w_i(k+1) = e^(-2Δt/τ) w_i(k)        (2)

where we assume that the exposure time, Δt, is much less than τ. Note that
since τ is inversely proportional to the intensity in the ith pixel, the change in
w_i is proportional to the ith input. The weight vector at the (k+1)th iteration
when recording occurs in the kth iteration is given by
    w_i(k+1) = e^(-2Δt/τ) w_i(k) + 2√(w_i(k) w_max) e^(-Δt/τ) (1 - e^(-Δt/τ)) + w_max (1 - e^(-Δt/τ))²        (3)

To lowest order in Δt/τ and w_i(k)/w_max, Eqn. (3) yields

    w_i(k+1) = w_i(k) + 2√(w_i(k) w_max) (Δt/τ) + w_max (Δt/τ)²        (4)
Once again the change in w_i is proportional to the ith input.
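The iteration formulas can be checked numerically. The sketch below assumes, consistent with Eqns. (2)-(4), that the hologram *amplitude* √w_i relaxes exponentially toward its limit with time constant τ. Squaring one recording step then reproduces Eqn. (3) exactly, an erasure step reproduces Eqn. (2), and for Δt ≪ τ and w_i ≪ w_max the result approaches Eqn. (4). The numerical values are arbitrary test inputs.

```python
import math

W_MAX = 1.0   # saturation diffraction efficiency (illustrative value)

def record_step(w, dt, tau):
    # Amplitude sqrt(w) relaxes toward sqrt(W_MAX) during an exposure of
    # length dt at unit modulation depth; efficiency is the amplitude squared.
    a = math.sqrt(w) * math.exp(-dt / tau) \
        + math.sqrt(W_MAX) * (1.0 - math.exp(-dt / tau))
    return a * a

def erase_step(w, dt, tau):
    # Zero modulation depth: the amplitude simply decays (Eqn. 2).
    a = math.sqrt(w) * math.exp(-dt / tau)
    return a * a

def eq3(w, dt, tau):
    e = math.exp(-dt / tau)
    return (w * e * e
            + 2.0 * math.sqrt(w * W_MAX) * e * (1.0 - e)
            + W_MAX * (1.0 - e) ** 2)

def eq4(w, dt, tau):
    r = dt / tau
    return w + 2.0 * math.sqrt(w * W_MAX) * r + W_MAX * r * r

w, dt, tau = 9e-4, 0.01, 1.0
assert abs(record_step(w, dt, tau) - eq3(w, dt, tau)) < 1e-12         # Eqn. (3)
assert abs(erase_step(0.5, 0.3, tau) - 0.5 * math.exp(-0.6)) < 1e-12  # Eqn. (2)
assert abs(eq3(w, dt, tau) - eq4(w, dt, tau)) < 1e-4  # Eqn. (4) approximation
```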
We have implemented the architecture of Fig. 7 using a SBN60:Ce crystal
provided by the Rockwell International Science Center. We used the 488 nm
line of an argon ion laser to record holograms in this crystal. Most of the
patterns we considered were laid out on 10 x 10 grids of pixels, thus allowing
100 input channels. Ultimately, the number of channels which may be achieved
using this architecture is limited by the number of pixels which may be imaged
onto the crystal with a depth of focus sufficient to isolate each pixel along the
length of the crystal.
Figure 8. Training patterns.
Figure 9. Output in the second training cycle.
Using the variation on the perceptron learning algorithm described above
with fixed exposure times Δt_r and Δt_e for recording and erasing, we have
been able to correctly classify various sets of input patterns. One particular
set which we used is shown in Fig. 8. In one training sequence, we grouped
patterns 1 and 2 together with a high output and patterns 3 and 4 together
with a low output. After all four patterns had been presented four times,
the system gave the correct output for all patterns. The weights stored in
the crystal were corrected seven times, four times by recording and three by
erasing. Fig. 9a shows the output of the detector as pattern 1 is recorded in
the second learning cycle. The dashed line in this figure corresponds to the
threshold level. Fig. 9b shows the output of the detector as pattern 3 is erased
in the second learning cycle.
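The procedure lends itself to a simple software sketch. Everything below is a hypothetical stand-in for the optical hardware: the 4-pixel patterns replace the 10 x 10 masks of Fig. 8, the weight vector plays the role of the per-pixel diffraction efficiencies (kept nonnegative and bounded by w_max), and each recording or erasing exposure changes w_i in proportion to the ith input, as derived above.

```python
import numpy as np

w_max = 1.0      # saturation value of a stored weight (hypothetical units)
step = 0.2       # stands in for the exposure per correction
threshold = 1.0  # detector threshold (the dashed line of Fig. 9)

# Hypothetical input patterns; rows 1-2 should give a high output, rows 3-4 low.
patterns = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
], dtype=float)
targets = np.array([True, True, False, False])  # True = high output desired

w = np.full(4, 0.5)
for _ in range(100):  # training cycles
    for x, want_high in zip(patterns, targets):
        out = w @ x
        if want_high and out <= threshold:
            # record: each exposed pixel's weight grows in proportion to its input
            w = np.clip(w + step * x, 0.0, w_max)
        elif not want_high and out > threshold:
            # erase: each exposed pixel's weight decays in proportion to its input
            w = np.clip(w - step * x, 0.0, w_max)

outputs = patterns @ w
assert all((out > threshold) == want for out, want in zip(outputs, targets))
```

With these particular patterns the loop settles after a handful of corrections, mirroring the small number of recording and erasing exposures reported for the experiment.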
CONCLUSION
The experiments described in this paper demonstrate how neural network
architectures can be implemented using currently available optical devices. By
combining the recall dynamics of the first system with the learning capability
of the second, we can construct sophisticated optical neural computers.
ACKNOWLEDGEMENTS
The authors thank Ratnakar Neurgaonkar and Rockwell International for
supplying the SBN crystal used in our experiments and Hamamatsu Photonics
K.K. for assistance with image intesifiers. We also thank Eung Gi Paek and
Kelvin Wagner for their contributions to this research.
This research is supported by the Defense Advanced Research Projects
Agency, the Army Research Office, and the Air Force Office of Scientific
Research.
REFERENCES
1. Y. S. Abu-Mostafa and D. Psaltis, Scientific American, pp. 88-95, March, 1987.
2. D. Psaltis and N. H. Farhat, Opt. Lett., 10(2), 98 (1985).
3. A. D. Fisher, R. C. Fukuda, and J. N. Lee, Proc. SPIE 625, 196 (1986).
4. K. Wagner and D. Psaltis, Appl. Opt., 26(23), pp. 5061-5076 (1987).
5. D. Psaltis, D. Brady, and K. Wagner, Applied Optics, March 1988.
6. D. Psaltis, J. Yu, X. G. Gu, and H. Lee, Second Topical Meeting on Optical Computing, Incline Village, Nevada, March 16-18, 1987.
7. A. Yariv, S.-K. Kwong, and K. Kyuma, SPIE Proc. 613-01 (1986).
8. D. Z. Anderson, Proceedings of the International Conference on Neural Networks, San Diego, June 1987.
9. A. B. Vander Lugt, IEEE Trans. Inform. Theory, IT-10(2), pp. 139-145 (1964).
10. E. G. Paek and D. Psaltis, Opt. Eng., 26(5), pp. 428-433 (1987).
11. Y. Owechko, G. J. Dunning, E. Marom, and B. H. Soffer, Appl. Opt., 26(10), 1900 (1987).
12. D. Psaltis and J. Hong, Opt. Eng., 26, 10 (1987).
13. N. V. Kukhtarev, V. B. Markov, S. G. Odulov, M. S. Soskin, and V. L. Vinetskii, Ferroelectrics, 22, 949 (1979).
14. J. Feinberg, D. Heiman, A. R. Tanguay, and R. W. Hellwarth, J. Appl. Phys., 51, 1297 (1980).
15. T. J. Hall, R. Jaura, L. M. Connors, P. D. Foote, Prog. Quan. Electr., 10, 77 (1985).
16. F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington (1961).
On the Power of Neural Networks for
Solving Hard Problems
Jehoshua Bruck
Joseph W. Goodman
Information Systems Laboratory
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
Abstract
This paper deals with a neural network model in which each neuron
performs a threshold logic function. An important property of the model
is that it always converges to a stable state when operating in a serial
mode [2,5]. This property is the basis of the potential applications of the
model such as associative memory devices and combinatorial optimization
[3,6].
One of the motivations for use of the model for solving hard combinatorial
problems is the fact that it can be implemented by optical devices and
thus operate at a higher speed than conventional electronics.
The main theme in this work is to investigate the power of the model for
solving NP-hard problems [4,8], and to understand the relation between
speed of operation and the size of a neural network. In particular, it will
be shown that for any NP-hard problem the existence of a polynomial
size network that solves it implies that NP=co-NP. Also, for Traveling
Salesman Problem (TSP), even a polynomial size network that gets an
ε-approximate solution does not exist unless P=NP.
The above results are of great practical interest, because right now it is
possible to build neural networks which will operate fast but are limited
in the number of neurons.
1 Background
The neural network model is a discrete time system that can be represented by
a weighted and undirected graph. There is a weight attached to each edge of
the graph and a threshold value attached to each node (neuron) of the graph.
© American Institute of Physics 1988
The order of the network is the number of nodes in the corresponding graph.
Let N be a neural network of order n; then N is uniquely defined by (W, T)
where:
• W is an n × n symmetric matrix; W_ij is equal to the weight attached to edge (i, j).
• T is a vector of dimension n; T_i denotes the threshold attached to node i.
Every node (neuron) can be in one of two possible states, either 1 or -1. The
state of node i at time t is denoted by Vi(t). The state of the neural network at
time t is the vector V(t).
The next state of a node is computed by:
V_i(t+1) = sgn(H_i(t)) = 1 if H_i(t) >= 0, and -1 otherwise,    (1)

where

H_i(t) = Σ_{j=1}^n W_ij V_j(t) - T_i.
The next state of the network, i.e. V(t + 1), is computed from the current
state by performing the evaluation (1) at a subset of the nodes of the network,
to be denoted by S. The modes of operation are determined by the method
by which the set S is selected in each time interval. If the computation is
performed at a single node in any time interval, i.e. 1S 1= 1, then we will say
that the network is operating in a serial mode; if 1S 1= n then we will say that
that the network is operating in a fully parallel mode. All the other cases, i.e.
1 <I S 1< n will be called parallel modes of operation. The set S can be chosen
at random or according to some deterministic rule.
A state V(t) is called stable iff V(t) = sgn(WV(t) - T), i.e. there is no
change in the state of the network no matter what the mode of operation is.
One of the most important properties of the model is the fact that it always
converges to a stable state while operating in a serial mode. The main idea in
the proof of the convergence property is to define a so called energy function
and to show that this energy function is nondecreasing when the state of the
network changes. The energy function is:
E(t) = V^t(t) W V(t) - 2 V^t(t) T    (2)
An important note is that originally the energy function was defined such that
it is nonincreasing [5]; we changed it such that it will comply with some known
graph problems (e.g. Min Cut).
A neural network will always get to a stable state which corresponds to a
local maximum in the energy function. This suggests the use of the network as a
device for performing a local search algorithm for finding a maximal value of the
energy function [6]. Thus, the network will perform a local search by operating
in a random and serial mode. It is also known [2,9] that maximization of E
associated with a given network N in which T = 0 is equivalent to finding
the Minimum Cut in N. Actually, many hard problems can be formulated as
maximization of a quadratic form (e.g. TSP [6)) and thus can be mapped to a
neural network.
2 The Main Results
The set of stable states is the set of possible final solutions that one will get
using the above approach. These final solutions correspond to local maxima of
the energy function but do not necessarily correspond to global optima of the
corresponding problem. The main question is: suppose we allow the network to
operate for a very long time until it converges; can we do better than just getting
some local optimum? i.e., is it possible to design a network which will always
find the exact solution (or some guaranteed approximation) of the problem?
Definition: Let X be an instance of a problem. Then |X| denotes the size of X, that is, the number of bits required to represent X. For example, for X being an instance of TSP, |X| is the number of bits needed to represent the matrix of the distances between cities.
Definition: Let N be a neural network. Then |N| denotes the size of the network N, namely, the number of bits needed to represent W and T.
Let us start by defining the desired setup for using the neural network as a
model for solving hard problems.
Consider an optimization problem L. We would like to have for every instance X of L a neural network N_x with the following properties:

• Every local maximum of the energy function associated with N_x corresponds to a global optimum of X.

• The network N_x is small; that is, |N_x| is bounded by some polynomial in |X|.

Moreover, we would like to have an algorithm, to be denoted by A_L, which, given an instance X ∈ L, generates the description for N_x in polynomial (in |X|) time.
Now, we will define the desired setup for using the neural network as a model
for finding approximate solutions for hard problems.
Definition: Let E_glo be the global maximum of the energy function, and let E_loc be a local maximum of the energy function. We will say that a local maximum is an ε-approximate of the global iff:

(E_glo - E_loc) / E_glo <= ε

The setup for finding approximate solutions is similar to the one for finding exact solutions. Let ε > 0 be some fixed number. We would like to have a network N_xε in which every local maximum is an ε-approximate of the global and the global corresponds to an optimum of X. The network N_xε should be small; namely, |N_xε| should be bounded by a polynomial in |X|. Also, we would like to have an algorithm A_Lε such that, given an instance X ∈ L, it generates the description for N_xε in polynomial (in |X|) time.
Note that in both the exact case and the approximate case we do not put any
restriction on the time it takes the network to converge to a solution (it can be
exponential) .
At this point the reader should convince himself that the above description is what he imagined as the setup for using the neural network model for solving hard problems, because that is what the following definition is about.
Definition: We will say that a neural network for solving (or finding an ε-approximation of) a problem L exists if the algorithm A_L (or A_Lε) which generates the description of N_x (or N_xε) exists.
The main results in the paper are summarized by the following two propositions. The first one deals with exact solutions of NP-hard problems while the
second deals with approximate solutions to TSP.
Proposition 1 Let L be an NP-hard problem. Then the existence of a neural
network for solving L implies that NP = co-NP.
Proposition 2 Let ε > 0 be some fixed number. The existence of a neural
network for finding an ε-approximate solution to TSP implies that P=NP.
Both (P=NP) and (NP=co-NP) are believed to be false statements, hence,
we can not use the model in the way we imagine.
The key observation for proving the above propositions is the fact that a
single iteration in a neural network takes time which is bounded by a polynomial
in the size of the instance of the corresponding problem. The proofs of the above
two propositions follow directly from known results in complexity theory and
should not be considered as new results in complexity theory.
3 The Proofs
Proof of Proposition 1: The proof follows from the definition of the classes
NP and co-NP, and Lemma 1. The definitions and the lemma appear in Chapters 15 and 16 in [8] and also in Chapters 2 and 7 in [4].
Lemma 1 If the complement of an NP-complete problem is in NP,
then NP=co-NP.
Let L be an NP-hard problem. Suppose there exists a neural network that solves L. Let L' be an NP-complete problem. By definition, L' can be polynomially reduced to L. Thus, for every instance X ∈ L', we have a neural network such that from any of its global maxima we can efficiently recognize whether X is a 'yes' or a 'no' instance of L'.
We claim that we have a nondeterministic polynomial time algorithm to decide that a given instance X ∈ L' is a 'no' instance. Here is how we do it: for X ∈ L' we construct the neural network that solves it by using the reduction to L. We then check every state of the network to see if it is a local maximum (that is done in polynomial time). In case it is a local maximum, we check if the instance is a 'yes' or a 'no' instance (this is also done in polynomial time).
Thus, we have a nondeterministic polynomial time algorithm to recognize any 'no' instance of L'. Thus, the complement of the problem L' is in NP. But L' is an NP-complete problem; hence, from Lemma 1 it follows that NP=co-NP. □
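The step the argument leans on — that checking whether a given state is a local maximum takes polynomial time — is easy to make concrete. The sketch below (hypothetical two-neuron instance) tests all n single-neuron flips against the energy E(V) = V^t W V - 2 T^t V:

```python
import numpy as np

def energy(W, T, V):
    return V @ W @ V - 2 * T @ V

def is_local_maximum(W, T, V):
    """Return True iff no single-neuron flip increases the energy.

    The loop performs n flips, each costing O(n^2) arithmetic, so the whole
    check is polynomial in the order of the network.
    """
    base = energy(W, T, V)
    for i in range(len(V)):
        flipped = V.copy()
        flipped[i] = -flipped[i]
        if energy(W, T, flipped) > base + 1e-12:
            return False
    return True

# Two neurons joined by a positive weight: agreeing states are the local maxima.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
T = np.zeros(2)
agree = np.array([1.0, 1.0])       # E = 2
disagree = np.array([1.0, -1.0])   # E = -2; flipping either neuron improves it

assert is_local_maximum(W, T, agree)
assert not is_local_maximum(W, T, disagree)
```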
Proof of Proposition 2: The result is a corollary of the results in [7]; the reader can refer to it for a more complete presentation.
The proof uses the fact that the Restricted Hamiltonian Circuit (RHC) is an
NP-complete problem.
Definition of RHC: Given a graph G = (V, E) and a Hamiltonian path in G.
The question is whether there is a Hamiltonian circuit in G?
It is proven in [7] that RHC is NP-complete.
Suppose there exists a polynomial size neural network for finding an ε-approximate solution to TSP. Then it can be shown that an instance X ∈ RHC can be reduced to an instance X' ∈ TSP, such that in the network N_x'ε the following holds: if the Hamiltonian path that is given in X corresponds to a local maximum in N_x'ε, then X is a 'no' instance; else, if it does not correspond to a local maximum in N_x'ε, then X is a 'yes' instance. Note that we can check for locality in polynomial time.
Hence, the existence of N_x'ε for all X' ∈ TSP implies that we have a polynomial time algorithm for RHC. □
?
4 Concluding Remarks
1. In Proposition 1 we let |W| and |T| be arbitrary but bounded by a
polynomial in the size of a given instance of a problem. If we assume
that |W| and |T| are fixed for all instances, then a similar result to
Proposition 1 can be proved without using complexity theory; this result
appears in [1].
2. The network which corresponds to TSP, as suggested in [6], cannot solve
the TSP with guaranteed quality. However, one should note that all the
analysis in this paper is a worst case type of analysis. So, it might be that
there exist networks that have good behavior on the average.
3. Proposition 1 is general to all NP-hard problems while Proposition 2 is
specific to TSP. Both propositions hold for any type of networks in which
an iteration takes polynomial time.
4. Clearly, every network has an algorithm which is equivalent to it, but an
algorithm does not necessarily have a corresponding network. Thus, if we
do not know of an algorithmic solution to a problem we also will not be able
to find a network which solves the problem. If one believes that the neural
network model is a good model (e.g. it is amenable to implementation with
optics), one should develop techniques to program the network to perform
an algorithm that is known to have some guaranteed good behavior.
Acknowledgement: Support of the U.S. Air Force Office of Scientific Research
is gratefully acknowledged.
References
[1] Y. Abu-Mostafa, Neural Networks for Computing?, in Neural Networks for Computing, edited by J. Denker (AIP Conference Proceedings no. 151, 1986).
[2] J. Bruck and J. Sanz, A Study on Neural Networks, IBM Tech Rep, RJ
5403, 1986. To appear in International Journal of Intelligent Systems, 1988.
[3] J. Bruck and J. W. Goodman, A Generalized Convergence Theorem for
Neural Networks and its Applications in Combinatorial Optimization, IEEE
First ICNN, San-Diego, June 1987.
[4] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to
the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
[5] J. J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proc. Nat. Acad. Sci. USA, Vol. 79, pp. 2554-2558, 1982.
[6] J. J. Hopfield and D. W. Tank, Neural Computations of Decisions in Optimization Problems, BioI. Cybern. 52, pp. 141-152, 1985.
[7] C. H. Papadimitriou and K. Steiglitz, On the Complexity of Local Search
for the Traveling Salesman Problem, SIAM J. on Comp., Vol. 6, No.1, pp.
76-83, 1977.
[8] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Inc., 1982.
[9] J. C. Picard and H. D. Ratliff, Minimum Cuts and Related Problems, Networks, Vol 5, pp. 357-370, 1974.
6,633 | 700 | Neural Network Model Selection Using
Asymptotic Jackknife Estimator and
Cross-Validation Method
Yong Liu
Department of Physics and
Institute for Brain and Neural Systems
Box 1843, Brown University
Providence, RI, 02912
Abstract
Two theorems and a lemma are presented about the use of jackknife estimator and the cross-validation method for model selection. Theorem 1
gives the asymptotic form for the jackknife estimator. Combined with the
model selection criterion, this asymptotic form can be used to obtain the
fit of a model. The model selection criterion we used is the negative of the
average predictive likehood, the choice of which is based on the idea of the
cross-validation method. Lemma 1 provides a formula for further exploration of the asymptotics of the model selection criterion. Theorem 2 gives
an asymptotic form of the model selection criterion for the regression case,
when the parameters optimization criterion has a penalty term. Theorem
2 also proves the asymptotic equivalence of Moody's model selection criterion (Moody, 1992) and the cross-validation method, when the distance
measure between response y and regression function takes the form of a
squared difference.
1 INTRODUCTION
Selecting a model for a specified problem is the key to generalization based on the
training data set. In the context of neural network, this corresponds to selecting
an architecture. There has been a substantial amount of work in model selection
(Lindley, 1968; Mallows, 1973; Akaike, 1973; Stone, 1977; Atkinson, 1978; Schwartz,
1978; Zellner, 1984; MacKay, 1991; Moody, 1992; etc.). In Moody's paper (Moody,
1992), the author generalized Akaike Information Criterion (AIC) (Akaike, 1973)
in the regression case and introduced the term effective number of parameters. It
is thus of great interest to see what the link between this criterion and the cross-validation method (Stone, 1974) is and what we can gain from it, given the fact
that AIC is asymptotically equivalent to the cross-validation method (Stone, 1977).
In the method of cross-validation (Stone, 1974), a data set, which has a data point
deleted from the original training data set, is used to estimate the parameters of a
model by optimizing a parameters optimization criterion. The optimal parameters
thus obtained are called the jackknife estimator (Miller, 1974). Then the predictive
likelihood of the deleted data point is calculated, based on the estimated parameters. This is repeated for each data point in the original training data set. The fit
of the model, or the model selection criterion, is chosen as the negative of the average of these predictive likelihoods. However, the computational cost of estimating
parameters for different data point deletion is expensive. In section 2, we obtained
an asymptotic formula (theorem 1) for the jackknife estimator based on optimizing
a parameters optimization criterion with one data point deleted from the training
data set. This somewhat relieves the computational cost mentioned above. This
asymptotic formula can be used to obtain the model selection criterion by plugging
it into the criterion. Furthermore, in section 3, we obtained the asymptotic form
of the model selection criterion for the general case (Lemma 1) and for the special
case when the parameters optimization criterion has a penalty term (theorem 2).
We also proved the equivalence of Moody's model selection criterion (Moody, 1992)
and the cross-validation method (theorem 2). Only sketchy proofs are given when
these theorems and lemma are introduced. The details of the proofs are given in
section 4.
2 APPROXIMATE JACKKNIFE ESTIMATOR
Let the parameters optimization criterion, with data set w = {(x_i, y_i), i = 1, ..., n} and parameters θ, be C_w(θ), and let w_{-i} denote the data set with the ith data point deleted from w. If we denote θ̂ and θ̂_{-i} as the optimal parameters for the criteria C_w(θ) and C_{w_{-i}}(θ), respectively, ∇_θ as the derivative with respect to θ, and superscript t as transpose, we have the following theorem about the relationship between θ̂ and θ̂_{-i}.
Theorem 1 If the criterion function C_w(θ) is an infinite-order differentiable function and its derivatives are bounded around θ̂, then the estimator θ̂_{-i} (also called the jackknife estimator (Miller, 1974)) can be approximated as

θ̂_{-i} - θ̂ ≈ (∇_θ∇_θ^t C_w(θ̂) - ∇_θ∇_θ^t C_i(θ̂))^{-1} ∇_θ C_i(θ̂)    (1)

in which C_i(θ) = C_w(θ) - C_{w_{-i}}(θ).
Proof. Use the Taylor expansion of the equation ∇_θ C_{w_{-i}}(θ̂_{-i}) = 0 around θ̂, and ignore terms higher than the second order.
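As a quick sanity check (not part of the paper), the approximation can be compared with exact leave-one-out refits in a toy Gaussian-mean model, where both sides have closed forms; for this quadratic criterion the Taylor expansion is exact:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=50)
n = len(y)

# C_w(theta) = sum_i log N(y_i; theta, 1) with a flat prior; maximized at the mean.
theta_hat = y.mean()

# Jackknife step of equation (1): here the Hessians are
# grad grad' C_w = -n and grad grad' C_i = -1, and grad C_i = y_i - theta_hat,
# so theta_{-i} - theta_hat ≈ (-n + 1)^{-1} (y_i - theta_hat).
approx_loo = theta_hat + (y - theta_hat) / (1 - n)

# Exact leave-one-out estimates (mean of the remaining n-1 points).
exact_loo = (y.sum() - y) / (n - 1)

assert np.allclose(approx_loo, exact_loo)
```

Each θ̂_{-i} is obtained from θ̂ by a single linear-algebra step, which is the computational saving over refitting the model n times.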
Example 1: Using the generalized maximum likelihood method from Bayesian analysis¹ (Berger, 1985), if π(θ) is the prior on the parameters and the observations are mutually independent, for which the distribution is modeled as y|x ~ f(y|x, θ), the parameters optimization criterion is

C_w(θ) = Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ) + log π(θ)    (2)

Thus C_i(θ) = log f(y_i|x_i, θ). If we ignore the influence of the deleted data point in the denominator of equation 1, we have

θ̂_{-i} - θ̂ ≈ (∇_θ∇_θ^t C_w(θ̂))^{-1} ∇_θ log f(y_i|x_i, θ̂)    (3)
Example 2: In the special case of example 1, with noninformative prior π(θ) = 1, the criterion is the ordinary log-likelihood function; thus

θ̂_{-i} - θ̂ ≈ [Σ_{(x_j,y_j)∈w} ∇_θ∇_θ^t log f(y_j|x_j, θ̂)]^{-1} ∇_θ log f(y_i|x_i, θ̂).    (4)
3 CROSS-VALIDATION METHOD AND MODEL SELECTION CRITERION
Hereafter we use the negative of the average predictive likelihood, or,
T_m(w) = -(1/n) Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ̂_{-i})    (5)
as the model selection criterion, in which n is the size of the training data set w, m ∈ M denotes a parametric probability model f(y|x, θ), and M is the set of all the models in consideration. It is well known that T_m(w) is an unbiased estimator of r(θ_0, θ̂(·)), the risk of using the model m and estimator θ̂, when the true parameters are θ_0 and the training data set is w (Stone, 1974; Efron and Gong, 1983; etc.), i.e.,

r(θ_0, θ̂(·)) = E{T_m(w)} = E{-log f(y|x, θ̂(w))} = E{ -(1/k) Σ_{(x_j,y_j)∈w_n} log f(y_j|x_j, θ̂(w)) }    (6)
in which w_n = {(x_j, y_j), j = 1, ..., k} is the test data set, θ̂(·) is an implicit function of the training data set w, and it is the estimator we decide to use after we have observed the training data set w. The expectation above is taken over the randomness of w, x, y, and w_n. The optimal model will be the one that minimizes this criterion. This procedure of using θ̂_{-i} and T_m(w) to obtain an estimation of risk is often called the cross-validation method (Stone, 1974; Efron and Gong, 1983).
Remark: After we have obtained θ̂ for a model, we can use equation 1 to calculate θ̂_{-i} for each i, and put the resulting θ̂_{-i} into equation 5 to get the fit of the model; thus we will be able to compare different models m ∈ M.
1 Strictly
speaking, it is a method to find the posterior mode.
Lemma 1 If the probability model f(y|x, \theta), as a function of \theta, is differentiable up to infinite order and its derivatives are bounded around \hat{\theta}, the approximation to the model selection criterion, equation 5, can be written as

    T_m(\omega) \approx -\frac{1}{n} \sum_{(x_i,y_i)\in\omega} \log f(y_i|x_i, \hat{\theta}) - \frac{1}{n} \sum_{(x_i,y_i)\in\omega} \nabla_\theta^t \log f(y_i|x_i, \hat{\theta})\,(\hat{\theta}_{-i} - \hat{\theta})    (7)
Proof. Ignoring the terms higher than the second order of the Taylor expansion of \log f(y_i|x_i, \hat{\theta}_{-i}) around \hat{\theta} will yield the result.
Example 2 (continued): Using equation 4, we have, for the model selection criterion,

    T_m(\omega) = -\frac{1}{n} \sum_{(x_i,y_i)\in\omega} \log f(y_i|x_i, \hat{\theta}) - \frac{1}{n} \sum_{(x_i,y_i)\in\omega} \nabla_\theta^t \log f(y_i|x_i, \hat{\theta})\, A^{-1}\, \nabla_\theta \log f(y_i|x_i, \hat{\theta}),    (8)
in which A = \sum_{(x_j,y_j)\in\omega} \nabla_\theta \nabla_\theta^t \log f(y_j|x_j, \hat{\theta}). If the model f(y|x, \theta) is the true one, the second term is asymptotically equal to p/n, where p is the number of parameters in the model. So, up to the factor 1/n, the model selection criterion is

    -\text{log-likelihood} + \text{number of parameters of the model}.
This is the well known Akaike's Information Criterion (AIC) (Akaike, 1973).
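A quick numerical check of this correspondence (our own sketch, not from the paper): for a correctly specified Gaussian linear model, the second term of equation 8, summed rather than averaged, reduces to \sum_i r_i^2 h_i with residuals r_i and leverages h_i, which should be close to p:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 4
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)  # true model, sigma = 1

theta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ theta
# Leverages h_i = x_i^t (X^t X)^{-1} x_i, without forming the full hat matrix.
lev = np.sum((X @ np.linalg.inv(X.T @ X)) * X, axis=1)

# Second term of equation 8 times n: -sum_i grad_i^t A^{-1} grad_i with
# A = -X^t X and grad_i = r_i x_i, i.e. sum_i r_i^2 h_i.
penalty = np.sum(resid ** 2 * lev)
print(penalty, p)  # penalty is close to p, recovering AIC
```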
Example 1 (continued): Consider the probability model

    f(y|x, \theta) = \beta \exp\Big(-\frac{1}{2\sigma^2} E(y, \eta_\theta(x))\Big)    (9)

in which \beta is a normalization factor and E(y, \eta_\theta(x)) is a distance measure between y and the regression function \eta_\theta(x). E(\cdot) as a function of \theta is assumed differentiable. Denoting^2

    U(\theta, \lambda, \omega) = \sum_{(x_i,y_i)\in\omega} E(y_i, \eta_\theta(x_i)) - 2\sigma^2 \log \pi(\theta|\lambda),

we have the following theorem,
Theorem 2 For the model specified in equation 9 and the parameters optimization criterion specified in equation 2 (example 1), under regularity conditions, the unbiased estimator of

    E\Big\{\frac{1}{k} \sum_{(x_j,y_j)\in\omega_n} E(y_j, \eta_{\hat{\theta}}(x_j))\Big\}    (10)

asymptotically equals

    \frac{1}{n} \sum_{(x_i,y_i)\in\omega} E(y_i, \eta_{\hat{\theta}}(x_i)) + \frac{1}{n} \sum_{(x_i,y_i)\in\omega} \nabla_\theta^t E(y_i, \eta_{\hat{\theta}}(x_i)) \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1} \nabla_\theta E(y_i, \eta_{\hat{\theta}}(x_i)).    (11)
2. For example, \pi(\theta|\lambda) = N_p(0, (\sigma^2/\lambda) I); this corresponds to

    U(\theta, \lambda, \omega) = \sum_{(x_i,y_i)\in\omega} E(y_i, \eta_\theta(x_i)) + \lambda \theta^2 + \mathrm{const}(\lambda, \sigma^2).
Model Selection Using Asymptotic Jackknife Estimator & Cross-Validation Method
For the case when E(y, \eta_\theta(x)) = (y - \eta_\theta(x))^2, we get, for the asymptotic equivalency of equation 11,

    \hat{E}(\hat{\theta}, \omega) + \frac{4\sigma^2}{n} \sum_{(x_i,y_i)\in\omega} \nabla_\theta^t \eta_{\hat{\theta}}(x_i) \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1} \nabla_\theta \eta_{\hat{\theta}}(x_i),    (12)

in which \omega = \{(x_i, y_i), i = 1, \ldots, n\} is the training data set, \omega_n = \{(x_j, y_j), j = 1, \ldots, k\} is the test data set, and \hat{E}(\theta, \omega) = \frac{1}{n} \sum_{(x_i,y_i)\in\omega} E(y_i, \eta_\theta(x_i)).
Proof. This result comes directly from theorem 1 and lemma 1; some asymptotic technique has to be used.
Remark: The result in equation 12 was first proposed by Moody (Moody, 1992). The effective number of parameters formulated in his paper corresponds to the summation in equation 12. Since the result in this theorem comes directly from the asymptotics of the cross-validation method and the jackknife estimator, it gives the equivalence proof between Moody's model selection criterion and the cross-validation method. The detailed proof of this theorem, presented in section 4, is in spirit the same as the one presented in Stone's paper about the proof of the asymptotic equivalence of AIC and the cross-validation method (Stone, 1977).
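As a concrete (hypothetical) instance of the remark, not taken from the paper: for ridge regression, where U(\theta, \lambda, \omega) = \sum_i (y_i - \theta^t x_i)^2 + \lambda \|\theta\|^2, the summation in equation 12 evaluates to (2\sigma^2/n)\,\mathrm{tr}((X^t X + \lambda I)^{-1} X^t X), so Moody's effective number of parameters is the trace term; the sketch below shows how it interpolates between p (no shrinkage) and 0 (infinite shrinkage):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 5
X = rng.normal(size=(n, p))

def p_eff(lam):
    # Moody's effective number of parameters for ridge regression:
    # tr((X^t X + lam*I)^{-1} X^t X); equals p at lam = 0 and tends to 0
    # as lam grows, since each eigenvalue e of X^t X contributes e/(e+lam).
    XtX = X.T @ X
    return np.trace(np.linalg.solve(XtX + lam * np.eye(p), XtX))

for lam in [0.0, 10.0, 1e6]:
    print(lam, p_eff(lam))
```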
4 DETAILED PROOF OF LEMMAS AND THEOREMS
In order to prove theorem 1, lemma 1 and theorem 2, we will present three auxiliary
lemmas first.
Lemma 2 For random variable sequences z_n and y_n, if \lim_{n\to\infty} z_n = z and \lim_{n\to\infty} y_n = z, then z_n and y_n are asymptotically equivalent.
Proof. This comes from the definition of asymptotic equivalence, because asymptotically the two random variables will behave the same as the random variable z.
Lemma 3 Consider the summation \sum_i h(z_i, y_i) g(z_i, x). If E(h(z, y)|z, x) is a constant c independent of z, y and x, then the summation is asymptotically equivalent to c \sum_i g(z_i, x).
Proof. According to the law of large numbers,

    \lim_{n\to\infty} \frac{1}{n} \sum_i h(z_i, y_i) g(z_i, x) = E(h(z, y) g(z, x)) = E(E(h(z, y)|z, x)\, g(z, x)) = c\,E(g(z, x)),

which is the same as the limit of \frac{1}{n} \sum_i g(z_i, x). Using lemma 2, we get the result of this lemma.
Lemma 4 If \eta_\theta(\cdot) and g(\theta, \cdot) are differentiable up to the second order, and the model y = \eta_\theta(z) + \epsilon with \epsilon \sim N(0, \sigma^2) is the true model, then the second derivative with respect to \theta of

    U(\theta, \lambda, \omega) = \sum_{i=1}^n (y_i - \eta_\theta(z_i))^2 + g(\theta, \lambda),

evaluated at the minimum of U, i.e., \hat{\theta}, is asymptotically independent of the random variables \{y_i, i = 1, \ldots, n\}.
Proof. Explicit calculation of the second derivative of U with respect to \theta, evaluated at \hat{\theta}, gives

    \nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega) = 2\sum_{i=1}^n \nabla_\theta \eta_{\hat{\theta}}(z_i) \nabla_\theta^t \eta_{\hat{\theta}}(z_i) - 2\sum_{i=1}^n (y_i - \eta_{\hat{\theta}}(z_i)) \nabla_\theta \nabla_\theta^t \eta_{\hat{\theta}}(z_i) + \nabla_\theta \nabla_\theta^t g(\hat{\theta}, \lambda).

As n approaches infinity, the effect of the second term in U vanishes, \hat{\theta} approaches the mean squared error estimator with an infinite amount of data points, or the true parameters \theta_0 of the model (consistency of the MSE estimator (Jennrich, 1969)), and E(y - \eta_{\hat{\theta}}(z)) approaches E(y - \eta_{\theta_0}(z)), which is 0. According to lemma 2 and lemma 3, the second term of this second derivative vanishes asymptotically. So as n approaches infinity, the second derivative of U with respect to \theta, evaluated at \hat{\theta}, approaches

    \nabla_\theta \nabla_\theta^t U(\theta_0, \lambda, \omega) = 2\sum_{i=1}^n \nabla_\theta \eta_{\theta_0}(z_i) \nabla_\theta^t \eta_{\theta_0}(z_i) + \nabla_\theta \nabla_\theta^t g(\theta_0, \lambda),

which is independent of \{y_i, i = 1, \ldots, n\}. According to lemma 2, the result of this lemma is readily obtained.
Now we give the detailed proof of theorem 1, lemma 1 and theorem 2.
Proof of Theorem 1. The jackknife estimator \hat{\theta}_{-i} satisfies \nabla_\theta C_{\omega_{-i}}(\hat{\theta}_{-i}) = 0. The Taylor expansion of the left side of this equation around \hat{\theta} gives

    \nabla_\theta C_{\omega_{-i}}(\hat{\theta}) + \nabla_\theta \nabla_\theta^t C_{\omega_{-i}}(\hat{\theta})(\hat{\theta}_{-i} - \hat{\theta}) + O(|\hat{\theta}_{-i} - \hat{\theta}|^2) = 0.

According to the definitions of \hat{\theta} and \hat{\theta}_{-i}, their difference is thus a small quantity. Also, because of the boundedness of the derivatives, we can ignore higher order terms in the Taylor expansion and get the approximation

    \hat{\theta}_{-i} - \hat{\theta} \approx -(\nabla_\theta \nabla_\theta^t C_{\omega_{-i}}(\hat{\theta}))^{-1} \nabla_\theta C_{\omega_{-i}}(\hat{\theta}).

Since \hat{\theta} satisfies \nabla_\theta C_\omega(\hat{\theta}) = 0, we can rewrite this equation and obtain equation 1.
Proof of Lemma 1. The Taylor expansion of \log f(y_i|x_i, \hat{\theta}_{-i}) around \hat{\theta} is

    \log f(y_i|x_i, \hat{\theta}_{-i}) = \log f(y_i|x_i, \hat{\theta}) + \nabla_\theta^t \log f(y_i|x_i, \hat{\theta})(\hat{\theta}_{-i} - \hat{\theta}) + O(|\hat{\theta}_{-i} - \hat{\theta}|^2).

Putting this into equation 5 and ignoring higher order terms, by the same argument as that presented in the proof of theorem 1, we readily get equation 7.
Proof of Theorem 2. Up to an additive constant dependent only on \lambda and \sigma^2, the optimization criterion, or equation 2, can be rewritten as

    C_\omega(\theta) = -\frac{1}{2\sigma^2} U(\theta, \lambda, \omega).    (13)
Now putting equations 9 and 13 into equation 3, we get

    \hat{\theta}_{-i} - \hat{\theta} \approx \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1} \nabla_\theta E(y_i, \eta_{\hat{\theta}}(x_i)).    (14)
Putting equation 14 into equation 7, we get, for the model selection criterion,

    T_m(\omega) = \frac{1}{n} \sum_{(x_i,y_i)\in\omega} \frac{1}{2\sigma^2} E(y_i, \eta_{\hat{\theta}}(x_i)) + \frac{1}{n} \sum_{(x_i,y_i)\in\omega} \frac{1}{2\sigma^2} \nabla_\theta^t E(y_i, \eta_{\hat{\theta}}(x_i)) \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1} \nabla_\theta E(y_i, \eta_{\hat{\theta}}(x_i)).    (15)
Recall the discussion associated with equation 6; now

    E\Big\{-\frac{1}{k} \sum_{(x_j,y_j)\in\omega_n} \log f(y_j|x_j, \hat{\theta})\Big\} = E\Big\{\frac{1}{k} \sum_{(x_j,y_j)\in\omega_n} \frac{1}{2\sigma^2} E(y_j, \eta_{\hat{\theta}}(x_j))\Big\};    (16)
after some simple algebra, we can obtain the unbiased estimator of equation 10. The result is equation 15 multiplied by 2\sigma^2, or equation 11. Thus we prove the first part of the theorem.
Now consider the case when

    E(y, \eta_\theta(x)) = (y - \eta_\theta(x))^2.    (17)

The second term of equation 11 now becomes

    \frac{1}{n} \sum_{(x_i,y_i)\in\omega} 4(y_i - \eta_{\hat{\theta}}(x_i))^2\, \nabla_\theta^t \eta_{\hat{\theta}}(x_i) \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1} \nabla_\theta \eta_{\hat{\theta}}(x_i).    (18)
As n approaches infinity, \hat{\theta} approaches the true parameters \theta_0, \nabla_\theta \eta_{\hat{\theta}}(x_i) approaches \nabla_\theta \eta_{\theta_0}(x_i), and E((y - \eta_{\hat{\theta}}(x))^2) asymptotically equals \sigma^2. Using lemma 4 and lemma 3, we get, for the asymptotic equivalency of equation 18,

    \frac{\sigma^2}{n} \sum_{(x_i,y_i)\in\omega} 2\nabla_\theta^t \eta_{\hat{\theta}}(x_i) \{\nabla_\theta \nabla_\theta^t U(\hat{\theta}, \lambda, \omega)\}^{-1}\, 2\nabla_\theta \eta_{\hat{\theta}}(x_i).    (19)
If we use the notation \hat{E}(\theta, \omega) = \frac{1}{n} \sum_{(x_i,y_i)\in\omega} E(y_i, \eta_\theta(x_i)), with E(y, \eta_\theta(x)) of the form specified in equation 17, we can get

    \frac{\partial}{\partial y_i} \nabla_\theta\, n\hat{E}(\theta, \omega) = -2\nabla_\theta \eta_\theta(x_i).    (20)

Combining this with equation 19 and equation 11, we can readily obtain equation 12.
5 SUMMARY
In this paper, we used asymptotics to obtain the jackknife estimator, which can be used to get the fit of a model by plugging it into the model selection criterion. Based on the idea of the cross-validation method, we used the negative of the average predictive likelihood as the model selection criterion. We also obtained the asymptotic form of the model selection criterion and proved that when the parameters optimization criterion is the mean squared error plus a penalty term, this asymptotic form is the same as the form presented by Moody (1992). This also served to prove the asymptotic equivalence of this criterion to the method of cross-validation.
Acknowledgements
The author thanks all the members of the Institute for Brain and Neural Systems,
in particular, Professor Leon N Cooper for reading the draft of this paper, and Dr.
Nathan Intrator, Michael P. Perrone and Harel Shouval for helpful comments. This
research was supported by grants from NSF, ONR and ARO.
References
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Petrov and Czaki, editors, Proceedings of the 2nd International
Symposium on Information Theory, pages 267-281.
Atkinson, A. C. (1978). Posterior probabilities for choosing a regression model.
Biometrika, 65:39-48.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. SpringerVerlag.
Efron, B. and Gong, G. (1983). A leisurely look at the bootstrap, the jackknife and
cross-validation. Amer. Stat., 37:36-48.
Jennrich, R. (1969). Asymptotic properties of nonlinear least squares estimators.
Ann. Math. Stat., 40:633-643.
Lindley, D. V. (1968). The choice of variables in multiple regression (with discussion). J. Roy. Stat. Soc., Ser. B, 30:31-66.
MacKay, D. (1991). Bayesian methods for adaptive models. PhD thesis, California
Institute of Technology.
Mallows, C. L. (1973). Some comments on Cp. Technometrics, 15:661-675.
Miller, R. G. (1974). The jackknife - a review. Biometrika, 61:1-15.
Moody, J. E. (1992). The effective number of parameters, an analysis of generalization and regularization in nonlinear learning system. In Moody, J. E., Hanson,
S. J., and Lippmann, R. P., editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann Publishers.
Schwartz, G. (1978). Estimating the dimension of a model. Ann. Stat, 6:461-464.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions
(with discussion). J. Roy. Stat. Soc., Ser. B.
Stone, M. (1977). An asymptotic equivalence of choice of model by cross-validation
and Akaike's criterion. J. Roy. Stat. Soc., Ser. B, 39(1):44-47.
Zellner, A. (1984). Posterior odds ratios for regression hypotheses: General consideration and some specific results. In Zellner, A., editor, Basic Issues in Econometrics, pages 275-305. University of Chicago Press.
Tomer Koren
Google Brain
[email protected]
Roi Livni
Princeton University
[email protected]
Yishay Mansour
Tel Aviv University and Google
[email protected]
Abstract
We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions. The loss of the online learner has two components: the first is the usual loss of the selected actions, and the second is an additional loss due to switching between actions. Our main contribution gives a tight characterization of the expected minimax regret in this setting, in terms of a complexity measure C of the underlying metric which depends on its covering numbers. In finite metric spaces with k actions, we give an efficient algorithm that achieves regret of the form \widetilde{O}(\max\{C^{1/3}T^{2/3}, \sqrt{kT}\}), and show that this is the best possible. Our regret bound generalizes previous known regret bounds for some special cases: (i) the unit-switching cost regret \widetilde{\Theta}(\max\{k^{1/3}T^{2/3}, \sqrt{kT}\}) where C = \Theta(k), and (ii) the interval metric with regret \widetilde{\Theta}(\max\{T^{2/3}, \sqrt{kT}\}) where C = \Theta(1). For infinite metric spaces with Lipschitz loss functions, we derive a tight regret bound of \widetilde{\Theta}(T^{(d+1)/(d+2)}) where d \ge 1 is the Minkowski dimension of the space, which is known to be tight even when there are no switching costs.
1 Introduction
Multi-Armed Bandit (MAB) is perhaps one of the most well studied models for learning that allows to incorporate settings with limited feedback. In its simplest form, MAB can be thought of as a game between a learner and an adversary: At first, the adversary chooses an arbitrary sequence of losses \ell_1, \ldots, \ell_T (possibly adversarially). Then, at each round the learner chooses an action i_t from a finite set of actions K. At the end of each round, the learner gets to observe her loss \ell_t(i_t), and only the loss of her chosen action. The objective of the learner is to minimize her (external) regret, defined as the expected difference between her loss, \sum_{t=1}^T \ell_t(i_t), and the loss of the best action in hindsight, i.e., \min_{i\in K} \sum_{t=1}^T \ell_t(i).
One simplification of the MAB is that it assumes that the learner can switch between actions without
any cost, this is in contrast to online algorithms that maintain a state and have a cost of switching
between states. One simple intermediate solution is to add further costs to the learner that penalize
movements between actions. (Since we compare the learner to the single best action, the adversary
has no movement and hence no movement cost.) This approach has been studied in the MAB with
unit switching costs [2, 12], where the learner is not only penalized for her loss but also pays a
unit cost for any time she switches between actions. This simple penalty implicitly advocates the
construction of algorithms that avoid frequent fluctuation in their decisions. Regulating switching has
been successfully applied to many interesting instances such as buffering problems [16], limited-delay
lossy coding [19] and dynamic pricing with patient buyers [15].
The unit switching cost assumes that any pair of actions have the same cost, which in many scenarios
is far from true. For example, consider an ice-cream vendor on a beach, where his actions are to select
a location and price. Clearly, changing location comes at a cost, while changing prices might come
with no cost. In this case we can define an interval metric (the coast line) and the movement cost is the
distance. A more involved case is a hot-dog vendor in Manhattan, which needs to select a location
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
and price. Again, it makes sense to charge a switching cost between locations according to their
distance, and in this case the Manhattan-distance seems the most appropriate. Such settings are at the
core of our model for MAB with movement cost. The authors of [24] considered a MAB problem
equipped with an interval metric, i.e, the actions are [0, 1] and the movement cost is the distance
between the actions. They proposed a new online algorithm, called the Slowly Moving Bandit (SMB)
algorithm, that achieves optimal regret bound for this setting, and applied it to a dynamic pricing
problem with patient buyers to achieve a new tight regret bound.
The objective of this paper is to handle general metric spaces, both finite and infinite. We show
how to generalize the SMB algorithm and its analysis to design optimal moving-cost algorithms
for any metric space over finite decision space. Our main result identifies an intrinsic complexity
measure of the metric space, which we call the covering/packing complexity, and give a tight
characterization of the expected movement regret in terms of the complexity of the underlying metric.
In particular, in finite metric spaces of complexity C with k actions, we give a regret bound of the form \widetilde{O}(\max\{C^{1/3}T^{2/3}, \sqrt{kT}\}) and present an efficient algorithm that achieves it. We also give a matching \widetilde{\Omega}(\max\{C^{1/3}T^{2/3}, \sqrt{kT}\}) lower bound that applies to any metric with complexity C.
We extend our results to general continuous metric spaces. For such a setting we clearly have to make some assumption about the losses, and we make the rather standard assumption that the losses are Lipschitz with respect to the underlying metric. In this setting our results depend on quite different complexity measures: the upper and lower Minkowski dimensions of the space, thus exhibiting a
phase transition between the finite case (that corresponds to Minkowski dimension zero) and the infinite case. Specifically, we give an upper bound on the regret of \widetilde{O}(T^{(d+1)/(d+2)}), where d \ge 1 is the upper Minkowski dimension. When the upper and lower Minkowski dimensions coincide (which is the case in many natural spaces, such as normed vector spaces), the latter bound matches a lower bound
of [10] that holds even when there are no switching costs. Thus, a surprising implication of our result
is that in infinite actions spaces (of bounded Minkowski dimension), adding movement costs do not
add to the complexity of the MAB problem!
Our approach extends the techniques of [24] for the SMB algorithm, which was designed to optimize
over an interval metric, which is equivalent to a complete binary Hierarchically well-Separated Tree
(HST) metric space. By carefully balancing and regulating its sampling distributions, the SMB
algorithm avoids switching between far-apart nodes in the tree and possibly incurring large movement
costs with respect to the associated metric. We show that the SMB regret guarantees are much more
general than just binary balanced trees, and give an analysis of the SMB algorithm when applied to
general HSTs. As a second step, we show that a rich class of trees, on which the SMB algorithm can
be applied, can be used to upper-bound any general metric. Finally, we reduce the case of an infinite
metric space to the finite case via simple discretization, and show that this reduction gives rise to
the Minkowski dimension as a natural complexity measure. All of these contractions turn out to be
optimal (up to logarithmic factors), as demonstrated by our matching lower bounds.
1.1 Related Work
Perhaps the most well known classical algorithm for the non-stochastic bandit problem is the Exp3 algorithm [4], which guarantees a regret of \widetilde{O}(\sqrt{kT}) without movement costs. However, general MAB algorithms give no guarantees for slow movement between actions. In fact, it is known that in the worst case \widetilde{\Theta}(T) switches between actions are expected (see [12]).
A simple case of MAB with movement cost is the uniform metric, i.e., when the distance between any
two actions is the same. This setting has seen intensive study, both in terms of analyzing optimal
regret rates [2, 12], as well as applications [16, 19, 15]. Our main technical tools for achieving lower
bounds is through the lower bound of Dekel et al. [12] that achieve such bound for this special case.
The general problem of bandits with movement costs has been first introduced in [24], where the
authors gave an efficient algorithm for a 2-HST binary balanced tree metric, as well as for evenly
spaced points on the interval. The main contribution of this paper is a generalization of these results
to general metric spaces.
There is a vast and vigorous study of MAB in continuous spaces [23, 11, 5, 10, 32]. These works
relate the change in the payoff to the change in the action. Specifically, there has been a vast research
on Lipschitz MAB with stochastic payoffs [22, 29, 30, 21, 26], where, roughly, the expected reward
is Lipschitz. For applying our results in continuous spaces we too need to assume Lipschitz losses,
2
however, our metric defines also the movement cost between actions and not only relates the losses of
similar actions. Our general findings is that in Euclidean spaces, one can achieve the same regret
bounds when movement cost is applied. Thus, the SMB algorithm can achieve the optimal regret rate.
One can model our problem as a deterministic Markov Decision Process (MDP), where the states
are the MAB actions and in every state there is an action to move the MDP to a given state (which
correspond to switching actions). The payoff would be the payoff of the MAB action associated with
the state plus the movement cost to the next state. The work of Ortner [28] studies deterministic MDP
where the payoffs are stochastic, and also allows for a fixed uniform switching cost. The work of
Even-Dar et al. [13] and it extensions [27, 33] studies a MDP where the payoffs are adversarial but
there is full information of the payoffs. Later, this work was extended to the bandit model by Neu et al.
[27]. This line of works imposes various assumptions regarding the MDP and the benchmark policies; specifically, that the MDP is "mixing" and that the policies considered have full-support stationary distributions, assumptions that clearly fail in our very specific setting.
Bayesian MAB, such as in the Gittins index (see [17]), assume that the payoffs are from some
stochastic process. It is known that when there are switching costs then the existence of an optimal
index policy is not guaranteed [6]. There have been some works on special cases with a fixed uniform
switching cost [1, 3]. The most relevant work is that of Guha and Munagala [18] which for a general
metric over the actions gives a constant approximation off-line algorithm. For a survey of switching
costs in this context see [20].
The MAB problem with movement costs is related to the literature on online algorithms and the
competitive analysis framework [8]. A prototypical online problem is the Metrical Task System
(MTS) presented by Borodin et al. [9]. In a metrical task system there are a collection of states and
a metric over the states. Similar to MAB, the online algorithm at each time step moves to a state,
incurs a movement cost according to the metric, and suffers a loss that corresponds to that state.
However, unlike MAB, in an MTS the online algorithm is given the loss prior to selecting the new
state. Furthermore, competitive analysis has a much more stringent benchmark: the best sequence of
actions in retrospect. Like most of the regret minimization literature, we use the best single action in
hindsight as a benchmark, aiming for a vanishing average regret.
One of our main technical tools is an approximation from above of a metric via a Metric Tree (i.e.,
2-HST). k-HST metrics have been vastly studied in the online algorithms starting with [7]. The main
goal is to derive a simpler metric representation (using randomized trees) that will both upper and
lower bound the given metric. The main result is to show a bound of O(log n) on the expected stretch
of any edge, and this is also the best possible [14]. It is noteworthy that for bandit learning, and in
contrast with these works, an upper bound over the metric suffices to achieve optimal regret rate. This
is since in online learning we compete against the best static action in hindsight, which does not move
at all and hence has zero movement cost. In contrast, in a MTS, where one compete against the best
dynamic sequence of actions, one needs both an upper a lower bound on the metric.
2 Problem Setup and Background
In this section we recall the setting of Multi-armed Bandit with Movement Costs introduced in [24],
and review the necessary background required to state our main results.
2.1 Multi-armed Bandits with Movement Costs
In the Multi-armed Bandits (MAB) with Movement Costs problem, we consider a game between an
online learner and an adversary continuing for T rounds. There is a set K, possibly infinite, of actions (or "arms") that the learner can choose from. The set of actions is equipped with a fixed and known metric \Delta that determines a cost \Delta(i, j) \in [0, 1] for moving between any pair of actions i, j \in K.
Before the game begins, an adversary fixes a sequence \ell_1, \ldots, \ell_T : K \to [0, 1] of loss functions assigning loss values in [0, 1] to actions in K (in particular, we assume an oblivious adversary). Then,
on each round t = 1, \ldots, T, the learner picks an action i_t \in K, possibly at random. At the end of each round t, the learner gets to observe her loss (namely, \ell_t(i_t)) and nothing else. In contrast with the standard MAB setting, in addition to the loss \ell_t(i_t) the learner suffers an additional cost due to her movement between actions, which is determined by the metric and is equal to \Delta(i_t, i_{t-1}). Thus, the total cost at round t is given by \ell_t(i_t) + \Delta(i_{t-1}, i_t).
The goal of the learner, over the course of T rounds of the game, is to minimize her expected movement-regret, which is defined as the difference between her (expected) total costs and the total costs of the best fixed action in hindsight (that incurs no movement costs); namely, the movement regret with respect to a sequence \ell_{1:T} of loss vectors and a metric \Delta equals

    \mathrm{Regret}_{\mathrm{MC}}(\ell_{1:T}, \Delta) = E\Big[\sum_{t=1}^T \ell_t(i_t) + \sum_{t=2}^T \Delta(i_t, i_{t-1})\Big] - \min_{i\in K} \sum_{t=1}^T \ell_t(i).

Here, the expectation is taken with respect to the learner's randomization in choosing the actions i_1, \ldots, i_T; notice that, as we assume an oblivious adversary, the loss functions \ell_t are deterministic and cannot depend on the learner's randomization.
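A direct way to read this definition is to evaluate it on a realized action sequence. The toy instance below (our own illustration, not from the paper) contrasts a learner that chases the instantaneously best arm with one that never moves:

```python
import numpy as np

def movement_regret(losses, metric, actions):
    """Realized movement regret of an action sequence: played losses plus
    metric switching costs, minus the total loss of the best fixed action."""
    T = len(actions)
    play = sum(losses[t][actions[t]] for t in range(T))
    move = sum(metric[actions[t - 1]][actions[t]] for t in range(1, T))
    best = losses.sum(axis=0).min()
    return play + move - best

# Two arms at distance 1 whose losses alternate: following the momentarily
# better arm pays nothing in loss but heavily in movement cost.
losses = np.array([[0.0, 1.0], [1.0, 0.0]] * 50)      # T = 100
metric = np.array([[0.0, 1.0], [1.0, 0.0]])
greedy = np.argmin(losses, axis=1)                    # switches every round
stay = np.zeros(100, dtype=int)                       # never moves
print(movement_regret(losses, metric, greedy))        # 49.0
print(movement_regret(losses, metric, stay))          # 0.0
```

Here the greedy sequence pays 99 units of movement against a best-fixed-action loss of 50, while the static arm has zero movement regret, which is why regulating switching matters.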
2.2 Basic Definitions in Metric Spaces

We recall basic notions in metric spaces that govern the regret in the MAB with movement costs setting. Throughout we assume a bounded metric space (K, \Delta), where for normalization we assume \Delta(i, j) \in [0, 1] for all i, j \in K. Given a point i \in K and \epsilon > 0, we denote by B_\epsilon(i) = \{j \in K : \Delta(i, j) \le \epsilon\} the ball of radius \epsilon around i.
The following definitions are standard.
Definition 1 (Packing numbers). A subset P \subseteq K in a metric space (K, \Delta) is an \epsilon-packing if the sets \{B_\epsilon(i)\}_{i\in P} are disjoint. The \epsilon-packing number of \Delta, denoted N^p_\epsilon(\Delta), is the maximum cardinality of any \epsilon-packing of K.
Definition 2 (Covering numbers). A subset C \subseteq K in a metric space (K, \Delta) is an \epsilon-covering if K \subseteq \bigcup_{i\in C} B_\epsilon(i). The \epsilon-covering number of \Delta, denoted N^c_\epsilon(\Delta), is the minimum cardinality of any \epsilon-covering of K.
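On a small finite space, both numbers can be computed by brute force straight from Definitions 1 and 2; the sketch below (our own illustration, not from the paper) uses k = 9 evenly spaced points on [0, 1], with balls taken over the point set itself:

```python
import itertools
import numpy as np

def covering_number(D, eps):
    """Smallest set C with every point within eps of some center in C."""
    k = len(D)
    for size in range(1, k + 1):
        for C in itertools.combinations(range(k), size):
            if all(min(D[i][c] for c in C) <= eps for i in range(k)):
                return size

def packing_number(D, eps):
    """Largest set P whose eps-balls are pairwise disjoint over the space."""
    k = len(D)
    def disjoint(i, j):
        # B_eps(i) and B_eps(j) share no point of the (finite) space.
        return not any(D[i][m] <= eps and D[j][m] <= eps for m in range(k))
    for size in range(k, 0, -1):
        for P in itertools.combinations(range(k), size):
            if all(disjoint(i, j) for i, j in itertools.combinations(P, 2)):
                return size
    return 0

x = np.linspace(0.0, 1.0, 9)                 # 9 points, spacing 1/8
D = np.abs(x[:, None] - x[None, :])
print(packing_number(D, 0.125), covering_number(D, 0.125))   # 3 3
```

The exhaustive search is exponential in k, of course; it is only meant to make the definitions concrete on tiny examples.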
Tree metrics and HSTs. We recall the notion of a tree metric, and in particular, a metric induced
by an Hierarchically well-Separated (HST) Tree; see [7] for more details. Any weighted tree defines
a metric over the vertices, by considering the shortest path between each two nodes. An HST tree
(2-HST tree, to be precise) is a rooted weighted tree such that: 1) the edge weights from any node to each of its children are the same, and 2) the edge weights along any path from the root to a leaf decrease by a factor 2 per edge. We will also assume that all leaves are of the same depth in the tree (this does not imply that the tree is complete).
Given a tree T we let depth(T ) denote its height, which is the maximal length of a path from any leaf
to the root. Let level(v) be the level of a node v ? T , where the level of the leaves is 0 and the level of
the root is depth(T ). Given nodes u, v ? T , let LCA(u, v) be their least common ancestor node in T .
The metric which we next define is equivalent (up to a constant factor) to the standard tree metric induced over the leaves by an HST. By a slight abuse of terminology, we will call it an HST metric:
Definition 3 (HST metric). Let K be a finite set and let T be a tree whose leaves are at the same depth and are indexed by elements of K. Then the HST metric \Delta_T over K induced by the tree T is defined as follows:

    \Delta_T(i, j) = \frac{2^{\mathrm{level}(\mathrm{LCA}(i,j))}}{2^{\mathrm{depth}(T)}} \quad \forall\, i, j \in K,\ i \ne j,

with \Delta_T(i, i) = 0. For an HST metric \Delta_T, observe that the packing and covering numbers are simple to characterize: for all 0 \le h < \mathrm{depth}(T), setting \epsilon = 2^{h - \mathrm{depth}(T)} we have

    N^c_\epsilon(\Delta_T) = N^p_\epsilon(\Delta_T) = |\{v \in T : \mathrm{level}(v) = h\}|.
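For a complete binary tree, Definition 3 has a one-line implementation (our own sketch, not from the paper): index the leaves 0, ..., 2^H - 1 in left-to-right order, and the level of LCA(i, j) is then the position of the highest bit in which i and j differ.

```python
def hst_metric(i, j, depth):
    """HST metric of Definition 3 on the leaves 0 .. 2**depth - 1 of a
    complete binary tree: level(LCA(i, j)) is the highest differing bit."""
    if i == j:
        return 0.0
    return 2 ** (i ^ j).bit_length() / 2 ** depth

# depth 3: sibling leaves are at distance 2/8; leaves in different halves
# of the tree have the root as LCA and are at distance 1.
print(hst_metric(0, 1, 3), hst_metric(0, 7, 3))   # 0.25 1.0
```

Since the distance depends only on the highest differing bit, this is in fact an ultrametric: hst_metric(i, j, H) <= max(hst_metric(i, m, H), hst_metric(m, j, H)) for all m.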
Complexity measures for finite metric spaces. We next define the two notions of complexity that, as we will later see, govern the complexity of MAB with metric movement costs.
Definition 4 (Covering complexity). The covering complexity of a metric space (K, \Delta), denoted C_c(\Delta), is given by

    C_c(\Delta) = \sup_{0<\epsilon<1} \epsilon \cdot N^c_\epsilon(\Delta).

Definition 5 (Packing complexity). The packing complexity of a metric space (K, \Delta), denoted C_p(\Delta), is given by

    C_p(\Delta) = \sup_{0<\epsilon<1} \epsilon \cdot N^p_\epsilon(\Delta).
For an HST metric, the two complexity measures coincide, as its packing and covering numbers are the same. Therefore, for an HST metric \Delta_T we will simply denote the complexity of (K, \Delta_T) by C(T). In fact, it is known that in any metric space N^p_\epsilon(\Delta) \le N^c_\epsilon(\Delta) \le N^p_{\epsilon/2}(\Delta) for all \epsilon > 0. Thus, for a general metric space we obtain that

    C_p(\Delta) \le C_c(\Delta) \le 2C_p(\Delta).    (1)
Complexity measures for infinite metric spaces. For infinite metric spaces, we require the following definition.
Definition 6 (Minkowski dimensions). Let (K, \Delta) be a bounded metric space. The upper Minkowski dimension of (K, \Delta), denoted \overline{D}(\Delta), is defined as

    \overline{D}(\Delta) = \limsup_{\epsilon\to 0} \frac{\log N^c_\epsilon(\Delta)}{\log(1/\epsilon)} = \limsup_{\epsilon\to 0} \frac{\log N^p_\epsilon(\Delta)}{\log(1/\epsilon)}.

Similarly, the lower Minkowski dimension is denoted by \underline{D}(\Delta) and is defined as

    \underline{D}(\Delta) = \liminf_{\epsilon\to 0} \frac{\log N^c_\epsilon(\Delta)}{\log(1/\epsilon)} = \liminf_{\epsilon\to 0} \frac{\log N^p_\epsilon(\Delta)}{\log(1/\epsilon)}.

We refer to [31] for more background on the Minkowski dimensions and related notions in metric space theory.
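As a sanity check of Definition 6 (our own sketch, not from the paper): for the unit square [0, 1]^2 under the max-norm, a grid cover gives N^c_\epsilon \le \lceil 1/(2\epsilon) \rceil^2, so the ratio \log N^c_\epsilon / \log(1/\epsilon) tends to the Minkowski dimension 2, albeit slowly:

```python
import math

# Cover [0, 1]^2 with max-norm balls of radius eps centered on a grid of
# cell side 2*eps: ceil(1/(2*eps))**2 centers suffice, and the count is
# within a constant factor of optimal, so log N / log(1/eps) -> 2.
ratios = []
for eps in [0.1, 0.01, 0.001, 1e-6]:
    N = math.ceil(1 / (2 * eps)) ** 2
    ratios.append(math.log(N) / math.log(1 / eps))
print(ratios)  # increasing toward the Minkowski dimension 2
```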
3 Main Results
We now state the main results of the paper, which give a complete characterization of the expected
regret in the MAB with movement costs problem.
3.1 Finite Metric Spaces
The following are the main results of the paper.
Theorem 7 (Upper Bound). Let (K, ρ) be a finite metric space over |K| = k elements with diameter ≤ 1 and covering complexity Cc = Cc(ρ). There exists an algorithm that, for any sequence of loss functions ℓ_1, ..., ℓ_T, guarantees
    RegretMC(ℓ_{1:T}, ρ) = Õ( max{ Cc^{1/3} T^{2/3}, √(kT) } ).
Theorem 8 (Lower Bound). Let (K, ρ) be a finite metric space over |K| = k elements with diameter ≤ 1 and packing complexity Cp = Cp(ρ). For any algorithm there exists a sequence ℓ_1, ..., ℓ_T of loss functions such that
    RegretMC(ℓ_{1:T}, ρ) = Ω̃( max{ Cp^{1/3} T^{2/3}, √(kT) } ).
For the detailed proofs, see the full version of the paper [25]. Recalling Eq. (1), we see that the regret
bounds obtained in Theorems 7 and 8 are matching up to logarithmic factors. Notice that the tightness is achieved per instance; namely, for any given metric we are able to fully characterize the regret's rate of growth as a function of the intrinsic properties of the metric. (In particular, this is substantially stronger than demonstrating a specific metric for which the upper bound cannot be improved.) Note that for the lower bound statement in Theorem 8 we require that the diameter of K is bounded away from zero, where for simplicity we assume a constant bound of 1. Such an assumption is necessary to avoid degenerate metrics. Indeed, when the diameter is very small, the problem reduces to the standard MAB setting without any additional costs and we obtain a regret rate of Θ(√(kT)).
Notice how the above results extend known instances of the problem from previous work: for uniform movement costs (i.e., unit switching costs) over K = {1, ..., k} we have Cc = Θ(k), so that the obtained bound is Θ̃(max{k^{1/3} T^{2/3}, √(kT)}), which recovers the results in [2, 12]; and for a 2-HST binary balanced tree with k leaves, we have Cc = Θ(1) and the resulting bound is Θ̃(max{T^{2/3}, √(kT)}), which is identical to the bound proved in [24].
The 2-HST regret bound in [24] was primarily used to obtain regret bounds for the action space
K = [0, 1]. In the next section we show how this technique is extended to infinite metric spaces to obtain regret bounds that depend on the dimensionality of the action space.
3.2 Infinite Metric Spaces
When (K, ρ) is an infinite metric space, without additional constraints on the loss functions, the problem becomes ill-posed with a linear regret rate, even without movement costs. Therefore, one has to make additional assumptions on the loss functions in order to achieve sublinear regret. One natural assumption, which is common in previous work, is to assume that the loss functions ℓ_1, ..., ℓ_T are all 1-Lipschitz with respect to the metric ρ. Under this assumption, we have the following result.
Theorem 9. Let (K, ρ) be a metric space with diameter ≤ 1 and upper Minkowski dimension d = D̄(ρ), such that d ≥ 1. There exists a strategy that, for any sequence of loss functions ℓ_1, ..., ℓ_T which are all 1-Lipschitz with respect to ρ, guarantees
    RegretMC(ℓ_{1:T}, ρ) = Õ( T^{(d+1)/(d+2)} ).
We refer to the full version of the paper [25] for a proof of the theorem. Again, we observe that the above result extends the case of K = [0, 1], where d = 1. Indeed, for Lipschitz functions over the interval, a tight regret bound of Θ̃(T^{2/3}) was achieved in [24], which is exactly the bound we obtain above.
We mention that a lower bound of Ω̃(T^{(d+1)/(d+2)}) is known for MAB in metric spaces with Lipschitz cost functions, even without movement costs, where d = D(ρ) is the lower Minkowski dimension.
Theorem 10 (Bubeck et al. [10]). Let (K, ρ) be a metric space with diameter ≤ 1 and lower Minkowski dimension d = D(ρ), such that d ≥ 1. Then for any learning algorithm, there exists a sequence of loss functions ℓ_1, ..., ℓ_T, which are all 1-Lipschitz with respect to ρ, such that the regret (without movement costs) is Ω̃( T^{(d+1)/(d+2)} ).
In many natural metric spaces in which the upper and lower Minkowski dimensions coincide (e.g.,
normed spaces), the bound of Theorem 9 is tight up to logarithmic factors in T. In particular, and
quite surprisingly, we see that the movement costs do not add to the regret of the problem!
It is important to note that Theorem 9 holds only for metric spaces whose (upper) Minkowski dimension is at least 1. Indeed, finite metric spaces are of Minkowski dimension zero, and as we demonstrated in Section 3.1 above, a O(√T) regret bound is not achievable. Finite metric spaces are associated with a complexity measure which is very different from the Minkowski dimension (i.e., the covering/packing complexity). In other words, we exhibit a phase transition between dimension d = 0 and d ≥ 1 in the rate of growth of the regret induced by the metric.
4 Algorithms
In this section we turn to prove Theorem 7. Our strategy is much inspired by the approach in [24],
and we employ a two-step approach: First, we consider the case that the metric is a HST metric; we
then turn to deal with general metrics, and show how to upper-bound any metric with a HST metric.
4.1 Tree Metrics: The Slowly-Moving Bandit Algorithm
In this section we analyze the simplest case of the problem, in which the metric ρ = ρ_T is induced
by a HST tree T (whose leaves are associated with actions in K). In this case, our main tool is the
Slowly-Moving Bandit (SMB) algorithm [24]: we demonstrate how it can be applied to general tree
metrics, and analyze its performance in terms of intrinsic properties of the metric.
We begin by reviewing the SMB algorithm. In order to present the algorithm we require a few additional notations. The algorithm receives as input a tree structure over the set of actions K, and its operation depends on the tree structure. We fix a HST tree T and let H = depth(T). For any level 0 ≤ h ≤ H and action i ∈ K, let A_h(i) be the set of leaves of T that share a common ancestor with i at level h (recall that level h = 0 is the bottom-most level, corresponding to the singletons). In terms of the tree metric we have that A_h(i) = { j : ρ_T(i, j) ≤ 2^{−H+h} }.
The SMB algorithm is presented in Algorithm 1. The algorithm is based on the multiplicative update method, in the spirit of Exp3 algorithms [4]. Similarly to Exp3, the algorithm computes at each round t an estimator ℓ̃_t of the loss vector ℓ_t using the single loss value ℓ_t(i_t) observed. In addition to being an (almost) unbiased estimate of the true loss vector, the estimator ℓ̃_t used by SMB has the additional property of inducing slowly-changing sampling distributions p_t: this is done by choosing at random a level h_t of the tree to be rebalanced (in terms of the weights maintained by the algorithm); as a result, the marginal probabilities p_{t+1}(A_{h_t}(i)) are not changed at round t.
In turn, and in contrast with Exp3, the algorithm's choice of action at round t+1 is not purely sampled from p_{t+1}, but rather conditioned on the last chosen level h_t. This is informally justified by the fact that p_t and p_{t+1} agree on the marginal distribution of A_{h_t}(i_t), hence we can think of the level drawn at round t as if it were drawn subject to p_{t+1}(A_{h_t}) = p_t(A_{h_t}).
Input: A tree T with a finite set of leaves K, η > 0.
Initialize: H = depth(T), A_h(i) = B_{2^{−H+h}}(i) for all i ∈ K, 0 ≤ h ≤ H.
Initialize p_1 = unif(K), h_0 = H and i_0 ~ p_1.
For t = 1, ..., T:
  (1) Choose action i_t ~ p_t( · | A_{h_{t−1}}(i_{t−1})), observe loss ℓ_t(i_t);
  (2) Choose σ_{t,0}, ..., σ_{t,H−1} ∈ {±1} uniformly at random;
      let h_t = min{0 ≤ h ≤ H : σ_{t,h} < 0}, where σ_{t,H} = −1;
  (3) Compute vectors ℓ̃_{t,0}, ..., ℓ̃_{t,H−1} recursively via
        ℓ̃_{t,0}(i) = 1{i_t = i} ℓ_t(i_t) / p_t(i),
      and for all h ≥ 1:
        ℓ̃_{t,h}(i) = −(1/η) ln( Σ_{j ∈ A_h(i)} [ p_t(j) / p_t(A_h(i)) ] e^{−η(1+σ_{t,h−1}) ℓ̃_{t,h−1}(j)} );
  (4) Define E_t = {i : p_t(A_h(i)) < 2^h η for some 0 ≤ h < H} and set:
        ℓ̃_t = 0                                        if i_t ∈ E_t;
        ℓ̃_t = ℓ̃_{t,0} + Σ_{h=0}^{H−1} σ_{t,h} ℓ̃_{t,h}    otherwise;
  (5) Update:
        p_{t+1}(i) = p_t(i) e^{−η ℓ̃_t(i)} / Σ_{j=1}^{k} p_t(j) e^{−η ℓ̃_t(j)}    for all i ∈ K.

Algorithm 1: The SMB algorithm.
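To see the five steps in action, here is an illustrative Python implementation of SMB for a balanced binary tree over k = 2^H arms, where leaf j belongs to A_h(i) exactly when i and j agree on their top H − h bits. It follows steps (1)–(5) as reconstructed above; the loss model, horizon and step size η are arbitrary demo choices, and no claim is made that this matches the authors' reference implementation.

```python
import math
import random

def smb(losses, H, eta, rng):
    """Illustrative SMB run on a balanced binary 2-HST with k = 2**H leaves.
    losses[t][i] is the loss of arm i at round t; returns (actions, final p)."""
    k = 2 ** H
    group = lambda i, h: i >> h  # index of the level-h subtree containing leaf i
    p = [1.0 / k] * k
    h_prev, i_prev = H, rng.randrange(k)
    actions = []
    for loss in losses:
        # (1) sample i_t from p_t conditioned on the level-h_prev subtree of i_prev
        members = [j for j in range(k) if group(j, h_prev) == group(i_prev, h_prev)]
        z = sum(p[j] for j in members)
        r, i_t = rng.random() * z, members[-1]
        for j in members:
            r -= p[j]
            if r <= 0:
                i_t = j
                break
        # (2) random signs; h_t is the first level with a negative sign
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(H)] + [-1.0]
        h_t = next(h for h in range(H + 1) if sigma[h] < 0)
        # (3) recursive loss estimators, one vector per level 0..H-1
        est = [[0.0] * k for _ in range(H)]
        est[0][i_t] = loss[i_t] / p[i_t]
        for h in range(1, H):
            for i in range(k):
                mem = [j for j in range(k) if group(j, h) == group(i, h)]
                mass = sum(p[j] for j in mem)
                s = sum(p[j] / mass
                        * math.exp(-eta * (1 + sigma[h - 1]) * est[h - 1][j])
                        for j in mem)
                est[h][i] = -math.log(s) / eta
        # (4) zero the estimate on the bad event E_t, otherwise combine levels
        bad = any(sum(p[j] for j in range(k)
                      if group(j, h) == group(i_t, h)) < (2 ** h) * eta
                  for h in range(H))
        if bad:
            ell = [0.0] * k
        else:
            ell = [est[0][i] + sum(sigma[h] * est[h][i] for h in range(H))
                   for i in range(k)]
        # (5) multiplicative-weights update
        w = [p[i] * math.exp(-eta * ell[i]) for i in range(k)]
        total = sum(w)
        p = [wi / total for wi in w]
        h_prev, i_prev = h_t, i_t
        actions.append(i_t)
    return actions, p

rng = random.Random(0)
T, H = 100, 3
losses = [[rng.random() for _ in range(2 ** H)] for _ in range(T)]
acts, p = smb(losses, H, eta=0.02, rng=rng)
assert abs(sum(p) - 1.0) < 1e-9 and all(q > 0.0 for q in p)
```

Because the chosen action is drawn inside the previous round's rebalanced subtree, consecutive actions tend to stay metrically close, which is what keeps the movement cost small.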
A key observation is that by directly applying SMB to the metric ρ_T, we can achieve the following regret bound:
Theorem 11. Let (K, ρ_T) be a metric space defined by a 2-HST T with depth(T) = H and complexity C(T) = C. Using the SMB algorithm we can achieve the following regret bound:
    RegretMC(ℓ_{1:T}, ρ_T) = O( H √(2^H T C log C) + H 2^{−H} T ).    (2)
To show Theorem 11, we adapt the analysis of [24] (that applies only to complete binary HSTs) to
handle more general HSTs. We defer this part of our analysis to the full version of the paper [25],
since it follows from a technical modification of the original proof.
For a tree that is either too deep or too shallow, Eq. (2) may not necessarily lead to a sublinear regret bound, let alone an optimal one. The main idea behind achieving an optimal regret bound for a general tree is to modify it until one of two things happens: either we have optimized the depth so that the two terms on the right-hand side of Eq. (2) are of the same order, in which case we will show that one can achieve a regret rate of order O(C(T)^{1/3} T^{2/3}); or, if we fail to do that, we show that the first term is the dominant one, and it will be of order O(√(kT)).
For trees that are in some sense "well behaved" we have the following corollary of Theorem 11.
Corollary 12. Let (K, ρ_T) be a metric space defined by a tree T over |K| = k leaves with depth(T) = H and complexity C(T) = C. Assume that T satisfies the following:
(1) 2^{−H} H T ≤ √(2^H H C T);
(2) One of the following is true:
    (a) 2^H C ≥ k;
    (b) 2^{−(H−1)} (H−1) T ≥ √(2^{H−1} (H−1) C T).
Then, the SMB algorithm can be used to attain RegretMC(ℓ_{1:T}, ρ_T) = Õ( max{ C^{1/3} T^{2/3}, √(kT) } ).
The following establishes Theorem 7 for the special case of tree metrics.
Lemma 13. For any tree T and time horizon T, there exists a tree T′ (over the same set K of k leaves) that satisfies the conditions of Corollary 12, such that ρ_{T′} ≥ ρ_T and C(T′) = C(T). Furthermore, T′ can be constructed efficiently from T (i.e., in time polynomial in |K| and T). Hence, applying SMB to the metric space (K, ρ_{T′}) leads to RegretMC(ℓ_{1:T}, ρ_T) = Õ( max{ C(T)^{1/3} T^{2/3}, √(kT) } ).
We refer to [25] for the proofs of both results.
4.2 General Finite Metrics
Finally, we obtain the general finite case as a corollary of the following.
Lemma 14. Let (K, ρ) be a finite metric space with |K| = k. There exists a tree metric ρ_T over K such that 4ρ_T dominates ρ (i.e., such that 4ρ_T(i, j) ≥ ρ(i, j) for all i, j ∈ K), for which C(T) = O(Cc(ρ) log k). Furthermore, T can be constructed efficiently.
Proof. Let H be such that the minimal distance in ρ is larger than 2^{−H}. For each r = 2^{−1}, 2^{−2}, ..., 2^{−H}, we let 𝓑_r = {B_r(i_{1,r}), ..., B_r(i_{m_r,r})} be a covering of K of size N^c_r(ρ) log k using balls of radius r. Note that finding a minimal set of balls of radius r that covers K is exactly the set cover problem. Hence, we can efficiently approximate it (to within an O(log k) factor) and construct the sets 𝓑_r.
We now construct a tree graph whose nodes are associated with the cover balls: the leaves correspond to singleton balls, and hence correspond to the action space. For each leaf i we find an action a_1(i) ∈ K such that i ∈ B_{2^{−H+1}}(a_1(i)) ∈ 𝓑_{2^{−H+1}}. If there is more than one, we arbitrarily choose one, and we connect an edge between i and B_{2^{−H+1}}(a_1(i)). We continue in this manner inductively to define a_r(i) for every i and every r > 1: given a_{r−1}(i) we find an action a_r(i) such that a_{r−1}(i) ∈ B_{2^{−H+r}}(a_r(i)) ∈ 𝓑_{2^{−H+r}}, and we connect an edge between B_{2^{−H+r−1}}(a_{r−1}(i)) and B_{2^{−H+r}}(a_r(i)).
We now claim that the metric induced by the tree graph dominates, up to a factor of 4, the original metric. Let i, j ∈ K and let r be such that ρ_T(i, j) = 2^{−H+r}; then by construction there are i, a_1(i), a_2(i), ..., a_r(i) and j, a_1(j), a_2(j), ..., a_r(j) such that a_r(i) = a_r(j), and for which it holds that ρ(a_s(i), a_{s−1}(i)) ≤ 2^{−H+s} and similarly ρ(a_s(j), a_{s−1}(j)) ≤ 2^{−H+s} for every s ≤ r. Denoting a_0(i) = i and a_0(j) = j, we have that
    ρ(i, j) ≤ Σ_{s=1}^{r} ρ(a_{s−1}(i), a_s(i)) + Σ_{s=1}^{r} ρ(a_{s−1}(j), a_s(j)) ≤ 2 Σ_{s=1}^{r} 2^{−H+s} ≤ 2 · 2^{−H+r+1} = 4 · 2^{−H+r} = 4 ρ_T(i, j).
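The construction in the proof of Lemma 14 can be prototyped directly. The sketch below is ours: it substitutes a simple greedy cover for the approximate set-cover step, uses points on a line (diameter at most 1, as in the paper) as the metric, builds the ancestors a_r(·) at dyadic radii, and then verifies the claimed 4-approximation ρ(i, j) ≤ 4ρ_T(i, j) on all pairs.

```python
import math
from itertools import combinations

def greedy_cover(points, dist, radius):
    """Greedy set of centers (taken from the points) whose balls cover all."""
    centers = []
    for p in points:
        if not any(dist(p, c) <= radius for c in centers):
            centers.append(p)
    return centers

def build_hst(points, dist):
    """Tree metric rho_T with 4*rho_T dominating dist, following Lemma 14.
    Assumes the diameter of the point set is at most 1."""
    dmin = min(dist(a, b) for a, b in combinations(points, 2))
    H = max(1, math.ceil(math.log2(1.0 / dmin)))  # so 2^-H is below min distance
    # ancestor[r][p]: center at radius 2^(r-H) whose ball contains the
    # level-(r-1) ancestor of p (the level-0 ancestor of p is p itself).
    ancestor = [{p: p for p in points}]
    for r in range(1, H + 1):
        radius = 2.0 ** (r - H)
        centers = greedy_cover(points, dist, radius)
        prev = ancestor[r - 1]
        ancestor.append({p: next(c for c in centers
                                 if dist(prev[p], c) <= radius)
                         for p in points})
    def tree_dist(i, j):
        if i == j:
            return 0.0
        # first level at which the ancestor chains of i and j merge
        r = next(r for r in range(H + 1) if ancestor[r][i] == ancestor[r][j])
        return 2.0 ** (r - H)
    return tree_dist

pts = [0.0, 0.05, 0.1, 0.3, 0.35, 0.6, 0.9]
d = lambda x, y: abs(x - y)
rho_T = build_hst(pts, d)
assert all(d(i, j) <= 4.0 * rho_T(i, j) + 1e-12
           for i, j in combinations(pts, 2))
```

Once two ancestor chains merge they stay merged, so the first merge level is well defined, and the telescoping bound from the proof gives exactly the 4-approximation checked by the final assertion.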
4.3 Infinite Metric Spaces
Finally, we address infinite spaces by discretizing the space K and reducing to the finite case. Recall
that in this case we also assume that the loss functions are Lipschitz.
Proof of Theorem 9. Given the definition of the covering dimension d = D̄(ρ) ≥ 1, it is straightforward that for some constant C > 0 (that might depend on the metric ρ) it holds that N^c_r(ρ) ≤ C r^{−d} for all r > 0. Fix some ε > 0, and take a minimal 2ε-covering K′ of K of size |K′| ≤ C(2ε)^{−d} ≤ C ε^{−d}. Observe that by restricting the algorithm to pick actions from K′, we might lose at most O(εT) in the regret. Also, since K′ is minimal, the distance between any two elements in K′ is at least ε, thus the covering complexity of the space satisfies
    Cc(ρ) = sup_{r ≥ ε} r · N^c_r(ρ) ≤ C sup_{r ≥ ε} r^{−d+1} ≤ C ε^{−d+1},
as we assume that d ≥ 1. Hence, by Theorem 7 and the Lipschitz assumption, there exists an algorithm for which
    RegretMC(ℓ_{1:T}, ρ) = Õ( max{ ε^{−(d−1)/3} T^{2/3}, ε^{−d/2} T^{1/2}, εT } ).
A simple computation reveals that ε = Θ(T^{−1/(d+2)}) optimizes the above bound, and leads to Õ(T^{(d+1)/(d+2)}) movement regret.
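The final balancing step can be verified symbolically: with ε = T^{−1/(d+2)}, each of the three terms ε^{−(d−1)/3} T^{2/3}, ε^{−d/2} T^{1/2} and εT carries the T-exponent (d+1)/(d+2). A small sanity check of ours using exact rational arithmetic:

```python
from fractions import Fraction as F

def t_exponents(d):
    """T-exponents of the three regret terms when eps = T^(-1/(d+2))."""
    eps_exp = F(-1, d + 2)                 # eps = T^eps_exp
    return (
        -F(d - 1, 3) * eps_exp + F(2, 3),  # eps^{-(d-1)/3} * T^{2/3}
        -F(d, 2) * eps_exp + F(1, 2),      # eps^{-d/2}     * T^{1/2}
        eps_exp + 1,                       # eps * T
    )

for d in (1, 2, 3, 10):
    target = F(d + 1, d + 2)
    assert all(e == target for e in t_exponents(d)), (d, t_exponents(d))
```

All three exponents agree exactly, which is why this choice of ε is optimal up to constants.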
Acknowledgements
RL is supported by funds from the Eric and Wendy Schmidt Foundation for strategic innovations. YM is
supported in part by a grant from the Israel Science Foundation, a grant from the United States-Israel
Binational Science Foundation (BSF), and the Israeli Centers of Research Excellence (I-CORE)
program (Center No. 4/11).
References
[1] R. Agrawal, M. V. Hegde, and D. Teneketzis. Asymptotically efficient adaptive allocation rules for the multiarmed bandit problem with switching costs. IEEE Transactions on Automatic Control, 33(10):899–906, 1988.
[2] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1503–1510, 2012.
[3] M. Asawa and D. Teneketzis. Multi-armed bandits with switching penalties. IEEE Transactions on Automatic Control, 41(3):328–348, 1996.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[5] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In Proceedings of the 20th Annual Conference on Learning Theory, pages 454–468, 2007.
[6] J. S. Banks and R. K. Sundaram. Switching costs and the Gittins index. Econometrica, 62:687–694, 1994.
[7] Y. Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, FOCS '96, Burlington, Vermont, USA, 14-16 October, 1996, pages 184–193, 1996.
[8] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[9] A. Borodin, N. Linial, and M. E. Saks. An optimal on-line algorithm for metrical task system. Journal of the ACM (JACM), 39(4):745–763, 1992.
[10] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1587–1627, 2011.
[11] E. Cope. Regret and convergence bounds for a class of continuum-armed bandit problems. IEEE Transactions on Automatic Control, 54(6):1243–1253, 2009.
[12] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 459–467. ACM, 2014.
[13] E. Even-Dar, S. M. Kakade, and Y. Mansour. Online Markov decision processes. Math. Oper. Res., 34(3):726–736, 2009.
[14] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, 2004.
[15] M. Feldman, T. Koren, R. Livni, Y. Mansour, and A. Zohar. Online pricing with strategic and patient buyers. In Annual Conference on Neural Information Processing Systems, 2016.
[16] S. Geulen, B. Vöcking, and M. Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132–143, 2010.
[17] J. Gittins, K. Glazebrook, and R. Weber. Multi-Armed Bandit Allocation Indices, 2nd Edition. John Wiley, 2011.
[18] S. Guha and K. Munagala. Multi-armed bandits with metric switching costs. In International Colloquium on Automata, Languages, and Programming, pages 496–507. Springer, 2009.
[19] A. György and G. Neu. Near-optimal rates for limited-delay universal lossy source coding. IEEE Transactions on Information Theory, 60(5):2823–2834, 2014.
[20] T. Jun. A survey on the bandit problem with switching costs. De Economist, 152(4):513–541, 2004.
[21] R. Kleinberg and A. Slivkins. Sharp dichotomies for regret minimization in metric spaces. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 827–846. Society for Industrial and Applied Mathematics, 2010.
[22] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 681–690. ACM, 2008.
[23] R. D. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In Advances in Neural Information Processing Systems, pages 697–704, 2004.
[24] T. Koren, R. Livni, and Y. Mansour. Bandits with movement costs and adaptive pricing. In COLT, 2017.
[25] T. Koren, R. Livni, and Y. Mansour. Multi-armed bandits with metric movement costs. arXiv preprint arXiv:1710.08997, 2017.
[26] S. Magureanu, R. Combes, and A. Proutiere. Lipschitz bandits: Regret lower bound and optimal algorithms. In COLT, pages 975–999, 2014.
[27] G. Neu, A. György, C. Szepesvári, and A. Antos. Online Markov decision processes under bandit feedback. IEEE Trans. Automat. Contr., 59(3):676–691, 2014.
[28] R. Ortner. Online regret bounds for Markov decision processes with deterministic transitions. Theor. Comput. Sci., 411(29-30):2684–2695, 2010.
[29] A. Slivkins. Multi-armed bandits on implicit metric spaces. In Advances in Neural Information Processing Systems, pages 1602–1610, 2011.
[30] A. Slivkins, F. Radlinski, and S. Gollapudi. Ranked bandits in metric spaces: learning diverse rankings over large document collections. Journal of Machine Learning Research, 14(Feb):399–436, 2013.
[31] T. Tao. 245C, Notes 5: Hausdorff dimension. http://terrytao.wordpress.com/2009/05/19/245c-notes-5-hausdorff-dimension-optional/, 2009.
[32] J. Yu and S. Mannor. Unimodal bandits. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[33] J. Y. Yu, S. Mannor, and N. Shimkin. Markov decision processes with arbitrary reward processes. Math. Oper. Res., 34(3):737–757, Aug. 2009. ISSN 0364-765X.
Learning A Structured Optimal Bipartite Graph
for Co-Clustering
Feiping Nie¹, Xiaoqian Wang², Cheng Deng³, Heng Huang²∗
¹School of Computer Science, Center for OPTIMAL, Northwestern Polytechnical University, China
²Department of Electrical and Computer Engineering, University of Pittsburgh, USA
³School of Electronic Engineering, Xidian University, China
[email protected],[email protected]
[email protected],[email protected]
Abstract
Co-clustering methods have been widely applied to document clustering and gene
expression analysis. These methods make use of the duality between features and
samples such that the co-occurring structure of sample and feature clusters can be
extracted. In graph based co-clustering methods, a bipartite graph is constructed
to depict the relation between features and samples. Most existing co-clustering
methods conduct clustering on the graph achieved from the original data matrix,
which doesn?t have explicit cluster structure, thus they require a post-processing
step to obtain the clustering results. In this paper, we propose a novel co-clustering
method to learn a bipartite graph with exactly k connected components, where k is
the number of clusters. The new bipartite graph learned in our model approximates
the original graph but maintains an explicit cluster structure, from which we can
immediately get the clustering results without post-processing. Extensive empirical
results are presented to verify the effectiveness and robustness of our model.
1 Introduction
Clustering has long been a fundamental topic in unsupervised learning. The goal of clustering is to
partition data into different groups. Clustering methods have been successfully applied to various
areas, such as document clustering [3, 17], image segmentation [18, 7, 8] and bioinformatics [16, 14].
In clustering problems, the input data is usually formatted as a matrix, where one dimension represents
samples and the other denotes features. Each sample can be seen as a data point characterized by
a vector in the feature space. Alternatively, each feature can be regarded as a vector spanning in
the sample space. Traditional clustering methods propose to cluster samples according to their
distribution on features, or conversely, cluster features in terms of their distribution on samples.
In several types of data, such as document data and gene expression data, duality exists between
samples and features. For example, in document data, we can reasonably assume that documents
can be clustered based on their relations with different word clusters, while word clusters are formed
according to their associations with distinct document clusters. However, in the one-sided clustering
mechanism, the duality between samples and features is not taken into consideration. To make full
use of the duality information, co-clustering methods (also known as bi-clustering methods) are
proposed. The co-clustering mechanism takes advantage of the co-occurring cluster structure among
features and samples to strengthen the clustering performance and gain better interpretation of the
pragmatic meaning of the clusters.
?
This work was partially supported by U.S. NSF-IIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628,
NSF-IIS 1619308, NSF-IIS 1633753, NIH AG049371.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Several co-clustering methods have been put forward to depict the relations between samples and
features. In the graph based methods, the co-occurring structure between samples and features
is usually treated as a bipartite graph, where the weights of edges indicate the relations between
sample-feature pairs. In the left part of Fig. 1 we show an illustration of such bipartite graph, where
the blue nodes on the left represent features while red nodes on the right show samples. The affinity
between the features and samples is denoted by the weight of the corresponding edge. For example,
B_ij denotes the affinity between the i-th feature and the j-th sample. In [4], the authors propose to
minimize the cut between samples and features, which is equivalent to conducting spectral clustering
on the bipartite graph. However, in this method, since the original graph doesn't display an explicit cluster structure, it still calls for a post-processing step such as K-means clustering to obtain the final clustering indicators, which may not be optimal.
To address this problem, in this paper, we propose a novel graph based co-clustering model to learn a
bipartite graph with exactly k connected components, where k is the number of clusters. The new
bipartite graph learned in our model approximates the original graph but maintains an explicit cluster
structure, from which we can directly get the clustering results without post-processing steps. To
achieve such an ideal structure of the new bipartite graph, we impose constraints on the rank of
its Laplacian or normalized Laplacian matrix and derive algorithms to optimize the objective. We
conduct several experiments to evaluate the effectiveness and robustness of our model. On both
synthetic and benchmark datasets we gain equivalent or even better clustering results than other
related methods.
Notations: Throughout the paper, all the matrices are written as uppercase. For matrix M , the ij-th
element of M is denoted by mij . The trace of matrix M is denoted by T r(M ). The `2 -norm of
vector v is denoted by kvk2 , the Frobenius norm of matrix M is denoted by kM kF .
2 Bipartite Spectral Graph Partitioning Revisited
The classic Bipartite Spectral Graph Partitioning (BSGP) method [4] is very effective for co-clustering.
In order to simultaneously partition the rows and columns of a data matrix B ∈ R^{n1×n2}, we first
view B as the weight matrix of a bipartite graph, where the left-side nodes are the n1 rows of B, the
right-side nodes are the n2 columns of B, and the weight to connect the i-th left-side node and the
j-th right-side node is bij (see Fig.1). The procedure of BSGP is as follows:
1) Calculate Ã = D_u^{−1/2} B D_v^{−1/2}, where the diagonal matrices D_u and D_v are defined in Eq. (6).
2) Calculate U and V, which are the leading k left and right singular vectors of Ã, respectively.
3) Run K-means on the rows of F defined in Eq. (6) to obtain the final clustering results.
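The three steps above can be run end to end on a toy matrix. The NumPy sketch below is our own illustration: B is a clean block-diagonal document-by-word matrix, and for this easy case reading off the embedding rows of F replaces the K-means step, since co-occurring rows and columns land on identical rows of F.

```python
import numpy as np

B = np.array([[1., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])          # 4 docs x 4 words, two blocks

Du = np.diag(B.sum(axis=1))               # row degrees
Dv = np.diag(B.sum(axis=0))               # column degrees
A_tilde = (np.diag(1 / np.sqrt(np.diag(Du)))
           @ B
           @ np.diag(1 / np.sqrt(np.diag(Dv))))

U, s, Vt = np.linalg.svd(A_tilde)
k = 2
F = np.vstack([U[:, :k], Vt[:k].T])       # stack row- and column-embeddings

# Rows (and columns) of the same block get identical embeddings, while
# different blocks get distinct ones -- K-means on F would separate them.
assert np.allclose(F[0], F[1]) and np.allclose(F[2], F[3])
assert not np.allclose(F[0], F[2])
```

On noisy real data the block structure is only approximate, which is why the K-means step of item 3 is needed in general.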
The bipartite graph can be viewed as an undirected weighted graph G = {V, A} with n = n1 + n2
nodes, where V is the node set and the affinity matrix A ∈ R^{n×n} is
    A = [ 0    B
          B^T  0 ]                                                      (1)
In the following, we will show that the BSGP method essentially performs spectral clustering with
normalized cut on the graph G.
Suppose the graph G is partitioned into k components V = {V_1, V_2, ..., V_k}. According to
spectral clustering, the normalized cut on the graph G = {V, A} is defined as
    Ncut = Σ_{i=1}^{k} cut(V_i, V\V_i) / assoc(V_i, V)                  (2)
where cut(V_i, V\V_i) = Σ_{p∈V_i, q∈V\V_i} a_pq and assoc(V_i, V) = Σ_{p∈V_i, q∈V} a_pq.
Let Y ∈ R^{n×k} be the partition indicator matrix, i.e., y_ij = 1 indicates that the i-th node is
partitioned into the j-th component. Then minimizing the normalized cut defined in Eq. (2) can be
rewritten as the following problem:
    min_Y  Σ_{i=1}^{k} (y_i^T L y_i) / (y_i^T D y_i)                    (3)
Figure 1: Illustration of the structured optimal bipartite graph.
where y_i is the i-th column of Y, L = D - A ∈ R^{n×n} is the Laplacian matrix, and D ∈ R^{n×n}
is the diagonal degree matrix defined as d_ii = Σ_j a_ij.
Let Z = Y(Y^T D Y)^{-1/2}, and denote the identity matrix by I; then problem (3) can be rewritten as
    min_{Z^T D Z = I}  Tr(Z^T L Z)                                      (4)
Further, denote F = D^{1/2} Z = D^{1/2} Y (Y^T D Y)^{-1/2}; then the problem (4) can be rewritten as
    min_{F^T F = I}  Tr(F^T L̃ F)                                        (5)
where L̃ = I - D^{-1/2} A D^{-1/2} is the normalized Laplacian matrix.
We rewrite F and D as the following block matrices:
    F = [U; V],   D = diag(D_u, D_v)                                    (6)
where U ∈ R^{n1×k}, V ∈ R^{n2×k}, D_u ∈ R^{n1×n1}, D_v ∈ R^{n2×n2}.
Then according to the definition of A in Eq. (1), the problem (5) can be further rewritten as
    max_{U^T U + V^T V = I}  Tr(U^T D_u^{-1/2} B D_v^{-1/2} V)          (7)
Note that in addition to the constraint U^T U + V^T V = I, the matrices U and V should be
constrained to take discrete values according to their definitions. This discrete constraint makes the
problem very difficult to solve. To address it, we first remove the discrete constraint to make the
problem (7) solvable with Lemma 1, and then run K-means on U and V to get the discrete solution.
Lemma 1 Suppose M ∈ R^{n1×n2}, X ∈ R^{n1×k}, Y ∈ R^{n2×k}. The optimal solutions to the problem
    max_{X^T X + Y^T Y = I}  Tr(X^T M Y)                                (8)
are X = (√2/2) U_1 and Y = (√2/2) V_1, where U_1, V_1 are the leading k left and right singular
vectors of M, respectively.
Proof: The Lagrangian function of the problem is L(X, Y, Λ) = Tr(X^T M Y) - Tr(Λ(X^T X +
Y^T Y - I)). Setting the derivative of L(X, Y, Λ) w.r.t. X to zero gives M Y = X Λ; setting the
derivative w.r.t. Y to zero gives M^T X = Y Λ. Thus M M^T X = M Y Λ = X Λ^2. Therefore, the
optimal solution X should consist of eigenvectors of M M^T, i.e., left singular vectors of M.
Similarly, the optimal solution Y should consist of right singular vectors of M. Since it is a
maximization problem, the optimal solutions X and Y are the leading k left and right singular
vectors of M, respectively.
According to Lemma 1, if the discrete constraint on U and V is not considered, the optimal solutions
U and V to the problem (7) are the leading k left and right singular vectors of Ã = D_u^{-1/2} B D_v^{-1/2},
respectively.
Since the solutions U and V are not discrete values, we need to run K-means on the rows of F
defined in Eq. (6) to obtain the final clustering results.
3 Learning Structured Optimal Bipartite Graph for Co-Clustering
3.1 Motivation
We can see from the previous section that the given B or A does not have a very clear clustering
structure (i.e., A is not a block diagonal matrix even under a proper permutation) and that U and V
are not discrete values, so we need to run K-means to obtain the final clustering results. However,
K-means is very sensitive to initialization, which makes the clustering performance unstable and
suboptimal.
To address this challenging and fundamental problem, we target to learn a new graph similarity
matrix S ∈ R^{n×n}, parameterized by P ∈ R^{n1×n2}, as
    S = [ 0    P
          P^T  0 ],                                                     (9)
such that the new graph is more suitable for the clustering task. In our strategy, we learn an S that has
exactly k connected components; see Fig. 1. Obviously, such a new graph can be considered the ideal
graph for the clustering task since it provides a clear clustering structure. If S has exactly k connected
components, we can directly obtain the final clustering result based on S, without running K-means
or other discretization procedures as traditional graph based clustering methods have to do.
The learned structured optimal graph similarity matrix S should be as close as possible to the given
graph affinity matrix A, so we propose to solve the following problem:
    min_{P ≥ 0, P1 = 1, S ∈ Ω}  ||S - A||_F^2                           (10)
where Ω is the set of matrices S ∈ R^{n×n} which have exactly k connected components.
According to the special structures of A and S in Eq. (1) and Eq. (9), the problem (10) can be
written as
    min_{P ≥ 0, P1 = 1, S ∈ Ω}  ||P - B||_F^2                           (11)
The problem (11) seems very difficult to solve since the constraint S ∈ Ω is intractable to handle. In
the next subsection, we will propose a novel and efficient algorithm to solve this problem.
3.2 Optimization
If the similarity matrix S is nonnegative, then the Laplacian matrix L_S = D_S - S associated with S
has an important property as follows [13, 12, 11, 2].
Theorem 1 The multiplicity k of the eigenvalue 0 of the Laplacian matrix L_S is equal to the number
of connected components in the graph associated with S.
Theorem 1 indicates that if rank(L_S) = n - k, the constraint S ∈ Ω will hold. Therefore, the
problem (11) can be rewritten as:
    min_{P ≥ 0, P1 = 1, rank(L_S) = n-k}  ||P - B||_F^2                 (12)
Suppose σ_i(L_S) is the i-th smallest eigenvalue of L_S. Note that σ_i(L_S) ≥ 0 because L_S is positive
semi-definite. The problem (12) is equivalent to the following problem for a large enough λ:
    min_{P ≥ 0, P1 = 1}  ||P - B||_F^2 + λ Σ_{i=1}^{k} σ_i(L_S)         (13)
When λ is large enough (note that σ_i(L_S) ≥ 0 for every i), the optimal solution S to the problem
(13) will make the second term Σ_{i=1}^{k} σ_i(L_S) zero, and thus the constraint rank(L_S) = n - k
in the problem (12) will be satisfied.
According to Ky Fan's Theorem [6], we have:
    Σ_{i=1}^{k} σ_i(L_S) = min_{F ∈ R^{n×k}, F^T F = I}  Tr(F^T L_S F)  (14)
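Ky Fan's identity in Eq. (14) can likewise be checked numerically for a random Laplacian (a numpy sketch, not from the paper): the bottom-k eigenvectors attain the minimum, and any other orthonormal F scores at least as high.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random graph Laplacian: L = D - A with symmetric nonnegative A
A = rng.random((6, 6)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(1)) - A
k = 3
w, V = np.linalg.eigh(L)                 # eigenvalues in ascending order
F = V[:, :k]                             # bottom-k eigenvectors
assert np.isclose(np.trace(F.T @ L @ F), w[:k].sum())

# Any other orthonormal F gives a value at least as large
Q = np.linalg.qr(rng.standard_normal((6, k)))[0]
assert np.trace(Q.T @ L @ Q) >= w[:k].sum() - 1e-9
```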
Therefore, the problem (13) is further equivalent to the following problem
    min_{P, F}  ||P - B||_F^2 + λ Tr(F^T L_S F)                         (15)
    s.t.  P ≥ 0,  P1 = 1,  F ∈ R^{n×k},  F^T F = I
The problem (15) is much easier to solve than the rank-constrained problem (12). We can apply the
alternating optimization technique to solve it.
When P is fixed, the problem (15) becomes:
    min_{F ∈ R^{n×k}, F^T F = I}  Tr(F^T L_S F)                         (16)
The optimal solution F is formed by the k eigenvectors of L_S corresponding to the k smallest
eigenvalues.
When F is fixed, the problem (15) becomes
    min_{P ≥ 0, P1 = 1}  ||P - B||_F^2 + λ Tr(F^T L_S F)                (17)
According to the property of the Laplacian matrix, we have the following relationship:
    Tr(F^T L_S F) = (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} ||f_i - f_j||_2^2 s_ij    (18)
where f_i is the i-th row of F.
Thus, according to the structure of S defined in Eq. (9), Eq. (18) can be rewritten as
    Tr(F^T L_S F) = Σ_{i=1}^{n1} Σ_{j=1}^{n2} ||f_i - f_j||_2^2 p_ij    (19)
Based on Eq. (19), the problem (17) can be rewritten as
    min_{P ≥ 0, P1 = 1}  Σ_{i=1}^{n1} Σ_{j=1}^{n2} (p_ij - b_ij)^2 + λ ||f_i - f_j||_2^2 p_ij    (20)
Note that the problem (20) is independent across different i, so we can solve it individually for each i.
Denote v_ij = ||f_i - f_j||_2^2, and denote v_i as the vector with j-th element v_ij (and similarly p_i
and b_i); then for each i, the problem (20) can be written in the vector form
    min_{p_i^T 1 = 1, p_i ≥ 0}  || p_i - (b_i - (λ/2) v_i) ||_2^2       (21)
This problem can be solved by an efficient iterative algorithm [9].
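Problem (21) is a Euclidean projection onto the probability simplex. The solver of [9] is iterative; as a simple, widely used stand-in, the sorting-based projection below (assuming numpy) computes the same minimizer in closed form:

```python
import numpy as np

def project_simplex(c):
    """Euclidean projection of c onto {p : p >= 0, sum(p) = 1}.

    Solves problem (21) with c = b_i - (lambda/2) * v_i; this sorting-based
    projection is a standard alternative to the iterative solver of [9].
    """
    u = np.sort(c)[::-1]                              # sort descending
    css = np.cumsum(u)
    # Largest index rho with u_rho > (cumsum_rho - 1) / rho (1-indexed)
    rho = np.nonzero(u * np.arange(1, len(c) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(c - theta, 0.0)
```

For row i of problem (20), one would call `project_simplex(b_i - 0.5 * lam * v_i)`.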
The detailed algorithm to solve the problem (15) is summarized in Algorithm 1. In the algorithm,
we can update only the m nearest similarities for each data point in P, so the complexity of updating
P and of updating F (which only needs the top k eigenvectors of a very sparse matrix) can be
reduced significantly. Nevertheless, Algorithm 1 needs to conduct an eigen-decomposition of an
n × n (n = n1 + n2) matrix in each iteration, which is time consuming. In the next section, we will
propose another optimization algorithm, which only needs to conduct an SVD of an n1 × n2 matrix
in each iteration, and is thus much more efficient than Algorithm 1.
Algorithm 1 Algorithm to solve the problem (15).
input: B ∈ R^{n1×n2}, cluster number k, a large enough λ.
output: P ∈ R^{n1×n2} and thus S ∈ R^{n×n} defined in Eq. (9) with exactly k connected components.
Initialize F ∈ R^{n×k}, formed by the k eigenvectors of L = D - A corresponding to the k smallest
eigenvalues, where A is defined in Eq. (1).
while not converged do
  1. For each i, update the i-th row of P by solving the problem (21), where the j-th element of v_i
     is v_ij = ||f_i - f_j||_2^2.
  2. Update F, formed by the k eigenvectors of L_S = D_S - S corresponding to the k smallest
     eigenvalues.
end while
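A compact numpy rendering of Algorithm 1 might look as follows. This is a sketch under simplifying assumptions: the row subproblem (21) is solved with a standard simplex projection rather than the iterative solver of [9], a fixed λ and iteration count are used, and no m-nearest-neighbor sparsification is applied.

```python
import numpy as np

def _simplex(c):
    # Euclidean projection of c onto {p : p >= 0, sum(p) = 1}.
    u = np.sort(c)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(c) + 1) > css - 1)[0][-1]
    return np.maximum(c - (css[rho] - 1) / (rho + 1.0), 0.0)

def sobg_alg1(B, k, lam=1.0, n_iter=20):
    """Sketch of Algorithm 1 (unnormalized-Laplacian variant)."""
    n1, n2 = B.shape
    n = n1 + n2
    A = np.zeros((n, n)); A[:n1, n1:] = B; A[n1:, :n1] = B.T
    F = np.linalg.eigh(np.diag(A.sum(1)) - A)[1][:, :k]  # bottom-k eigvecs of L
    P = B.astype(float).copy()
    for _ in range(n_iter):
        # Step 1: row-wise update of P via problem (21).
        V = ((F[:n1, None, :] - F[None, n1:, :]) ** 2).sum(-1)  # v_ij
        for i in range(n1):
            P[i] = _simplex(B[i] - 0.5 * lam * V[i])
        # Step 2: bottom-k eigenvectors of L_S = D_S - S.
        S = np.zeros((n, n)); S[:n1, n1:] = P; S[n1:, :n1] = P.T
        F = np.linalg.eigh(np.diag(S.sum(1)) - S)[1][:, :k]
    return P
```

On a clean two-block matrix, the returned P keeps its mass inside the blocks and each row lies on the simplex, as the constraints require.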
4 Speed Up the Model
If the similarity matrix S is nonnegative, then the normalized Laplacian matrix L̃_S = I - D_S^{-1/2} S D_S^{-1/2}
associated with S also has an important property as follows [11, 2].
Theorem 2 The multiplicity k of the eigenvalue 0 of the normalized Laplacian matrix L̃_S is equal to
the number of connected components in the graph associated with S.
Theorem 2 indicates that if rank(L̃_S) = n - k, the constraint S ∈ Ω will hold. Therefore, the
problem (11) can also be rewritten as
    min_{P ≥ 0, P1 = 1, rank(L̃_S) = n-k}  ||P - B||_F^2                (22)
Similarly, the problem (22) is equivalent to the following problem for a large enough value of λ:
    min_{P, F}  ||P - B||_F^2 + λ Tr(F^T L̃_S F)                        (23)
    s.t.  P ≥ 0,  P1 = 1,  F ∈ R^{n×k},  F^T F = I
Again, we can apply the alternating optimization technique to solve problem (23).
When P is fixed, since L̃_S = I - D_S^{-1/2} S D_S^{-1/2}, the problem (23) becomes
    max_{F ∈ R^{n×k}, F^T F = I}  Tr(F^T D_S^{-1/2} S D_S^{-1/2} F)     (24)
We rewrite F and D_S as the following block matrices:
    F = [U; V],   D_S = diag(D_Su, D_Sv)                                (25)
where U ∈ R^{n1×k}, V ∈ R^{n2×k}, D_Su ∈ R^{n1×n1}, D_Sv ∈ R^{n2×n2}.
Then according to the definition of S in Eq. (9), the problem (24) can be further rewritten as
    max_{U^T U + V^T V = I}  Tr(U^T D_Su^{-1/2} P D_Sv^{-1/2} V)        (26)
According to Lemma 1, the optimal solutions U and V to the problem (26) are the leading k left and
right singular vectors of S̃ = D_Su^{-1/2} P D_Sv^{-1/2}, respectively.
When F is fixed, the problem (23) becomes
    min_P  ||P - B||_F^2 + λ Tr(F^T L̃_S F)                             (27)
    s.t.  P ≥ 0,  P1 = 1
According to the property of the normalized Laplacian matrix, we have the following relationship:
    Tr(F^T L̃_S F) = (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} || f_i/√d_i - f_j/√d_j ||_2^2 s_ij    (28)
Thus, according to the structure of S defined in Eq. (9), and denoting v_ij = || f_i/√d_i - f_j/√d_j ||_2^2,
the problem (27) can be rewritten as
    min_{P ≥ 0, P1 = 1}  Σ_{i=1}^{n1} Σ_{j=1}^{n2} (p_ij - b_ij)^2 + λ v_ij p_ij,
which has the same form as in Eq. (20) and thus can be solved efficiently.
The detailed algorithm to solve the problem (23) is summarized in Algorithm 2. In this algorithm
too, we can update only the m nearest similarities for each data point in P, so the complexity of
updating P and updating F can be reduced significantly.
Note that Algorithm 2 only needs to conduct an SVD of an n1 × n2 matrix in each iteration. In
some cases, min(n1, n2) ≪ n1 + n2, so Algorithm 2 is much more efficient than Algorithm 1.
Therefore, in the next section, we use Algorithm 2 to conduct the experiments.
Algorithm 2 Algorithm to solve the problem (23).
input: B ∈ R^{n1×n2}, cluster number k, a large enough λ.
output: P ∈ R^{n1×n2} and thus S ∈ R^{n×n} defined in Eq. (9) with exactly k connected components.
Initialize F ∈ R^{n×k}, formed by the k eigenvectors of L̃ = I - D^{-1/2} A D^{-1/2} corresponding
to the k smallest eigenvalues, where A is defined in Eq. (1).
while not converged do
  1. For each i, update the i-th row of P by solving the problem (21), where the j-th element of v_i
     is v_ij = || f_i/√d_i - f_j/√d_j ||_2^2.
  2. Update F = [U; V], where U and V are the leading k left and right singular vectors of
     S̃ = D_Su^{-1/2} P D_Sv^{-1/2}, respectively, and D_S = diag(D_Su, D_Sv).
end while
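The F-update in step 2, the part that replaces the n × n eigendecomposition of Algorithm 1, can be sketched as a small numpy helper (the `eps` guard against zero degrees is an implementation detail of ours, not from the paper):

```python
import numpy as np

def update_F(P, k, eps=1e-12):
    """Step 2 of Algorithm 2: F = [U; V] from the leading k singular vectors
    of S_tilde = D_Su^{-1/2} P D_Sv^{-1/2}, with degrees taken from P."""
    dsu = P.sum(axis=1)                    # diagonal of D_Su
    dsv = P.sum(axis=0)                    # diagonal of D_Sv
    S_tilde = P / (np.sqrt(np.outer(dsu, dsv)) + eps)
    U, _, Vt = np.linalg.svd(S_tilde, full_matrices=False)
    return np.vstack([U[:, :k], Vt[:k].T])
```

Since only an n1 × n2 SVD is needed, this step is cheap whenever one side of the bipartite graph is small.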
5 Experimental Results
In this section, we conduct multiple experiments to evaluate our model. We first introduce the
experimental settings used throughout the section and then present evaluation results on both
synthetic and benchmark datasets.
5.1 Experimental Settings
We compared our method (denoted by SOBG) with two related co-clustering methods, including
Bipartite Spectral Graph Partition (BSGP) [4] and Orthogonal Nonnegative Matrix Tri-Factorizations
(ONMTF) [5]. Also, we introduced several one-sided clustering methods to the comparison, which
are K-means clustering, Normalized Cut (NCut) and Nonnegative Matrix Factorization (NMF).
For methods requiring a similarity graph as the input, i.e., NCut and NMF, we adopted the self-tuning
Gaussian method [19] to construct the graph, where the number of neighbors was set to be 5 and the
σ value was self-tuned. In the experiment, there are four methods involving K-means clustering,
which are K-means, NCut, BSGP and ONMTF (the latter three methods need K-means as the
post-processing step to get the clustering results). When running K-means we used 100 random
initializations for all these four methods and recorded the average performance over these 100 runs as
well as the best one with respect to the K-means objective function value.
In our method, to accelerate the algorithmic procedure, we determined the parameter λ in a heuristic
way: we first specified λ with an initial guess; in each iteration we then computed the number of
zero eigenvalues of L̃_S: if it was larger than k, we divided λ by 2; if smaller, we multiplied λ by 2;
otherwise we stopped the iteration.
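This λ-adjustment heuristic amounts to a small bisection-like routine; a sketch (a hypothetical helper of ours, not from the paper) is:

```python
import numpy as np

def adjust_lambda(lam, ls_eigvals, k, tol=1e-8):
    """One step of the heuristic: halve lam if the learned graph has more
    than k connected components (too many zero eigenvalues of the normalized
    L_S), double it if fewer, and signal convergence when exactly k."""
    n_zero = int(np.sum(np.asarray(ls_eigvals) < tol))
    if n_zero > k:
        return lam / 2.0, False
    if n_zero < k:
        return lam * 2.0, False
    return lam, True
```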
The number of clusters was set to be the ground truth. The evaluation of different methods was based
on the percentage of correctly clustered samples, i.e., clustering accuracy.
5.2 Results on Synthetic Data
In this subsection, we first apply our method to the synthetic data as a sanity check. The synthetic
data is constructed as a two-dimensional matrix, where rows and columns come from three clusters
respectively. Row clusters and column clusters maintain mutual dependence, i.e., rows and columns
from the first cluster form a block along the diagonal of the data matrix, and this also holds true
for the second and third cluster. The number of rows for each cluster is 20, 30 and 40 respectively,
while the number of columns is 30, 40 and 50. Each block is generated randomly with elements
i.i.d. sampled from the Gaussian distribution N(0, 1). Also, we add noise to the "non-block" area of
the data matrix, i.e., all entries in the matrix excluding elements in the three clusters. The noise can
be denoted as r · ε, where ε is Gaussian noise i.i.d. sampled from the Gaussian distribution N(0, 1) and r
[Figure 2, panels (a)-(h): for Noise = 0.6, 0.7, 0.8 and 0.9, the original data matrix and the learned
bipartite matrix B; color scale from 0.2 to 0.8.]
Figure 2: Illustration of the data matrix in different settings of noise. Different rows of figures come
from different settings of noise. In each row, figures on the left column are the original data matrices
generated in the experiment, while on the right column display the bipartite matrix B learned in our
model which approximates the original data matrix and maintains the block structure.
Clustering Accuracy (%) on Rows:
Methods  | Noise = 0.6 | Noise = 0.7 | Noise = 0.8 | Noise = 0.9
K-means  | 99.17       | 97.50       | 71.67       | 39.17
NCut     | 99.17       | 95.00       | 46.67       | 38.33
NMF      | 98.33       | 95.00       | 46.67       | 37.50
BSGP     | 100.00      | 93.33       | 62.50       | 40.00
ONMTF    | 99.17       | 97.50       | 71.67       | 39.17
SOBG     | 100.00      | 100.00      | 98.33       | 84.17

Clustering Accuracy (%) on Columns:
Methods  | Noise = 0.6 | Noise = 0.7 | Noise = 0.8 | Noise = 0.9
K-means  | 100.00      | 95.56       | 51.11       | 46.67
NCut     | 100.00      | 91.11       | 60.00       | 38.89
NMF      | 100.00      | 90.00       | 47.78       | 37.78
BSGP     | 100.00      | 93.33       | 63.33       | 46.67
ONMTF    | 100.00      | 95.56       | 51.11       | 46.67
SOBG     | 100.00      | 100.00      | 100.00      | 87.78

Table 1: Clustering accuracy comparison on rows and columns of the synthetic data under different
portions of noise.
is the portion of noise. We set r to be {0.6, 0.7, 0.8, 0.9} respectively so as to evaluate the robustness
of different methods under various levels of disturbance.
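The synthetic matrix described above can be generated as follows (a numpy sketch; function and argument names are our own):

```python
import numpy as np

def make_synthetic(row_sizes=(20, 30, 40), col_sizes=(30, 40, 50), r=0.8, seed=0):
    """Block-diagonal data matrix with N(0, 1) blocks and r * N(0, 1) noise
    outside the blocks, mirroring the synthetic setup in Section 5.2."""
    rng = np.random.default_rng(seed)
    n1, n2 = sum(row_sizes), sum(col_sizes)
    X = r * rng.standard_normal((n1, n2))        # noise in the non-block area
    i = j = 0
    row_labels, col_labels = [], []
    for c, (ri, ci) in enumerate(zip(row_sizes, col_sizes)):
        X[i:i + ri, j:j + ci] = rng.standard_normal((ri, ci))  # signal block
        row_labels += [c] * ri
        col_labels += [c] * ci
        i, j = i + ri, j + ci
    return X, np.array(row_labels), np.array(col_labels)
```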
We apply all comparing methods to the synthetic data and assess their ability to cluster the rows and
columns. One-sided clustering methods are applied to the data twice (once to cluster rows and the
other time to cluster columns) such that clustering accuracy on these two dimensions can be achieved.
Co-clustering methods can obtain clustering results on both dimensions simultaneously in one run.
In Table 1 we summarize the clustering accuracy comparison on both rows and columns under
different settings of noise. In Fig. 2 we display the corresponding original data matrix and the
bipartite matrix B learned in our model. We can notice that when the portion of noise r is relatively
low, i.e., 0.6 and 0.7, the block structure of the original data is clear, then all methods perform fairly
well in clustering both rows and columns. However, as r increases, the block structure in the original
data blurs thus brings obstacles to the clustering task. With high portion of noise, all other methods
seem to be disturbed to a large extent while our method shows apparent robustness. Even when the
portion of noise becomes as high as 0.9, such that the structure of clusters in the original data becomes
hard to distinguish with eyes, our method still excavates a reasonable block arrangement with a
clustering accuracy of over 80%. Also, we can find that co-clustering methods usually outperform
one-sided clustering methods since they utilize the interrelations between rows and columns. The
interpretation of the co-clustering structure strengthens the performance, which conforms to our
theoretical analysis.
Methods        | Reuters21578 | LUNG        | Prostate-MS | prostateCancerPSA410
K-means (Ave)  | 40.86±4.59   | 61.91±6.00  | 46.47±3.26  | 64.15±9.40
K-means (Best) | 32.77        | 71.43       | 45.34       | 62.92
NCut (Ave)     | 26.92±0.93   | 69.67±14.26 | 46.86±1.19  | 55.06±0.00
NCut (Best)    | 29.18        | 79.80       | 47.20       | 55.06
NMF            | 30.91        | 75.86       | 47.83       | 55.06
BSGP (Ave)     | 11.44±0.39   | 64.95±5.06  | 46.27±0.00  | 57.30±0.00
BSGP (Best)    | 11.26        | 70.94       | 46.27       | 57.30
ONMTF (Ave)    | 17.57±1.95   | 61.31±10.34 | 45.46±3.18  | 62.92±0.00
ONMTF (Best)   | 27.90        | 71.43       | 45.34       | 62.92
SOBG           | 43.94        | 78.82       | 62.73       | 69.66

Table 2: Clustering accuracy comparison on four benchmark datasets. For the four methods involving
K-means clustering, i.e., K-means, NCut, BSGP and ONMTF, their average performance (Ave) over
100 repetitions and the best one (Best) w.r.t. the K-means objective function value were both reported.
5.3 Results on Benchmark Data
In this subsection, we use four benchmark datasets for the evaluation. One document dataset and
three gene expression datasets participate in the experiment; their properties are introduced in detail
below.
Reuters21578 dataset is processed and downloaded from http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.
It contains 8293 documents in 65 topics. Each document is depicted by its frequency on 18933 terms.
LUNG dataset [1] provides a source for the study of lung cancer. It has 203 samples in five classes,
among which there are 139 adenocarcinoma (AD), 17 normal lung (NL), 6 small cell lung cancer
(SMCL), 21 squamous cell carcinoma (SQ) as well as 20 pulmonary carcinoid (COID) samples. Each
sample has 3312 genes.
Prostate-MS dataset [15] contains a total of 332 samples from three different classes, which are
69 samples diagnosed as prostate cancer, 190 samples of benign prostate hyperplasia, as well as 63
normal samples showing no evidence of disease. Each sample has 15154 genes.
ProstateCancerPSA410 dataset [10] describes gene information of patients with prostate-specific
antigen (PSA)-recurrent prostate cancer. It includes a total of 89 samples from two classes. Each
sample has 15154 genes.
Before the clustering process, feature scaling was performed on each dataset such that features are on
the same scale of [0, 1]. Also, the l2-norm of each feature was normalized to 1.
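The described preprocessing, min-max scaling each feature to [0, 1] followed by l2-normalizing each feature, can be written as follows (a numpy sketch; the guards for constant or all-zero features are our addition):

```python
import numpy as np

def preprocess(X):
    """Scale each feature (column) to [0, 1], then normalize each feature
    to unit l2 norm, as described for the benchmark experiments."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)     # guard constant features
    X = (X - lo) / span
    norms = np.linalg.norm(X, axis=0)
    return X / np.where(norms > 0.0, norms, 1.0)
```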
Table 2 summarizes the clustering accuracy comparison on these benchmark datasets. Our method
performs equally well or better than the alternatives on all these datasets, which verifies its
effectiveness in practical situations. There is an interesting phenomenon that the advantage of
our method tends to be more obvious for higher-dimensional data. This is because high-dimensional
features make the differences in the distances between samples smaller, so the cluster structure
of the original data becomes vague. In this case, since our model is more robust than the
alternative methods (as verified in the synthetic experiments), we can get better clustering results.
6 Conclusions
In this paper, we proposed a novel graph based co-clustering model. Different from existing methods
which conduct clustering on the graph achieved from the original data, our model learned a new
bipartite graph with explicit cluster structure. By imposing the rank constraint on the Laplacian matrix
of the new bipartite graph, we guaranteed the learned graph to have exactly k connected components,
where k is the number of clusters. From this ideal structure of the new bipartite graph learned in
our model, the obvious clustering structure can be obtained without resorting to post-processing
steps. We presented experimental results on both synthetic data and four benchmark datasets, which
validated the effectiveness and robustness of our model.
References
[1] A. Bhattacharjee, W. G. Richards, J. Staunton, C. Li, S. Monti, P. Vasa, C. Ladd, J. Beheshti,
R. Bueno, M. Gillette, et al. Classification of human lung carcinomas by mRNA expression
profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of
Sciences, 98(24):13790-13795, 2001.
[2] F. R. K. Chung. Spectral Graph Theory. CBMS Regional Conference Series in Mathematics,
No. 92, American Mathematical Society, February 1997.
[3] X. Cui and T. E. Potok. Document clustering analysis based on hybrid PSO+K-means algorithm.
Journal of Computer Sciences (special issue), 27:33, 2005.
[4] I. S. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In
Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery
and data mining, pages 269-274. ACM, 2001.
[5] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix t-factorizations for
clustering. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge
discovery and data mining, pages 126-135. ACM, 2006.
[6] K. Fan. On a theorem of Weyl concerning eigenvalues of linear transformations. I. Proceedings
of the National Academy of Sciences, 35(11):652-655, 1949.
[7] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2):167-181, 2004.
[8] M. Gong, Y. Liang, J. Shi, W. Ma, and J. Ma. Fuzzy c-means clustering with local information
and kernel metric for image segmentation. Image Processing, IEEE Transactions on, 22(2):573-584, 2013.
[9] J. Huang, F. Nie, and H. Huang. A new simplex sparse learning model to measure data similarity
for clustering. In Proceedings of the 24th International Conference on Artificial Intelligence,
pages 3569-3575, 2015.
[10] Z. Liao and M. W. Datta. A simple computer program for calculating PSA recurrence in prostate
cancer patients. BMC Urology, 4(1):8, 2004.
[11] B. Mohar. The Laplacian spectrum of graphs. In Graph Theory, Combinatorics, and Applications,
pages 871-898. Wiley, 1991.
[12] F. Nie, X. Wang, and H. Huang. Clustering and projected clustering with adaptive neighbors. In
Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and
data mining, pages 977-986, 2014.
[13] F. Nie, X. Wang, M. I. Jordan, and H. Huang. The constrained Laplacian rank algorithm for
graph-based clustering. In AAAI, pages 1969-1976, 2016.
[14] H.-W. Nützmann and A. Osbourn. Gene clustering in plant specialized metabolism. Current
Opinion in Biotechnology, 26:91-99, 2014.
[15] E. F. Petricoin, D. K. Ornstein, C. P. Paweletz, A. Ardekani, P. S. Hackett, B. A. Hitt, A. Velassco,
C. Trucco, L. Wiegand, K. Wood, et al. Serum proteomic patterns for detection of prostate
cancer. Journal of the National Cancer Institute, 94(20):1576-1578, 2002.
[16] F. Piano, A. J. Schetter, D. G. Morton, K. C. Gunsalus, V. Reinke, S. K. Kim, and K. J.
Kemphues. Gene clustering based on RNAi phenotypes of ovary-enriched genes in C. elegans.
Current Biology, 12(22):1959-1964, 2002.
[17] F. Shahnaz, M. W. Berry, V. P. Pauca, and R. J. Plemmons. Document clustering using
nonnegative matrix factorization. Information Processing & Management, 42(2):373-386,
2006.
[18] J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine
Intelligence, IEEE Transactions on, 22(8):888-905, 2000.
[19] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In NIPS, 2004.
Learning Low-Dimensional Metrics
Lalit Jain*
University of Michigan
Ann Arbor, MI 48109
[email protected]
Blake Mason*
University of Wisconsin
Madison, WI 53706
[email protected]
Robert Nowak
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
This paper investigates the theoretical foundations of metric learning, focused on
three key questions that are not fully addressed in prior work: 1) we consider
learning general low-dimensional (low-rank) metrics as well as sparse metrics;
2) we develop upper and lower (minimax) bounds on the generalization error; 3)
we quantify the sample complexity of metric learning in terms of the dimension
of the feature space and the dimension/rank of the underlying metric; 4) we also
bound the accuracy of the learned metric relative to the underlying true generative
metric. All the results involve novel mathematical approaches to the metric learning
problem, and also shed new light on the special case of ordinal embedding (aka
non-metric multidimensional scaling).
1 Low-Dimensional Metric Learning
This paper studies the problem of learning a low-dimensional Euclidean metric from comparative
judgments. Specifically, consider a set of n items with high-dimensional features x_i ∈ R^p and
suppose we are given a set of (possibly noisy) distance comparisons of the form
    sign(dist(x_i, x_j) - dist(x_i, x_k)),
for a subset of all possible triplets of the items. Here we have in mind comparative judgments
made by humans and the distance function implicitly defined according to human perceptions of
similarities and differences. For example, the items could be images and the xi could be visual
features automatically extracted by a machine. Accordingly, our goal is to learn a p ? p symmetric
positive semi-definite (psd) matrix K such that the metric dK (xi , xj ) := (xi xj )T K(xi xj ),
where dK (xi , xj ) denotes the squared distance between molecules i and j with respect to a matrix
K, predicts the given distance comparisons as well as possible. Furthermore, it is often desired that
the metric is low-dimensional relative to the original high-dimensional feature representation (i.e.,
rank(K) ? d < p). There are several motivations for this:
- Learning a high-dimensional metric may be infeasible from a limited number of comparative judgments, and encouraging a low-dimensional solution is a natural regularization.
- Cognitive scientists are often interested in visualizing human perceptual judgments (e.g., in a two-dimensional representation) and determining which features most strongly influence human perceptions. For example, educational psychologists in [1] collected comparisons between visual representations of chemical molecules in order to identify a small set of visual features that most significantly influence the judgments of beginning chemistry students.
- It is sometimes reasonable to hypothesize that a small subset of the high-dimensional features dominate the underlying metric (i.e., many irrelevant features).
- Downstream applications of the learned metric (e.g., for classification purposes) may benefit from robust, low-dimensional metrics.
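To make the comparative-judgment setup concrete, here is a minimal sketch (my illustration, not code from the paper) of how a single comparison $\mathrm{sign}(d_K(x_i,x_j) - d_K(x_i,x_k))$ is evaluated for a given psd matrix K. The specific points and the rank-1 metric are hypothetical choices for demonstration only.

```python
import numpy as np

def d_K(x_i, x_j, K):
    """Squared distance (x_i - x_j)^T K (x_i - x_j) between items i and j."""
    diff = x_i - x_j
    return diff @ K @ diff

def triplet_sign(x_i, x_j, x_k, K):
    """+1 if item i is closer to item j than to item k under the metric K."""
    return 1 if d_K(x_i, x_j, K) < d_K(x_i, x_k, K) else -1

# A rank-1 metric on 3-dimensional features: only the first coordinate matters.
w = np.array([1.0, 0.0, 0.0])
K = np.outer(w, w)

x_i = np.array([0.0, 5.0, -5.0])
x_j = np.array([0.1, -9.0, 9.0])   # far in the ignored coordinates, close in the first
x_k = np.array([2.0, 0.0, 0.0])

print(triplet_sign(x_i, x_j, x_k, K))  # 1: under K, i is closer to j than to k
```

Note how the low-rank K discards the "irrelevant" coordinates entirely, which is exactly the sparse/low-rank motivation discussed above.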
* Authors contributed equally to this paper and are listed alphabetically.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Examples of K for p = 20 and d = 7. (a) A general low-rank psd matrix. (b) A sparse and low-rank psd matrix. The sparse case depicts a situation in which only some of the features are relevant to the metric.
With this in mind, several authors have proposed nuclear norm and ℓ1,2 group lasso norm regularization to encourage low-dimensional and sparse metrics as in Fig. 1b (see [2] for a review). Relative to such prior work, the contributions of this paper are three-fold:
1. We develop novel upper bounds on the generalization error and sample complexity of learning low-dimensional metrics from triplet distance comparisons. Notably, unlike previous generalization bounds, our bounds allow one to easily quantify how the feature space dimension p and rank or sparsity d < p of the underlying metric impacts the sample complexity.
2. We establish minimax lower bounds for learning low-rank and sparse metrics that match the upper bounds up to polylogarithmic factors, demonstrating the optimality of learning algorithms for the first time. Moreover, the upper and lower bounds demonstrate that learning sparse (and low-rank) metrics is essentially as difficult as learning a general low-rank metric. This suggests that nuclear norm regularization may be preferable in practice, since it places less restrictive assumptions on the problem.
3. We use the generalization error bounds to obtain model identification error bounds that quantify the accuracy of the learned K matrix. This problem has received very little, if any, attention in the past and is crucial for interpreting the learned metrics (e.g., in cognitive science applications). This is a bit surprising, since the term "metric learning" strongly suggests accurately determining a metric, not simply learning a predictor that is parameterized by a metric.
1.1 Comparison with Previous Work
There is a fairly large body of work on metric learning which is nicely reviewed and summarized in the monograph [2], and we refer the reader to it for a comprehensive summary of the field. Here we discuss a few recent works most closely connected to this paper. Several authors have developed generalization error bounds for metric learning, as well as bounds for downstream applications, such as classification, based on learned metrics. To use the terminology of [2], most of the focus has been on must-link/cannot-link constraints and less on relative constraints (i.e., triplet constraints as considered in this paper). Generalization bounds based on algorithmic robustness are studied in [3], but the generality of this framework makes it difficult to quantify the sample complexity of specific cases, such as low-rank or sparse metric learning. Rademacher complexities are used to establish generalization error bounds in the must-link/cannot-link situation in [4, 5, 6], but these works do not consider the case of relative/triplet constraints. The sparse compositional metric learning framework of [7] does focus on relative/triplet constraints and provides generalization error bounds in terms of covering numbers. However, this work does not provide bounds on the covering numbers, making it difficult to quantify the sample complexity. To sum up, prior work does not quantify the sample complexity of metric learning based on relative/triplet constraints in terms of the intrinsic problem dimensions (i.e., the dimension p of the high-dimensional feature space and the dimension of the underlying metric), there is no prior work on lower bounds, and no prior work quantifying the accuracy of learned metrics themselves (i.e., only bounds on prediction errors, not model identification errors). Finally, we mention that Fazel et al. [8] also consider the recovery of sparse and low-rank matrices from linear observations. Our situation is very different: our matrices are low rank because they are sparse, not sparse and simultaneously low rank as in their case.
2 The Metric Learning Problem
Consider n known points $X := [x_1, x_2, \dots, x_n] \in \mathbb{R}^{p \times n}$. We are interested in learning a symmetric positive semidefinite matrix K that specifies a metric on $\mathbb{R}^p$ given ordinal constraints on distances between the known points. Let S denote a set of triplets, where each $t = (i, j, k) \in S$ is drawn uniformly at random from the full set of $n\binom{n-1}{2}$ triplets $T := \{(i, j, k) : 1 \le i \ne j \ne k \le n,\ j < k\}$. For each triplet, we observe a $y_t \in \{\pm 1\}$ which is a noisy indication of the triplet constraint $d_K(x_i, x_j) < d_K(x_i, x_k)$. Specifically, we assume that each t has an associated probability $q_t$ of $y_t = 1$, and all $y_t$ are statistically independent.

Objective 1: Compute an estimate $\widehat{K}$ from S that predicts triplets as well as possible.

In many instances, our triplet measurements are noisy observations of triplets from a true positive semi-definite matrix $K^*$. In particular we assume
$$q_t > 1/2 \iff d_{K^*}(x_i, x_j) < d_{K^*}(x_i, x_k).$$
We can also assume an explicit known link function $f : \mathbb{R} \to [0, 1]$, so that $q_t = f\big(d_{K^*}(x_i, x_j) - d_{K^*}(x_i, x_k)\big)$.

Objective 2: Assuming an explicit known link function f, estimate $K^*$ from S.
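As an illustration of the generative model behind Objective 2, the sketch below (mine, not the paper's) samples noisy labels $y_t$ with $P(y_t = 1) = f(d_{K^*}(x_i,x_j) - d_{K^*}(x_i,x_k))$. The decreasing logistic link $f(x) = 1/(1+e^{x})$ is an assumption made here for concreteness; it satisfies $q_t > 1/2$ exactly when item i is closer to j than to k, as required.

```python
import numpy as np

def noisy_triplet_label(x_i, x_j, x_k, K_star, rng):
    """Sample y_t in {+1, -1} with P(y_t = 1) = f(d(i,j) - d(i,k))."""
    d_ij = (x_i - x_j) @ K_star @ (x_i - x_j)
    d_ik = (x_i - x_k) @ K_star @ (x_i - x_k)
    q_t = 1.0 / (1.0 + np.exp(d_ij - d_ik))  # logistic link: > 1/2 iff d_ij < d_ik
    return 1 if rng.random() < q_t else -1

rng = np.random.default_rng(0)
K_star = np.eye(2)
x_i, x_j, x_k = np.zeros(2), np.array([0.5, 0.0]), np.array([3.0, 0.0])

# Here d(i,j) = 0.25 and d(i,k) = 9.0, so q_t is very close to 1.
labels = [noisy_triplet_label(x_i, x_j, x_k, K_star, rng) for _ in range(1000)]
print(np.mean(labels))  # close to +1
```

Any strictly decreasing f with f(0) = 1/2 would serve the same role; the logistic choice is what makes the logarithmic loss introduced later coincide with logistic regression on the triplet differences.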
2.1 Definitions and Notation
Our triplet observations are nonlinear transformations of a linear function of the Gram matrix $G := X^T K X$. Indeed, for any triple t = (i, j, k), define
$$M_t(K) := d_K(x_i, x_j) - d_K(x_i, x_k) = x_i^T K x_k + x_k^T K x_i - x_i^T K x_j - x_j^T K x_i + x_j^T K x_j - x_k^T K x_k.$$
So for every $t \in S$, $y_t$ is a noisy measurement of $\mathrm{sign}(M_t(K))$. This linear operator may also be expressed as a matrix
$$M_t := x_i x_k^T + x_k x_i^T - x_i x_j^T - x_j x_i^T + x_j x_j^T - x_k x_k^T,$$
so that $M_t(K) = \langle M_t, K\rangle = \mathrm{Trace}(M_t^T K)$. We will use $M_t$ to denote the operator and associated matrix interchangeably. Ordering the elements of T lexicographically, we let $\mathcal{M}$ denote the linear map
$$\mathcal{M}(K) = \big(M_t(K) \mid t \in T\big) \in \mathbb{R}^{n\binom{n-1}{2}}.$$
Given a psd matrix K and a sample $t \in S$, we let $\ell(y_t\langle M_t, K\rangle)$ denote the loss of K with respect to t; e.g., the 0-1 loss $\mathbb{1}\{\mathrm{sign}(y_t\langle M_t, K\rangle) \ne 1\}$, the hinge loss $\max\{0,\ 1 - y_t\langle M_t, K\rangle\}$, or the logistic loss $\log(1 + \exp(-y_t\langle M_t, K\rangle))$. Note that we insist that our losses be functions of the triplet differences $\langle M_t, K\rangle$; this makes our losses invariant to rigid motions of the points $x_i$. Other models proposed for metric learning use scale-invariant loss functions [9].
For a given loss ℓ, we then define the empirical risk with respect to our set of observations S to be
$$\widehat{R}_S(K) := \frac{1}{|S|}\sum_{t \in S} \ell(y_t\langle M_t, K\rangle).$$
This is an unbiased estimator of the true risk $R(K) := \mathbb{E}[\ell(y_t\langle M_t, K\rangle)]$, where the expectation is taken with respect to a triplet t selected uniformly at random and the random value of $y_t$.
Finally, we let $I_n$ denote the n × n identity matrix, $\mathbf{1}_n$ the n-dimensional vector of all ones, and $V := I_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T$ the centering matrix. In particular, if $X \in \mathbb{R}^{p\times n}$ is a set of points, XV subtracts the mean of the columns of X from each column. We say that X is centered if XV = X, or equivalently $X\mathbf{1}_n = 0$. If G is the Gram matrix of the set of points X, i.e. $G = X^T X$, then we say that G is centered if X is centered or, equivalently, if $G\mathbf{1}_n = 0$. Furthermore, we use $\|\cdot\|_*$ to denote the nuclear norm, and $\|\cdot\|_{1,2}$ to denote the mixed $\ell_{1,2}$ norm of a matrix, the sum of the $\ell_2$ norms of its rows. Unless otherwise specified, we take $\|\cdot\|$ to be the standard operator norm when applied to matrices and the standard Euclidean norm when applied to vectors. Finally, we define the K-norm of a vector as $\|x\|_K^2 := x^T K x$.
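A quick numerical sanity check of this notation (an illustration of mine, not the paper's code): the matrix $M_t$ really does satisfy $\langle M_t, K\rangle = d_K(x_i,x_j) - d_K(x_i,x_k)$, and the empirical risk under the logistic loss is a simple average over the sampled triplets.

```python
import numpy as np

def M_t_matrix(x_i, x_j, x_k):
    """The matrix M_t with <M_t, K> = d_K(x_i, x_j) - d_K(x_i, x_k)."""
    return (np.outer(x_i, x_k) + np.outer(x_k, x_i)
            - np.outer(x_i, x_j) - np.outer(x_j, x_i)
            + np.outer(x_j, x_j) - np.outer(x_k, x_k))

def empirical_risk(K, X, triplets, labels):
    """Empirical risk R_S(K) under the logistic loss log(1 + exp(-y_t <M_t, K>))."""
    losses = []
    for (i, j, k), y in zip(triplets, labels):
        m = np.sum(M_t_matrix(X[:, i], X[:, j], X[:, k]) * K)  # <M_t, K>
        losses.append(np.log1p(np.exp(-y * m)))
    return float(np.mean(losses))

# Sanity check: <M_t, K> equals the triplet distance difference.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 6))          # 6 points in R^4, stored as columns
A = rng.standard_normal((4, 4))
K = A @ A.T                              # a random symmetric psd metric
i, j, k = 0, 2, 5
d = lambda a, b: (X[:, a] - X[:, b]) @ K @ (X[:, a] - X[:, b])
m = np.sum(M_t_matrix(X[:, i], X[:, j], X[:, k]) * K)
print(np.isclose(m, d(i, j) - d(i, k)))  # True
```

The Frobenius inner product $\langle M_t, K\rangle$ is computed as an elementwise sum, which is valid because K is symmetric.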
2.2 Sample Complexity of Learning Metrics
In most applications, we are interested in learning a matrix K that is low-rank and positive-semidefinite. Furthermore, as we will show in Theorem 2.1, such matrices can be learned using fewer samples than general psd matrices. As is common in machine learning applications, we relax the rank constraint to a nuclear norm constraint. In particular, let our constraint set be
$$\mathcal{K}_{\lambda,\gamma} = \Big\{K \in \mathbb{R}^{p\times p} \ \Big|\ K \text{ positive-semidefinite},\ \|K\|_* \le \lambda,\ \max_{t\in T}\langle M_t, K\rangle \le \gamma\Big\}.$$
Up to constants, a bound on $\langle M_t, K\rangle$ is a bound on $x_i^T K x_i$. This bound, along with the assumption that our loss function is Lipschitz, will lead to a tighter bound on the deviation of $\widehat{R}_S(K)$ from $R(K)$, crucial in our upper bound theorem.
Let $K^* := \arg\min_{K \in \mathcal{K}_{\lambda,\gamma}} R(K)$ be the true risk minimizer in this class, and let $\widehat{K} := \arg\min_{K \in \mathcal{K}_{\lambda,\gamma}} \widehat{R}_S(K)$ be the empirical risk minimizer. We achieve the following prediction error bounds for the empirical risk minimizer.
Theorem 2.1. Fix λ, γ, δ > 0. In addition, assume that $\max_{1\le i\le n}\|x_i\|_2 = 1$. If the loss function ℓ is L-Lipschitz, then with probability at least 1 − δ,
$$R(\widehat{K}) - R(K^*) \;\le\; 4L\lambda\left(\sqrt{\frac{2\gamma\|XX^T\|\log p}{n\,|S|}} + \frac{140\,\gamma\log p}{|S|}\right) + \sqrt{\frac{2L^2\gamma^2\log(2/\delta)}{|S|}}.$$
Note that past generalization error bounds in the metric learning literature have failed to quantify the precise dependence on observation noise, dimension, rank, and the features X. Consider the fact that a p × p matrix with rank d has O(dp) degrees of freedom. With that in mind, one expects the sample complexity to also be roughly O(dp). We next show that this intuition is correct if the original representation X is isotropic (i.e., has no preferred direction).

The Isotropic Case. Suppose that $x_1, \dots, x_n$, n > p, are drawn independently from the isotropic Gaussian $N(0, \frac{1}{p}I)$. Furthermore, suppose that $K^* = \frac{p}{\sqrt{d}}\,UU^T$ with $U \in \mathbb{R}^{p\times d}$ a generic (dense) orthogonal matrix with unit norm columns. The factor $\frac{p}{\sqrt{d}}$ is simply the scaling needed so that the average magnitude of the entries in $K^*$ is a constant, independent of the dimensions p and d. In this case, $\mathrm{rank}(K^*) = d$ and $\|K^*\|_F = p$. These two facts imply that the tightest bound on the nuclear norm of $K^*$ is $\|K^*\|_* \le p\sqrt{d}$. Thus, we take $\lambda = p\sqrt{d}$ for the nuclear norm constraint. Now let $z_i = \sqrt{p}\,U^T x_i \sim N(0, I_d)$ and note that $\|x_i\|_{K^*}^2 = \frac{1}{\sqrt{d}}\|z_i\|^2$. Therefore $\mathbb{E}\|x_i\|_{K^*}^2 = \sqrt{d}$, and it follows from standard concentration bounds that with large probability $\max_i \|x_i\|_{K^*}^2 \le 5d\log n =: \gamma$; see [10]. Also, because the $x_i \sim N(0, \frac{1}{p}I)$, it follows that if $n > p\log p$, say, then with large probability $\|XX^T\| \le 5n/p$. We now plug these calculations into Theorem 2.1 to obtain the following corollary.
Corollary 2.1.1 (Sample complexity for isotropic points). Fix δ > 0, set $\lambda = p\sqrt{d}$, and assume that $\|XX^T\| = O(n/p)$ and $\gamma := \max_i \|x_i\|_{K^*}^2 = O(d\log n)$. Then for a generic $K^* \in \mathcal{K}_{\lambda,\gamma}$, as constructed above, with probability at least 1 − δ,
$$R(\widehat{K}) - R(K^*) = O\!\left(\sqrt{\frac{\gamma\, dp\,(\log p + \log n)}{|S|}}\right).$$
This bound agrees with the intuition that the sample complexity should grow roughly like dp, the degrees of freedom of $K^*$. Moreover, our minimax lower bound in Theorem 2.3 below shows that, ignoring logarithmic factors, the general upper bound in Theorem 2.1 is unimprovable in general.
Beyond low-rank metrics, in many applications it is reasonable to assume that only a few of the features are salient and should be given nonzero weight. Such a metric may be learned by insisting that K be row-sparse in addition to being low rank. Whereas learning a low-rank K assumes that distance is well represented in a low-dimensional subspace, a row-sparse (and hence low-rank) K defines a metric using only a subset of the features. Figure 1 gives a comparison of a low-rank versus a low-rank and sparse matrix K.
Analogous to the convex relaxation of rank by the nuclear norm, it is common to relax row sparsity by using the mixed $\ell_{1,2}$ norm. In fact, the geometry of the $\ell_{1,2}$ and nuclear norm balls are tightly related, as the following lemma shows.
Lemma 2.2. For a symmetric positive semi-definite matrix $K \in \mathbb{R}^{p\times p}$, $\|K\|_* \le \|K\|_{1,2}$.
Proof.
$$\|K\|_{1,2} = \sum_{i=1}^{p}\sqrt{\sum_{j=1}^{p} K_{i,j}^2} \;\ge\; \sum_{i=1}^{p} K_{i,i} = \mathrm{Trace}(K) = \sum_{i=1}^{p}\lambda_i(K) = \|K\|_*.$$
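The inequality is easy to check numerically; the sketch below (my illustration) draws a random symmetric psd matrix and compares the two norms. It also verifies the key step of the proof: for a psd matrix, the nuclear norm equals the trace.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric psd matrix K = A A^T.
A = rng.standard_normal((6, 6))
K = A @ A.T

nuclear_norm = np.linalg.norm(K, ord='nuc')    # sum of eigenvalues, since K is psd
l12_norm = np.linalg.norm(K, axis=1).sum()     # sum of the l2 norms of the rows

# Lemma 2.2: each row norm dominates the corresponding diagonal entry,
# so the l_{1,2} norm dominates the trace, which equals the nuclear norm.
print(nuclear_norm <= l12_norm)  # True
```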
This implies that the $\ell_{1,2}$ ball of a given radius is contained inside the nuclear norm ball of the same radius. In particular, it is reasonable to assume that it is easier to learn a K that is sparse in addition to being low rank. Surprisingly, however, the following minimax bound shows that this is not necessarily the case.
To make this more precise, we will consider optimization over the set
$$\mathcal{K}'_{\lambda,\gamma} = \Big\{K \in \mathbb{R}^{p\times p} \ \Big|\ K \text{ positive-semidefinite},\ \|K\|_{1,2} \le \lambda,\ \max_{t\in T}\langle M_t, K\rangle \le \gamma\Big\}.$$
Furthermore, we must specify the way in which our data could be generated from noisy triplet observations of a fixed $K^*$. To this end, assume the existence of a link function $f : \mathbb{R} \to [0, 1]$ so that $q_t = P(y_t = 1) = f(M_t(K^*))$ governs the observations. There is a natural associated logarithmic loss function $\ell_f$ corresponding to the log-likelihood, where the loss of an arbitrary K is
$$\ell_f(y_t\langle M_t, K\rangle) = \mathbb{1}\{y_t = 1\}\log\frac{1}{f(\langle M_t, K\rangle)} + \mathbb{1}\{y_t = -1\}\log\frac{1}{1 - f(\langle M_t, K\rangle)}.$$
Theorem 2.3. Choose a link function f and let $\ell_f$ be the associated logarithmic loss. For p sufficiently large, there exists a choice of λ, γ, X, and |S| such that
$$\inf_{\widehat{K}}\ \sup_{K \in \mathcal{K}'_{\lambda,\gamma}} \mathbb{E}\big[R(\widehat{K})\big] - R(K) \;\ge\; C\,\lambda\sqrt{\frac{C_1\|XX^T\|}{n\,|S|}},$$
where $C = \frac{C_f^2}{32}\sqrt{\frac{\inf_{|x|\le\gamma} f(x)(1-f(x))}{\sup_{|\xi|\le\gamma} f'(\xi)^2}}$ with $C_f = \inf_{|x|\le\gamma} f'(x)$, $C_1$ is an absolute constant, and the infimum is taken over all estimators $\widehat{K}$ of K from |S| samples.
Importantly, up to polylogarithmic factors and constants, our minimax lower bound over the $\ell_{1,2}$ ball matches the upper bound over the nuclear norm ball given in Theorem 2.1. In particular, in the worst case, learning a sparse and low-rank matrix K is no easier than learning a K that is simply low rank. However, in many realistic cases, a slight performance gain is seen from optimizing over the $\ell_{1,2}$ ball when $K^*$ is row-sparse, while optimizing over the nuclear norm ball does better when $K^*$ is dense. We show examples of this in Section 3. The proof is given in the supplementary materials. Note that if γ is in a bounded range, then the constant C has little effect. For the case that f is the logistic function, $C_f \ge \frac{1}{4}e^{-\gamma}$. Likewise, the term under the root will also be bounded for γ in a constant range. The terms in the constant C arise when translating from risk and a KL-divergence to squared distance, and reflect the noise in the problem.
2.3 Sample Complexity Bounds for Identification
Under a general loss function and arbitrary $K^*$, we cannot hope to convert our prediction error bounds into a recovery statement. However, in this section we will show that as long as $K^*$ is low rank, and if we choose the loss function to be the log loss $\ell_f$ of a given link function f as defined prior to the statement of Theorem 2.3, recovery is possible. Firstly, note that under these assumptions we have an explicit formula for the risk,
$$R(K) = \frac{1}{|T|}\sum_{t\in T}\left[ f(\langle M_t, K^*\rangle)\log\frac{1}{f(\langle M_t, K\rangle)} + \big(1 - f(\langle M_t, K^*\rangle)\big)\log\frac{1}{1 - f(\langle M_t, K\rangle)}\right]$$
and
$$R(K) - R(K^*) = \frac{1}{|T|}\sum_{t\in T}\mathrm{KL}\big(f(\langle M_t, K^*\rangle)\,\big\|\,f(\langle M_t, K\rangle)\big).$$
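This identity is straightforward to verify numerically. The sketch below (my illustration, with an assumed logistic link) computes the excess risk directly as the average Bernoulli KL-divergence between the triplet probabilities under $K^*$ and under K; it is zero exactly when the two sets of triplet margins $\langle M_t, \cdot\rangle$ agree.

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def excess_risk(margins_star, margins_K, f):
    """(1/|T|) * sum_t KL( f(<M_t, K*>) || f(<M_t, K>) )."""
    return float(np.mean([kl_bernoulli(f(a), f(b))
                          for a, b in zip(margins_star, margins_K)]))

f = lambda x: 1.0 / (1.0 + np.exp(x))  # an assumed logistic link

margins_star = np.array([-2.0, 0.5, 1.5])   # values of <M_t, K*> over all triplets
print(excess_risk(margins_star, margins_star, f))            # 0.0 at K = K*
print(excess_risk(margins_star, margins_star + 1.0, f) > 0)  # True otherwise
```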
The following lemma shows that if the excess risk is small, i.e. $R(\widehat{K})$ approximates $R(K^*)$ well, then $\mathcal{M}(\widehat{K})$ approximates $\mathcal{M}(K^*)$ well. The proof, given in the supplementary materials, uses standard Taylor series arguments to show the KL-divergence is bounded below by squared distance.
Lemma 2.4. Let $C_f = \inf_{|x|\le\gamma} f'(x)$. Then for any $K \in \mathcal{K}_{\lambda,\gamma}$,
$$\frac{2C_f^2}{|T|}\,\big\|\mathcal{M}(K) - \mathcal{M}(K^*)\big\|^2 \;\le\; R(K) - R(K^*).$$
The following may give us hope that recovering $K^*$ from $\mathcal{M}(K^*)$ is trivial, but the linear operator $\mathcal{M}$ is non-invertible in general, as we discuss next. To see why, we must consider a more general class of operators defined on Gram matrices. Given a symmetric matrix G, define the operator $L_t$ by
$$L_t(G) = 2G_{ik} - 2G_{ij} + G_{jj} - G_{kk}.$$
If $G = X^T K X$ then $L_t(G) = M_t(K)$, and moreover $M_t = X L_t X^T$. Analogous to $\mathcal{M}$, we combine the $L_t$ operators into a single operator $\mathcal{L}$,
$$\mathcal{L}(G) = \big(L_t(G) \mid t \in T\big) \in \mathbb{R}^{n\binom{n-1}{2}}.$$
Lemma 2.5. The null space of $\mathcal{L}$ is one-dimensional, spanned by $V = I_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T$.
The proof is contained in the supplementary materials. In particular, we see that $\mathcal{M}$ is not invertible in general, adding a serious complication to our argument. However, $\mathcal{L}$ is still invertible on the subset of centered symmetric matrices orthogonal to V, a fact that we will now exploit. We can decompose G into V and a component orthogonal to V denoted H,
$$G = H + \lambda_G V, \quad \text{where } \lambda_G := \frac{\langle G, V\rangle}{\|V\|_F^2},$$
and under the assumption that G is centered, $\lambda_G = \frac{\|G\|_*}{n-1}$. Remarkably, the following lemma tells us that a non-linear function of H uniquely determines G.
Lemma 2.6. If n > d + 1, and G is rank d and centered, then $-\lambda_G$ is an eigenvalue of H with multiplicity n − d − 1. In addition, given another Gram matrix G′ of rank d′, $\lambda_{G'} - \lambda_G$ is an eigenvalue of H − H′ with multiplicity at least n − d − d′ − 1.
Proof. Since G is centered, $\mathbf{1}_n \in \ker G$, and in particular $\dim(\mathbf{1}_n^{\perp} \cap \ker G) = n - d - 1$. If $x \in \mathbf{1}_n^{\perp} \cap \ker G$, then
$$Gx = Hx + \lambda_G V x \;\Rightarrow\; Hx = -\lambda_G\, x.$$
For the second statement, notice that $\dim(\mathbf{1}_n^{\perp} \cap \ker G \cap \ker G') \ge n - d - d' - 1$. A similar argument then applies.
If n > 2d, then the multiplicity of the eigenvalue $-\lambda_G$ is at least n/2. So we can trivially identify it from the spectrum of H. This gives us a non-linear way to recover G from H.
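The eigenvalue recovery in Lemma 2.6 is easy to demonstrate numerically. The sketch below (my illustration, not the paper's code) builds a centered rank-d Gram matrix, forms its component H orthogonal to V, and confirms that $-\lambda_G$ shows up in the spectrum of H with multiplicity n − d − 1, so that G can be reassembled as $H + \lambda_G V$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 12, 3

# A centered rank-d Gram matrix G = Z^T Z, with the points Z centered.
Z = rng.standard_normal((d, n))
Z = Z - Z.mean(axis=1, keepdims=True)
G = Z.T @ Z

V = np.eye(n) - np.ones((n, n)) / n
lam_G = np.trace(G) / (n - 1)   # = <G,V>/||V||_F^2 = ||G||_*/(n-1) for centered psd G
H = G - lam_G * V               # the component of G orthogonal to V

# Lemma 2.6: -lam_G is an eigenvalue of H with multiplicity n - d - 1,
# so lam_G (and hence G = H + lam_G * V) can be read off the spectrum of H.
eigenvalues = np.linalg.eigvalsh(H)
multiplicity = int(np.sum(np.isclose(eigenvalues, -lam_G, atol=1e-8)))
print(multiplicity == n - d - 1)  # True
```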
Now we can return to the task of recovering $K^*$ from $\mathcal{M}(\widehat{K})$. Indeed, the above lemma implies that $G^*$ (and hence $K^*$ if X is full rank) can be recovered from $H^*$ by computing an eigenvalue of $H^*$. However, $H^*$ is recoverable from $\mathcal{L}(H^*)$, which is itself well approximated by $\mathcal{L}(\widehat{H}) = \mathcal{M}(\widehat{K})$. The proof of the following theorem makes this argument precise.
Theorem 2.7. Assume that $K^*$ is rank d, $\widehat{K}$ is rank d′, n > d + d′ + 1, X is rank p, and $X^T K^* X$ and $X^T \widehat{K} X$ are both centered. Let $C_{d,d'} = 1 + \frac{n-1}{n-d-d'-1}$. Then
$$\frac{n\,\lambda_{\min}(XX^T)^2}{|T|}\,\big\|\widehat{K} - K^*\big\|_F^2 \;\le\; \frac{2\lambda\, C_{d,d'}}{C_f^2}\left[\,4L\left(\sqrt{\frac{2\gamma\|XX^T\|\log p}{n\,|S|}} + \frac{140\,\gamma\log p}{|S|}\right) + \sqrt{\frac{2L^2\gamma^2\log(2/\delta)}{|S|}}\,\right],$$
where $\lambda_{\min}(XX^T)$ is the smallest eigenvalue of $XX^T$.
The proof, given in the supplementary materials, relies on two key components: Lemma 2.6 and a type of restricted isometry property for $\mathcal{M}$ on $V^{\perp}$. Our proof technique is a streamlined and more general approach similar to that used in the special case of ordinal embedding. In fact, our new bound improves on the recovery bound given in [11] for ordinal embedding.
We have several remarks about the bound in the theorem. If X is well conditioned, e.g. isotropic, then $\lambda_{\min}(XX^T) \approx n/p$. In that case $\frac{n\,\lambda_{\min}(XX^T)^2}{|T|} \approx \frac{1}{p^2}$, so the left-hand side is the average squared error of the recovery. In most applications the rank of the empirical risk minimizer $\widehat{K}$ is approximately equal to the rank of $K^*$, i.e. d ≈ d′. Note that if $d + d' \le \frac{1}{2}(n-1)$ then $C_{d,d'} \le 3$. Finally, the assumption that $X^T K^* X$ is centered can be guaranteed by centering X, which has no impact on the triplet differences $\langle M_t, K^*\rangle$, or by insisting that $K^*$ is centered. As mentioned above, $C_f$ will have little effect assuming that our measurements $\langle M_t, K\rangle$ are bounded.
2.4 Applications to Ordinal Embedding
In the ordinal embedding setting, there is a set of items with unknown locations $z_1, \dots, z_n \in \mathbb{R}^d$ and a set of triplet observations S where, as in the metric learning case, observing $y_t = 1$ for a triplet t = (i, j, k) is indicative of $\|z_i - z_j\|^2 \le \|z_i - z_k\|^2$, i.e. item i is closer to j than to k. The goal is to recover the $z_i$'s, up to rigid motions, by recovering their Gram matrix $G^*$ from these comparisons. The ordinal embedding case reduces to metric learning through the following observation. Consider the case when n = p and $X = I_p$, i.e. the $x_i$ are standard basis vectors. Letting $K^* = G^*$, we see that $\|x_i - x_j\|_{K^*}^2 = \|z_i - z_j\|^2$. So in particular, $L_t = M_t$ for each triple t, and observations are exactly comparative distance judgments. Our results then apply, and extend previous work on sample complexity in the ordinal embedding setting given in [11]. In particular, though Theorem 5 in [11] provides a consistency guarantee that the empirical risk minimizer $\widehat{G}$ will converge to $G^*$, they do not provide a convergence rate. We resolve this issue now.
In their work, it is assumed that $\|z_i\|^2 \le \gamma$ and $\|G\|_* \le \gamma\sqrt{dn}$. In particular, sample complexity results of the form $O(dn\log n)$ are obtained. However, these results are trivial in the following sense: if $\|z_i\|^2 \le \gamma$ then $\|G\|_* \le \gamma n$, and their results (as well as our upper bound) imply that the true sample complexity is significantly smaller, namely $O(\gamma n\log n)$, which is independent of the ambient dimension d. As before, assume an explicit link function f with Lipschitz constant L, so the samples are noisy observations governed by $G^*$, and take the loss to be the logarithmic loss associated to f. We obtain the following improved recovery bound in this case. The proof is immediate from Theorem 2.7.
Corollary 2.7.1. Let $G^*$ be the Gram matrix of n centered points in d dimensions with $\|G^*\|_F^2 = \frac{\gamma^2 n^2}{d}$. Let $\widehat{G} = \arg\min_{\|G\|_* \le \gamma n,\ \|G\|_\infty \le \gamma} \widehat{R}_S(G)$ and assume that $\widehat{G}$ is rank d, with n > 2d + 1. Then
$$\frac{\big\|\widehat{G} - G^*\big\|_F^2}{n^2} = O\!\left(\frac{L\,C_{d,d}}{C_f^2}\sqrt{\frac{\gamma\, n\log n}{|S|}}\right).$$
3 Experiments
To validate our complexity and recovery guarantees, we ran the following simulations. We generate $x_1, \dots, x_n \overset{iid}{\sim} N(0, \frac{1}{p}I)$, with n = 200, and $K^* = \frac{p}{\sqrt{d}}UU^T$ for a random orthogonal matrix $U \in \mathbb{R}^{p\times d}$ with unit norm columns. In Figure 2a, $K^*$ has d nonzero rows/columns. In Figure 2b, $K^*$ is a dense rank-d matrix. We compare the performance of nuclear norm and $\ell_{1,2}$ regularization in each setting against an unconstrained baseline where we only enforce that K be psd. Given a fixed number of samples, each method is compared in terms of the relative excess risk, $\frac{R(\widehat{K}) - R(K^*)}{R(K^*)}$, and the relative squared recovery error, $\frac{\|\widehat{K} - K^*\|_F^2}{\|K^*\|_F^2}$, averaged over 20 trials. The y-axes of both plots have been trimmed for readability.
In the case that $K^*$ is sparse, $\ell_{1,2}$ regularization outperforms nuclear norm regularization. However, in the case of dense low-rank matrices, nuclear norm regularization is superior. Notably, as expected from our upper and lower bounds, the performances of the two approaches appear to be within constant factors of each other. Therefore, unless there is strong reason to believe that the underlying $K^*$ is sparse, nuclear norm regularization achieves comparable performance with a less restrictive modeling assumption. Furthermore, in the two settings, both the nuclear norm and $\ell_{1,2}$ constrained methods outperform the unconstrained baseline, especially in the case where $K^*$ is low rank and sparse.
To empirically validate our sample complexity results, we compute the number of samples, averaged over 20 runs, needed to achieve a relative excess risk of less than 0.1 in Figure 3. First, we fix p = 100 and increment d from 1 to 10. Then we fix d = 10 and increment p from 10 to 100 to clearly show the linear dependence of the sample complexity on d and p, as demonstrated in Corollary 2.1.1. To our knowledge, these are the first results quantifying the sample complexity in terms of the number of features, p, and the embedding dimension, d.
Figure 2: $\ell_{1,2}$ and nuclear norm regularization performance. (a) Sparse low-rank metric. (b) Dense low-rank metric.
Figure 3: Number of samples to achieve relative excess risk < 0.1. (a) d varying. (b) p varying.
Acknowledgments. This work was partially supported by the NSF grants CCF-1218189 and IIS-1623605.
References
[1] Martina A. Rau, Blake Mason, and Robert D. Nowak. How to model implicit knowledge? Similarity learning methods to assess perceptions of visual representations. In Proceedings of the 9th International Conference on Educational Data Mining, pages 199–206, 2016.
[2] Aurélien Bellet, Amaury Habrard, and Marc Sebban. Metric learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 9(1):1–151, 2015.
[3] Aurélien Bellet and Amaury Habrard. Robustness and generalization for metric learning. Neurocomputing, 151:259–267, 2015.
[4] Zheng-Chu Guo and Yiming Ying. Guaranteed classification via regularized similarity learning. Neural Computation, 26(3):497–522, 2014.
[5] Yiming Ying, Kaizhu Huang, and Colin Campbell. Sparse metric learning via smooth optimization. In Advances in Neural Information Processing Systems, pages 2214–2222, 2009.
[6] Wei Bian and Dacheng Tao. Constrained empirical risk minimization framework for distance metric learning. IEEE Transactions on Neural Networks and Learning Systems, 23(8):1194–1205, 2012.
[7] Yuan Shi, Aurélien Bellet, and Fei Sha. Sparse compositional metric learning. arXiv preprint arXiv:1404.4105, 2014.
[8] Samet Oymak, Amin Jalali, Maryam Fazel, Yonina C. Eldar, and Babak Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Transactions on Information Theory, 61(5):2886–2908, 2015.
[9] Eric Heim, Matthew Berger, Lee Seversky, and Milos Hauskrecht. Active perceptual similarity modeling with auxiliary information. arXiv preprint arXiv:1511.02254, 2015.
[10] Kenneth R. Davidson and Stanislaw J. Szarek. Local operator theory, random matrices and Banach spaces. Handbook of the Geometry of Banach Spaces, 1:317–366, 2001.
[11] Lalit Jain, Kevin G. Jamieson, and Rob Nowak. Finite sample prediction and recovery bounds for ordinal embedding. In Advances in Neural Information Processing Systems, pages 2703–2711, 2016.
[12] Mark A. Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference: A Journal of the IMA, 3(3):189–223, 2014.
[13] Joel A. Tropp. An introduction to matrix concentration inequalities, 2015.
[14] Felix Abramovich and Vadim Grinshtein. Model selection and minimax estimation in generalized linear models. IEEE Transactions on Information Theory, 62(6):3721–3730, 2016.
[15] Florentina Bunea, Alexandre B. Tsybakov, and Marten H. Wegkamp. Aggregation for Gaussian regression. The Annals of Statistics, 35(4):1674–1697, 2007.
[16] Philippe Rigollet and Alexandre Tsybakov. Exponential screening and optimal rates of sparse estimation. The Annals of Statistics, pages 731–771, 2011.
[17] Jon Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2011.
The Marginal Value of Adaptive Gradient Methods
in Machine Learning
Ashia C. Wilson†, Rebecca Roelofs†, Mitchell Stern†, Nathan Srebro‡, and Benjamin Recht†
{ashia,roelofs,mitchell}@berkeley.edu, [email protected], [email protected]
† University of California, Berkeley
‡ Toyota Technological Institute at Chicago
Abstract
Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam.
We show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic gradient
descent (SGD). We construct an illustrative binary classification problem where
the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad,
Adam, and RMSProp attain test errors arbitrarily close to half. We additionally
study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive
methods generalize worse (often significantly worse) than SGD, even when these
solutions have better training performance. These results suggest that practitioners
should reconsider the use of adaptive methods to train neural networks.
1 Introduction
An increasing share of deep learning researchers are training their models with adaptive gradient
methods [3, 12] due to their rapid training time [6]. Adam [8] in particular has become the default
algorithm used across many deep learning frameworks. However, the generalization and out-of-sample behavior of such adaptive gradient methods remains poorly understood. Given that many
passes over the data are needed to minimize the training objective, typical regret guarantees do not
necessarily ensure that the found solutions will generalize [17].
Notably, when the number of parameters exceeds the number of data points, it is possible that the
choice of algorithm can dramatically influence which model is learned [15]. Given two different
minimizers of some optimization problem, what can we say about their relative ability to generalize?
In this paper, we show that adaptive and non-adaptive optimization methods indeed find very different
solutions with very different generalization properties. We provide a simple generative model for
binary classification where the population is linearly separable (i.e., there exists a solution with large
margin), but AdaGrad [3], RMSProp [21], and Adam converge to a solution that incorrectly classifies
new data with probability arbitrarily close to half. On this same example, SGD finds a solution with
zero error on new data. Our construction suggests that adaptive methods tend to give undue influence
to spurious features that have no effect on out-of-sample generalization.
We additionally present numerical experiments demonstrating that adaptive methods generalize worse
than their non-adaptive counterparts. Our experiments reveal three primary findings. First, with
the same amount of hyperparameter tuning, SGD and SGD with momentum outperform adaptive
methods on the development/test set across all evaluated models and tasks. This is true even when
the adaptive methods achieve the same training loss or lower than non-adaptive methods. Second,
adaptive methods often display faster initial progress on the training set, but their performance quickly
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
plateaus on the development/test set. Third, the same amount of tuning was required for all methods,
including adaptive methods. This challenges the conventional wisdom that adaptive methods require
less tuning. Moreover, as a useful guide to future practice, we propose a simple scheme for tuning
learning rates and decays that performs well on all deep learning tasks we studied.
2 Background
The canonical optimization algorithms used to minimize risk are either stochastic gradient methods
or stochastic momentum methods. Stochastic gradient methods can generally be written
$$w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k), \qquad (2.1)$$
where $\tilde{\nabla} f(w_k) := \nabla f(w_k; x_{i_k})$ is the gradient of some loss function $f$ computed on a batch of data $x_{i_k}$.
Stochastic momentum methods are a second family of techniques that have been used to accelerate
training. These methods can generally be written as
$$w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k(w_k - w_{k-1}). \qquad (2.2)$$
The sequence of iterates (2.2) includes Polyak's heavy-ball method (HB) with $\gamma_k = 0$, and Nesterov's
Accelerated Gradient method (NAG) [19] with $\gamma_k = \beta_k$.
Notable exceptions to the general formulations (2.1) and (2.2) are adaptive gradient and adaptive
momentum methods, which choose a local distance measure constructed using the entire sequence of
iterates $(w_1, \cdots, w_k)$. These methods (including AdaGrad [3], RMSProp [21], and Adam [8]) can
generally be written as
$$w_{k+1} = w_k - \alpha_k H_k^{-1} \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1}(w_k - w_{k-1}), \qquad (2.3)$$
where $H_k := H(w_1, \cdots, w_k)$ is a positive definite matrix. Though not necessary, the matrix $H_k$ is
usually defined as
$$H_k = \mathrm{diag}\left( \left\{ \sum_{i=1}^{k} \eta_i \, g_i \circ g_i \right\}^{1/2} \right), \qquad (2.4)$$
where $\circ$ denotes the entry-wise or Hadamard product, $g_k = \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1}))$, and $\eta_k$ is
some set of coefficients specified for each algorithm. That is, $H_k$ is a diagonal matrix whose entries
are the square roots of a linear combination of squares of past gradient components. We will use the
fact that the $H_k$ are defined in this fashion in the sequel. For the specific settings of the parameters for
many of the algorithms used in deep learning, see Table 1. Adaptive methods attempt to adjust an
algorithm to the geometry of the data. In contrast, stochastic gradient descent and related variants use
the $\ell_2$ geometry inherent to the parameter space, and are equivalent to setting $H_k = I$ in the adaptive
methods.
SGD:      G_k = I;                                  alpha_k = alpha;  beta_k = 0;     gamma_k = 0
HB:       G_k = I;                                  alpha_k = alpha;  beta_k = beta;  gamma_k = 0
NAG:      G_k = I;                                  alpha_k = alpha;  beta_k = beta;  gamma_k = beta
AdaGrad:  G_k = G_{k-1} + D_k;                      alpha_k = alpha;  beta_k = 0;     gamma_k = 0
RMSProp:  G_k = beta_2 G_{k-1} + (1 - beta_2) D_k;  alpha_k = alpha;  beta_k = 0;     gamma_k = 0
Adam:     G_k = (beta_2 (1 - beta_2^{k-1}) / (1 - beta_2^k)) G_{k-1} + ((1 - beta_2) / (1 - beta_2^k)) D_k;
          alpha_k = alpha (1 - beta_1) / (1 - beta_1^k);  beta_k = beta_1 (1 - beta_1^{k-1}) / (1 - beta_1^k);  gamma_k = 0

Table 1: Parameter settings of algorithms used in deep learning. Here, $D_k = \mathrm{diag}(g_k \circ g_k)$ and
$G_k := H_k \circ H_k$. We omit the additional $\epsilon$ added to the adaptive methods, which is only needed to ensure
non-singularity of the matrices $H_k$.
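To make the role of $H_k$ concrete, here is a minimal sketch (our own illustrative code, not from the paper) of the update (2.3) with $\beta_k = \gamma_k = 0$, comparing SGD ($H_k = I$) with AdaGrad, whose diagonal $H_k$ rescales each coordinate by the root of its accumulated squared gradients:

```python
import numpy as np

def sgd_step(w, g, lr):
    # Plain SGD: H_k = I, step proportional to the gradient itself.
    return w - lr * g

def adagrad_step(w, g, accum, lr, eps=0.0):
    # AdaGrad: accumulate squared gradients and apply H_k^{-1} from (2.4).
    accum = accum + g * g
    h_inv = 1.0 / (np.sqrt(accum) + eps)
    return w - lr * h_inv * g, accum

w = np.zeros(2)
g = np.array([10.0, 0.1])  # two coordinates with wildly different gradient scales
w_sgd = sgd_step(w, g, lr=0.1)
w_ada, _ = adagrad_step(w, g, np.zeros(2), lr=0.1)

print(w_sgd)  # [-1.0, -0.01]: step proportional to the gradient
print(w_ada)  # [-0.1, -0.1]: the first AdaGrad step is lr * sign(g)
```

On the very first iteration AdaGrad moves every coordinate by the same amount regardless of gradient magnitude, which is exactly the sign-driven behavior exploited by Lemma 3.1 below.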
In this context, generalization refers to the performance of a solution w on a broader population.
Performance is often defined in terms of a different loss function than the function f used in training.
For example, in classification tasks, we typically define generalization in terms of classification error
rather than cross-entropy.
2.1 Related Work
Understanding how optimization relates to generalization is a very active area of current machine
learning research. Most of the seminal work in this area has focused on understanding how early
stopping can act as implicit regularization [22]. In a similar vein, Ma and Belkin [10] have shown
that gradient methods may not be able to find complex solutions at all in any reasonable amount of
time. Hardt et al. [17] show that SGD is uniformly stable, and therefore solutions with low training
error found quickly will generalize well. Similarly, using a stability argument, Raginsky et al. [16]
have shown that Langevin dynamics can find solutions that generalize better than ordinary SGD
in non-convex settings. Neyshabur, Srebro, and Tomioka [15] discuss how algorithmic choices can
act as implicit regularizer. In a similar vein, Neyshabur, Salakhutdinov, and Srebro [14] show that a
different algorithm, one which performs descent using a metric that is invariant to re-scaling of the
parameters, can lead to solutions which sometimes generalize better than SGD. Our work supports
the work of [14] by drawing connections between the metric used to perform local optimization and
the ability of the training algorithm to find solutions that generalize. However, we focus primarily on
the different generalization properties of adaptive and non-adaptive methods.
A similar line of inquiry has been pursued by Keskar et al. [7]. Hochreiter and Schmidhuber [4]
showed that "sharp" minimizers generalize poorly, whereas "flat" minimizers generalize well. Keskar
et al. empirically show that Adam converges to sharper minimizers when the batch size is increased.
However, they observe that even with small batches, Adam does not find solutions whose performance
matches state-of-the-art. In the current work, we aim to show that the choice of Adam as an optimizer
itself strongly influences the set of minimizers that any batch size will ever see, and help explain why
they were unable to find solutions that generalized particularly well.
3 The potential perils of adaptivity
The goal of this section is to illustrate the following observation: when a problem has multiple global
minima, different algorithms can find entirely different solutions when initialized from the same point.
In addition, we construct an example where adaptive gradient methods find a solution which has
worse out-of-sample error than SGD.
To simplify the presentation, let us restrict our attention to the binary least-squares classification
problem, where we can easily compute the closed-form solution found by different methods.
In least-squares classification, we aim to solve
$$\mathrm{minimize}_w \; R_S[w] := \tfrac{1}{2} \|Xw - y\|_2^2 \,. \qquad (3.1)$$
Here $X$ is an $n \times d$ matrix of features and $y$ is an $n$-dimensional vector of labels in $\{-1, 1\}$. We
aim to find the best linear classifier w. Note that when d > n, if there is a minimizer with loss 0
then there is an infinite number of global minimizers. The question remains: what solution does an
algorithm find and how well does it perform on unseen data?
3.1 Non-adaptive methods
Most common non-adaptive methods will find the same solution for the least squares objective (3.1).
Any gradient or stochastic gradient of RS must lie in the span of the rows of X. Therefore, any
method that is initialized in the row span of X (say, for instance at w = 0) and uses only linear
combinations of gradients, stochastic gradients, and previous iterates must also lie in the row span
of X. The unique solution that lies in the row span of X also happens to be the solution with
minimum Euclidean norm. We thus denote $w^{\mathrm{SGD}} = X^T (XX^T)^{-1} y$. Almost all non-adaptive
methods like SGD, SGD with momentum, mini-batch SGD, gradient descent, Nesterov's method,
and the conjugate gradient method will converge to this minimum norm solution. The minimum norm
solutions have the largest margin out of all solutions of the equation Xw = y. Maximizing margin
has a long and fruitful history in machine learning, and thus it is a pleasant surprise that gradient
descent naturally finds a max-margin solution.
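This claim is easy to check numerically. The sketch below (our own code with synthetic data, not the paper's experiments) runs gradient descent on (3.1) from $w = 0$ in an overparameterized problem and compares the result to the minimum-norm interpolant $X^T(XX^T)^{-1}y$:

```python
import numpy as np

# Gradient descent on (3.1) started at w = 0 stays in the row span of X,
# so it can only converge to the minimum Euclidean norm solution.
rng = np.random.default_rng(0)
n, d = 5, 20                          # d > n: infinitely many interpolants
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

w = np.zeros(d)
for _ in range(20000):
    w -= 0.01 * X.T @ (X @ w - y)     # gradient of (1/2)||Xw - y||^2

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
print(np.allclose(w, w_min_norm))     # True: GD found the min-norm interpolant
```

The step size 0.01 and iteration count are arbitrary illustrative choices; any step below $2/\lambda_{\max}(X^TX)$ gives the same limit.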
3.2 Adaptive methods
Next, we consider adaptive methods where Hk is diagonal. While it is difficult to derive the general
form of the solution, we can analyze special cases. Indeed, we can construct a variety of instances
where adaptive methods converge to solutions with low $\ell_1$ norm rather than low $\ell_2$ norm.
For a vector $x \in \mathbb{R}^q$, let $\mathrm{sign}(x)$ denote the function that maps each component of $x$ to its sign.

Lemma 3.1 Suppose there exists a scalar $c$ such that $X \,\mathrm{sign}(X^T y) = c y$. Then, when initialized at
$w_0 = 0$, AdaGrad, Adam, and RMSProp all converge to the unique solution $w \propto \mathrm{sign}(X^T y)$.

In other words, whenever there exists a solution of $Xw = y$ that is proportional to $\mathrm{sign}(X^T y)$, this
is precisely the solution to which all of the adaptive gradient methods converge.
Proof We prove this lemma by showing that the entire trajectory of the algorithm consists of iterates
whose components have constant magnitude. In particular, we will show that
$$w_k = \lambda_k \,\mathrm{sign}(X^T y)$$
for some scalar $\lambda_k$. The initial point $w_0 = 0$ satisfies the assertion with $\lambda_0 = 0$.
Now, assume the assertion holds for all $k \le t$. Observe that
$$\begin{aligned}
\nabla R_S(w_k + \gamma_k(w_k - w_{k-1})) &= X^T \big( X(w_k + \gamma_k(w_k - w_{k-1})) - y \big) \\
&= X^T \big( (\lambda_k + \gamma_k(\lambda_k - \lambda_{k-1})) X \,\mathrm{sign}(X^T y) - y \big) \\
&= \big\{ (\lambda_k + \gamma_k(\lambda_k - \lambda_{k-1})) c - 1 \big\} X^T y \\
&= \mu_k X^T y,
\end{aligned}$$
where the last equation defines $\mu_k$. Hence, letting $g_k = \nabla R_S(w_k + \gamma_k(w_k - w_{k-1}))$, we also have
$$H_k = \mathrm{diag}\left( \left\{ \sum_{s=1}^{k} \eta_s \, g_s \circ g_s \right\}^{1/2} \right)
= \mathrm{diag}\left( \left\{ \sum_{s=1}^{k} \eta_s \mu_s^2 \right\}^{1/2} |X^T y| \right)
= \nu_k \,\mathrm{diag}\left( |X^T y| \right),$$
where $|u|$ denotes the component-wise absolute value of a vector and the last equation defines $\nu_k$.
In sum,
$$\begin{aligned}
w_{k+1} &= w_k - \alpha_k H_k^{-1} \nabla R_S(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1}(w_k - w_{k-1}) \\
&= \left( \lambda_k - \frac{\alpha_k \mu_k}{\nu_k} + \frac{\beta_k \nu_{k-1}}{\nu_k}(\lambda_k - \lambda_{k-1}) \right) \mathrm{sign}(X^T y),
\end{aligned}$$
proving the claim.¹
This solution is far simpler than the one obtained by gradient methods, and it would be surprising if
such a simple solution would perform particularly well. We now turn to showing that such solutions
can indeed generalize arbitrarily poorly.
3.3 Adaptivity can overfit
Lemma 3.1 allows us to construct a particularly pernicious generative model where AdaGrad fails
to find a solution that generalizes. This example uses infinite dimensions to simplify bookkeeping,
but one could take the dimensionality to be 6n. Note that in deep learning, we often have a number
of parameters equal to 25n or more [20], so this is not a particularly high dimensional example by
contemporary standards. For $i = 1, \ldots, n$, sample the label $y_i$ to be $1$ with probability $p$ and $-1$ with
probability $1 - p$ for some $p > 1/2$. Let $x_i$ be an infinite dimensional vector with entries
$$x_{ij} = \begin{cases} y_i & j = 1 \\ 1 & j = 2, 3 \\ 1 & j = 4 + 5(i-1), \ldots, 4 + 5(i-1) + 2(1 - y_i) \\ 0 & \text{otherwise.} \end{cases}$$
¹ In the event that $X^T y$ has a component equal to 0, we define $0/0 = 0$ so that the update is well-defined.
In other words, the first feature of xi is the class label. The next 2 features are always equal to 1.
After this, there is a set of features unique to xi that are equal to 1. If the class label is 1, then there
is 1 such unique feature. If the class label is -1, then there are 5 such features. Note that the only
discriminative feature useful for classifying data outside the training set is the first one! Indeed,
one can perform perfect classification using only the first feature. The other features are all useless.
Features 2 and 3 are constant, and each of the remaining features only appear for one example in the
data set. However, as we will see, algorithms without such a priori knowledge may not be able to
learn these distinctions.
Take n samples and consider the AdaGrad solution for minimizing $\frac{1}{2}\|Xw - y\|^2$. First we show that
the conditions of Lemma 3.1 hold. Let $b = \sum_{i=1}^{n} y_i$ and assume for the sake of simplicity that $b > 0$.
This will happen with arbitrarily high probability for large enough $n$. Define $u = X^T y$ and observe
that
$$u_j = \begin{cases} n & j = 1 \\ b & j = 2, 3 \\ y_{\lfloor \frac{j+1}{5} \rfloor} & \text{if } j > 3 \text{ and } x_{\lfloor \frac{j+1}{5} \rfloor, j} = 1 \\ 0 & \text{otherwise} \end{cases}
\qquad \text{and} \qquad
\mathrm{sign}(u_j) = \begin{cases} 1 & j = 1 \\ 1 & j = 2, 3 \\ y_{\lfloor \frac{j+1}{5} \rfloor} & \text{if } j > 3 \text{ and } x_{\lfloor \frac{j+1}{5} \rfloor, j} = 1 \\ 0 & \text{otherwise.} \end{cases}$$
Thus we have $\langle \mathrm{sign}(u), x_i \rangle = y_i + 2 + y_i(3 - 2y_i) = 4 y_i$ as desired. Hence, the AdaGrad solution
satisfies $w^{\mathrm{ada}} \propto \mathrm{sign}(u)$. In particular, the nonzero components of $w^{\mathrm{ada}}$ are all equal to $\pm\tau$ for some positive
constant $\tau$. Now since $w^{\mathrm{ada}}$ has the same sign pattern as $u$, the first three components of $w^{\mathrm{ada}}$ are equal to
each other. But for a new data point, $x^{\mathrm{test}}$, the only features that are nonzero in both $x^{\mathrm{test}}$ and $w^{\mathrm{ada}}$
are the first three. In particular, we have
$$\langle w^{\mathrm{ada}}, x^{\mathrm{test}} \rangle = \tau \,( y^{\mathrm{test}} + 2 ) > 0 \,.$$
Therefore, the AdaGrad solution will label all unseen data as a positive example!
Now, we turn to the minimum 2-norm solution. Let $\mathcal{P}$ and $\mathcal{N}$ denote the set of positive and negative
examples respectively. Let $n_+ = |\mathcal{P}|$ and $n_- = |\mathcal{N}|$. Assuming $\alpha_i = \alpha_+$ when $y_i = 1$ and $\alpha_i = \alpha_-$
when $y_i = -1$, we have that the minimum norm solution will have the form $w^{\mathrm{SGD}} = X^T \alpha = \sum_{i \in \mathcal{P}} \alpha_+ x_i + \sum_{j \in \mathcal{N}} \alpha_- x_j$. These scalars can be found by solving $XX^T \alpha = y$. In closed form we
have
$$\alpha_+ = \frac{4 n_- + 3}{9 n_+ + 3 n_- + 8 n_+ n_- + 3} \qquad \text{and} \qquad \alpha_- = -\,\frac{4 n_+ + 1}{9 n_+ + 3 n_- + 8 n_+ n_- + 3} \,. \qquad (3.2)$$
The algebra required to compute these coefficients can be found in the Appendix. For a new data
point, $x^{\mathrm{test}}$, again the only features that are nonzero in both $x^{\mathrm{test}}$ and $w^{\mathrm{SGD}}$ are the first three. Thus
we have
$$\langle w^{\mathrm{SGD}}, x^{\mathrm{test}} \rangle = y^{\mathrm{test}} (n_+ \alpha_+ - n_- \alpha_-) + 2 (n_+ \alpha_+ + n_- \alpha_-) \,.$$
Using (3.2), we see that whenever $n_+ > n_-/3$, the SGD solution makes no errors.
A formal construction of this example using a data-generating distribution can be found in Appendix C.
Though this generative model was chosen to illustrate extreme behavior, it shares salient features
with many common machine learning instances. There are a few frequent features, where some
predictor based on them is a good predictor, though these might not be easy to identify from first
inspection. Additionally, there are many other features which are sparse. On finite training data
it looks like such features are good for prediction, since each such feature is discriminatory for a
particular training example, but this is over-fitting and an artifact of having fewer training examples
than features. Moreover, we will see shortly that adaptive methods typically generalize worse than
their non-adaptive counterparts on real datasets.
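The construction above can be replayed numerically. The sketch below (our own code; a finite-dimensional variant with $d = 3 + 5n$) builds the dataset, uses the sign solution of Lemma 3.1 as a stand-in for the AdaGrad limit, and compares it with the minimum-norm solution on fresh test points:

```python
import numpy as np

# Finite-dimensional version of the Section 3.3 construction: feature 1
# is the label, features 2-3 are constant, and example i owns 1 (if
# y_i = +1) or 5 (if y_i = -1) private indicator features.
def make_dataset(y):
    n = len(y)
    X = np.zeros((n, 3 + 5 * n))
    for i, yi in enumerate(y):
        X[i, 0], X[i, 1], X[i, 2] = yi, 1.0, 1.0
        start = 3 + 5 * i
        X[i, start:start + (1 if yi > 0 else 5)] = 1.0
    return X

y = np.array([1.0, 1.0, 1.0, -1.0])        # n+ = 3, n- = 1, b = 2 > 0
X = make_dataset(y)

w_ada = np.sign(X.T @ y)                   # AdaGrad limit per Lemma 3.1
w_sgd = X.T @ np.linalg.solve(X @ X.T, y)  # minimum-norm solution

def predict(w, y_test):
    # A fresh test point only shares the first three features.
    x = np.zeros(X.shape[1])
    x[0], x[1], x[2] = y_test, 1.0, 1.0
    return np.sign(w @ x)

print([predict(w_ada, s) for s in (1.0, -1.0)])  # [1.0, 1.0]: everything positive
print([predict(w_sgd, s) for s in (1.0, -1.0)])  # [1.0, -1.0]: correct labels
```

One can also confirm that this instance satisfies the hypothesis of Lemma 3.1 with $c = 4$, i.e. `X @ np.sign(X.T @ y)` equals `4 * y`.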
4 Deep Learning Experiments
Having established that adaptive and non-adaptive methods can find different solutions in the convex
setting, we now turn to an empirical study of deep neural networks to see whether we observe a
similar discrepancy in generalization. We compare two non-adaptive methods ? SGD and the heavy
ball method (HB) ? to three popular adaptive methods ? AdaGrad, RMSProp and Adam. We study
performance on four deep learning problems: (C1) the CIFAR-10 image classification task, (L1)
Name  Network type                Architecture  Dataset        Framework
C1    Deep Convolutional          cifar.torch   CIFAR-10       Torch
L1    2-Layer LSTM                torch-rnn     War & Peace    Torch
L2    2-Layer LSTM + Feedforward  span-parser   Penn Treebank  DyNet
L3    3-Layer LSTM                emnlp2016     Penn Treebank  Tensorflow

Table 2: Summaries of the models we use for our experiments.²
character-level language modeling on the novel War and Peace, and (L2) discriminative parsing
and (L3) generative parsing on Penn Treebank. In the interest of reproducibility, we use a network
architecture for each problem that is either easily found online (C1, L1, L2, and L3) or produces
state-of-the-art results (L2 and L3). Table 2 summarizes the setup for each application. We take care
to make minimal changes to the architectures and their data pre-processing pipelines in order to best
isolate the effect of each optimization algorithm.
We conduct each experiment 5 times from randomly initialized starting points, using the initialization
scheme specified in each code repository. We allocate a pre-specified budget on the number of epochs
used for training each model. When a development set was available, we chose the settings that
achieved the best peak performance on the development set by the end of the fixed epoch budget.
CIFAR-10 did not have an explicit development set, so we chose the settings that achieved the lowest
training loss at the end of the fixed epoch budget.
Our experiments show the following primary findings: (i) Adaptive methods find solutions that generalize worse than those found by non-adaptive methods. (ii) Even when the adaptive methods achieve
the same training loss or lower than non-adaptive methods, the development or test performance
is worse. (iii) Adaptive methods often display faster initial progress on the training set, but their
performance quickly plateaus on the development set. (iv) Though conventional wisdom suggests
that Adam does not require tuning, we find that tuning the initial learning rate and decay scheme for
Adam yields significant improvements over its default settings in all cases.
4.1 Hyperparameter Tuning
Optimization hyperparameters have a large influence on the quality of solutions found by optimization
algorithms for deep neural networks. The algorithms under consideration have many hyperparameters:
the initial step size ?0 , the step decay scheme, the momentum value 0 , the momentum schedule
k , the smoothing term ?, the initialization scheme for the gradient accumulator, and the parameter
controlling how to combine gradient outer products, to name a few. A grid search on a large space
of hyperparameters is infeasible even with substantial industrial resources, and we found that the
parameters that impacted performance the most were the initial step size and the step decay scheme.
We left the remaining parameters with their default settings. We describe the differences between the
default settings of Torch, DyNet, and Tensorflow in Appendix B for completeness.
To tune the step sizes, we evaluated a logarithmically-spaced grid of five step sizes. If the best
performance was ever at one of the extremes of the grid, we would try new grid points so that the
best performance was contained in the middle of the parameters. For example, if we initially tried
step sizes 2, 1, 0.5, 0.25, and 0.125 and found that 2 was the best performing, we would have tried
the step size 4 to see if performance was improved. If performance improved, we would have tried 8
and so on. We list the initial step sizes we tried in Appendix D.
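The grid-expansion rule described above can be sketched in a few lines. This is a toy helper of our own (not the authors' tooling), and it assumes the score is unimodal in the step size:

```python
# Expanding grid search: start from a log-spaced grid and extend it
# whenever the best value sits on an edge, so the final winner is interior.
def best_step_size(evaluate, grid):
    """evaluate(step) -> score (higher is better); grid is a list of steps."""
    grid = sorted(grid)
    while True:
        scores = [evaluate(s) for s in grid]       # re-evaluated for simplicity
        best = grid[scores.index(max(scores))]
        if best == grid[0]:
            grid.insert(0, grid[0] / 2)            # extend the low end
        elif best == grid[-1]:
            grid.append(grid[-1] * 2)              # extend the high end
        else:
            return best                            # best is interior: done

# Toy objective peaked at step size 4: the search extends past the
# initial grid maximum of 2 and stops once 4 is no longer an endpoint.
print(best_step_size(lambda s: -(s - 4) ** 2, [0.125, 0.25, 0.5, 1, 2]))
```

In practice each `evaluate` call is a full training run, so one would cache scores rather than recompute them as this sketch does.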
For step size decay, we explored two separate schemes, a development-based decay scheme (dev-decay) and a fixed frequency decay scheme (fixed-decay). For dev-decay, we keep track of the best
validation performance so far, and at each epoch decay the learning rate by a constant factor if the
model does not attain a new best value. For fixed-decay, we decay the learning rate by a constant
factor every k epochs. We recommend the dev-decay scheme when a development set is available;
² Architectures can be found at the following links: (1) cifar.torch: https://github.com/szagoruyko/cifar.torch;
(2) torch-rnn: https://github.com/jcjohnson/torch-rnn; (3) span-parser: https://github.com/jhcross/span-parser;
(4) emnlp2016: https://github.com/cdg720/emnlp2016.
(a) CIFAR-10 (Train)
(b) CIFAR-10 (Test)
Figure 1: Training (left) and top-1 test error (right) on CIFAR-10. The annotations indicate where the
best performance is attained for each method. The shading represents ? one standard deviation computed
across five runs from random initial starting points. In all cases, adaptive methods are performing worse on
both train and test than non-adaptive methods.
not only does it have fewer hyperparameters than the fixed frequency scheme, but our experiments
also show that it produces results comparable to, or better than, the fixed-decay scheme.
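The dev-decay rule amounts to a few lines of code. The sketch below is our own illustration with an assumed decay constant of 0.9: it tracks the best development score seen so far and multiplies the learning rate by the constant whenever an epoch fails to improve on it:

```python
# dev-decay: decay the learning rate only on epochs that do not set a
# new best development score. delta is an assumed illustrative constant.
def dev_decay(lr, best_score, epoch_score, delta=0.9):
    if epoch_score > best_score:
        return lr, epoch_score           # new best: keep the current rate
    return lr * delta, best_score        # no improvement: decay the rate

lr, best = 1.0, float("-inf")
for score in [0.60, 0.65, 0.64, 0.66, 0.66]:
    lr, best = dev_decay(lr, best, score)
print(round(lr, 4), best)                # 0.81 0.66: two non-improving epochs
```

Compared with fixed-decay, the only hyperparameters are the initial rate and the decay constant; the schedule itself is driven by the development set.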
4.2 Convolutional Neural Network
We used the VGG+BN+Dropout network for CIFAR-10 from the Torch blog [23], which in prior
work achieves a baseline test error of 7.55%. Figure 1 shows the learning curve for each algorithm
on both the training and test dataset.
We observe that the solutions found by SGD and HB do indeed generalize better than those found
by adaptive methods. The best overall test error found by a non-adaptive algorithm, SGD, was
7.65 ? 0.14%, whereas the best adaptive method, RMSProp, achieved a test error of 9.60 ? 0.19%.
Early on in training, the adaptive methods appear to be performing better than the non-adaptive
methods, but starting at epoch 50, even though the training error of the adaptive methods is still lower,
SGD and HB begin to outperform adaptive methods on the test error. By epoch 100, the performance
of SGD and HB surpass all adaptive methods on both train and test. Among all adaptive methods,
AdaGrad's rate of improvement flatlines the earliest. We also found that by increasing the step size,
we could drive the performance of the adaptive methods down in the first 50 or so epochs, but the
aggressive step size made the flatlining behavior worse, and no step decay scheme could fix the
behavior.
4.3 Character-Level Language Modeling
Using the torch-rnn library, we train a character-level language model on the text of the novel War
and Peace, running for a fixed budget of 200 epochs. Our results are shown in Figures 2(a) and 2(b).
Under the fixed-decay scheme, the best configuration for all algorithms except AdaGrad was to decay
relatively late with regards to the total number of epochs, either 60 or 80% through the total number
of epochs and by a large amount, dividing the step size by 10. The dev-decay scheme paralleled
(within the same standard deviation) the results of the exhaustive search over the decay frequency
and amount; we report the curves from the fixed policy.
Overall, SGD achieved the lowest test loss at 1.212 ? 0.001. AdaGrad has fast initial progress, but
flatlines. The adaptive methods appear more sensitive to the initialization scheme than non-adaptive
methods, displaying a higher variance on both train and test. Surprisingly, RMSProp closely trails
SGD on test loss, confirming that it is not impossible for adaptive methods to find solutions that
generalize well. We note that there are step configurations for RMSProp that drive the training loss
below that of SGD, but these configurations cause erratic behavior on test, driving the test error of
RMSProp above Adam.
4.4 Constituency Parsing
A constituency parser is used to predict the hierarchical structure of a sentence, breaking it down into
nested clause-level, phrase-level, and word-level units. We carry out experiments using two state-of-the-art parsers: the stand-alone discriminative parser of Cross and Huang [2], and the generative
reranking parser of Choe and Charniak [1]. In both cases, we use the dev-decay scheme with a decay constant of 0.9
for learning rate decay.
Discriminative Model. Cross and Huang [2] develop a transition-based framework that reduces
constituency parsing to a sequence prediction problem, giving a one-to-one correspondence between
parse trees and sequences of structural and labeling actions. Using their code with the default settings,
we trained for 50 epochs on the Penn Treebank [11], comparing labeled F1 scores on the training and
development data over time. RMSProp was not implemented in the used version of DyNet, and we
omit it from our experiments. Results are shown in Figures 2(c) and 2(d).
We find that SGD obtained the best overall performance on the development set, followed closely
by HB and Adam, with AdaGrad trailing far behind. The default configuration of Adam without
learning rate decay actually achieved the best overall training performance by the end of the run, but
was notably worse than tuned Adam on the development set.
Interestingly, Adam achieved its best development F1 of 91.11 quite early, after just 6 epochs,
whereas SGD took 18 epochs to reach this value and didn't reach its best F1 of 91.24 until epoch 31.
On the other hand, Adam continued to improve on the training set well after its best development
performance was obtained, while the peaks for SGD were more closely aligned.
Generative Model. Choe and Charniak [1] show that constituency parsing can be cast as a language
modeling problem, with trees being represented by their depth-first traversals. This formulation
requires a separate base system to produce candidate parse trees, which are then rescored by the
generative model. Using an adapted version of their code base,3 we retrained their model for 100
epochs on the Penn Treebank. However, to reduce computational costs, we made two minor changes:
(a) we used a smaller LSTM hidden dimension of 500 instead of 1500, finding that performance
decreased only slightly; and (b) we accordingly lowered the dropout ratio from 0.7 to 0.5. Since they
demonstrated a high correlation between perplexity (the exponential of the average loss) and labeled
F1 on the development set, we explored the relation between training and development perplexity to
avoid any conflation with the performance of a base parser.
Our results are shown in Figures 2(e) and 2(f). On development set performance, SGD and HB
obtained the best perplexities, with SGD slightly ahead. Despite having one of the best performance
curves on the training dataset, Adam achieves the worst development perplexities.
5 Conclusion
Despite the fact that our experimental evidence demonstrates that adaptive methods are not advantageous for machine learning, the Adam algorithm remains incredibly popular. We are not sure
exactly as to why, but hope that our step-size tuning suggestions make it easier for practitioners to use
standard stochastic gradient methods in their research. In our conversations with other researchers,
we have surmised that adaptive gradient methods are particularly popular for training GANs [18, 5]
and Q-learning with function approximation [13, 9]. Both of these applications stand out because
they are not solving optimization problems. It is possible that the dynamics of Adam are accidentally
well matched to these sorts of optimization-free iterative search procedures. It is also possible that
carefully tuned stochastic gradient methods may work as well or better in both of these applications.
³ While the code of Choe and Charniak treats the entire corpus as a single long example, relying on the
network to reset itself upon encountering an end-of-sentence token, we use the more conventional approach of
resetting the network for each example. This reduces training efficiency slightly when batches contain examples
of different lengths, but removes a potential confounding factor from our experiments.
It is an exciting direction of future work to determine which of these possibilities is true and to
understand better as to why.
Acknowledgements
The authors would like to thank Pieter Abbeel, Moritz Hardt, Tomer Koren, Sergey Levine, Henry
Milner, Yoram Singer, and Shivaram Venkataraman for many helpful comments and suggestions.
RR is generously supported by DOE award AC02-05CH11231. MS and AW are supported by
NSF Graduate Research Fellowships. NS is partially supported by NSF-IIS-13-02662 and NSF-IIS15-46500, an Inter ICRI-RI award and a Google Faculty Award. BR is generously supported by
NSF award CCF-1359814, ONR awards N00014-14-1-0024 and N00014-17-1-2191, the DARPA
Fundamental Limits of Learning (Fun LoL) Program, a Sloan Research Fellowship, and a Google
Faculty Award.
(a) War and Peace (Training Set)
(b) War and Peace (Test Set)
(c) Discriminative Parsing (Training Set)
(d) Discriminative Parsing (Development Set)
(e) Generative Parsing (Training Set)
(f) Generative Parsing (Development Set)
Figure 2: Performance curves on the training data (left) and the development/test data (right) for three
experiments on natural language tasks. The annotations indicate where the best performance is attained for
each method. The shading represents one standard deviation computed across five runs from random initial
starting points.
References
[1] Do Kook Choe and Eugene Charniak. Parsing as language modeling. In Jian Su, Xavier
Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in
Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages
2331–2336. The Association for Computational Linguistics, 2016.
[2] James Cross and Liang Huang. Span-based constituency parsing with a structure-label system
and provably optimal dynamic oracles. In Jian Su, Xavier Carreras, and Kevin Duh, editors,
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,
Austin, Texas, pages 1?11. The Association for Computational Linguistics, 2016.
[3] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online
learning and stochastic optimization. Journal of Machine Learning Research, 12:2121?2159,
2011.
[4] Sepp Hochreiter and J?rgen Schmidhuber. Flat minima. Neural Computation, 9(1):1?42, 1997.
[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with
conditional adversarial networks. arXiv:1611.07004, 2016.
[6] Andrej Karparthy. A peek at trends in machine learning. https://medium.com/@karpathy/
a-peek-at-trends-in-machine-learning-ab8a1085a106. Accessed: 2017-05-17.
[7] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping
Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima.
In The International Conference on Learning Representations (ICLR), 2017.
[8] D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. The International
Conference on Learning Representations (ICLR), 2015.
[9] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa,
David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In
International Conference on Learning Representations (ICLR), 2016.
[10] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on
large-scale shallow learning. arXiv:1703.10622, 2017.
[11] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated
corpus of english: The penn treebank. COMPUTATIONAL LINGUISTICS, 19(2):313?330,
1993.
[12] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex
optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[13] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep
reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
[14] Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized
optimization in deep neural networks. In Neural Information Processing Systems (NIPS), 2015.
[15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias:
On the role of implicit regularization in deep learning. In International Conference on Learning
Representations (ICLR), 2015.
[16] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic
gradient Langevin dynamics: a nonasymptotic analysis. arXiv:1702.03849, 2017.
[17] Benjamin Recht, Moritz Hardt, and Yoram Singer. Train faster, generalize better: Stability
of stochastic gradient descent. In Proceedings of the International Conference on Machine
Learning (ICML), 2016.
10
[18] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak
Lee. Generative adversarial text to image synthesis. In Proceedings of The International
Conference on Machine Learning (ICML), 2016.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of
initialization and momentum in deep learning. In Proceedings of the International Conference
on Machine Learning (ICML), 2013.
[20] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2016.
[21] T. Tieleman and G. Hinton. Lecture 6.5?RmsProp: Divide the gradient by a running average
of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[22] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent
learning. Constructive Approximation, 26(2):289?315, 2007.
[23] Sergey Zagoruyko. Torch blog. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
11
Aggressive Sampling for Multi-class to Binary
Reduction with Applications to Text Classification
Bikash Joshi
Univ. Grenoble Alps, LIG
Grenoble, France
[email protected]
Massih-Reza Amini
Univ. Grenoble Alps, LIG
Grenoble, France
[email protected]
Franck Iutzeler
Univ. Grenoble Alps, LJK
Grenoble, France
[email protected]
Ioannis Partalas
Expedia EWE
Geneva, Switzerland
[email protected]
Yury Maximov
Los Alamos National Laboratory and Skolkovo IST,
USA and Moscow, Russia
[email protected]
Abstract
We address the problem of multi-class classification in the case where the number of
classes is very large. We propose a double sampling strategy on top of a multi-class
to binary reduction strategy, which transforms the original multi-class problem into
a binary classification problem over pairs of examples. The aim of the sampling
strategy is to overcome the curse of long-tailed class distributions exhibited in
majority of large-scale multi-class classification problems and to reduce the number
of pairs of examples in the expanded data. We show that this strategy does not
alter the consistency of the empirical risk minimization principle defined over the
double sample reduction. Experiments are carried out on DMOZ and Wikipedia
collections with 10,000 to 100,000 classes where we show the efficiency of the
proposed approach in terms of training and prediction time, memory consumption,
and predictive performance with respect to state-of-the-art approaches.
1 Introduction
Large-scale multi-class or extreme classification involves problems with extremely large number of
classes as it appears in text repositories such as Wikipedia, Yahoo! Directory (www.dir.yahoo.com),
or Directory Mozilla DMOZ (www.dmoz.org); and it has recently evolved as a popular branch of
machine learning with many applications in tagging, recommendation and ranking. The most common
and popular baseline in this case is the one-versus-all approach (OVA) [18] where one independent
binary classifier is learned per class. Despite its simplicity, this approach suffers from two main
limitations; first, it becomes computationally intractable when the number of classes grow large,
affecting at the same time the prediction. Second, it suffers from the class imbalance problem by
construction.Recently, two main approaches have been studied to cope with these limitations. The
first one, broadly divided in tree-based and embedding-based methods, have been proposed with
the aim of reducing the effective space of labels in order to control the complexity of the learning
problem. Tree-based methods [4, 3, 6, 7, 9, 21, 5, 15] rely on binary tree structures where each
leaf corresponds to a class and inference is performed by traversing the tree from top to bottom; a
binary classifier being used at each node to determine the child node to develop. These methods have
logarithmic time complexity with the drawback that it is a challenging task to find a balanced tree
structure which can partition the class labels. Further, even though different heuristics have been
developed to address the unbalanced problem, these methods suffer from the drawback that they have
to make several decisions prior to reaching a final category, which leads to error propagation and
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
thus a decrease in accuracy. On the other hand, label embedding approaches [11, 5, 19] first project
the label-matrix into a low-dimensional linear subspace and then use an OVA classifier. However,
the low-rank assumption on the label matrix is generally violated in the extreme multi-class classification setting, and these methods generally lead to high prediction error. The second type of approach reduces the original multi-class problem to a binary one by first expanding the
original training set using a projection of pairs of observations and classes into a low dimensional
dyadic space, and then learning a single classifier to separate between pairs constituted with examples
and their true classes and pairs constituted with examples with other classes [1, 28, 16]. Although
prediction in the new representation space is relatively fast, the construction of the dyadic training
observations is generally time consuming and prevails over the training and prediction times.
Contributions. In this paper, we propose a scalable multi-class classification method based on
an aggressive double sampling of the dyadic output prediction problem. Instead of computing all
possible dyadic examples, our proposed approach consists first in drawing a new training set of much
smaller size from the original one by oversampling the smallest classes and sub-sampling the few largest classes, in order to avoid the curse of long-tailed class distributions common in the
majority of large-scale multi-class classification problems [2]. The second goal is to reduce the
number of constructed dyadic examples. Our reduction strategy brings inter-dependency between the
pairs containing the same observation and its true class in the original training set. Thus, we derive
new generalization bounds using local fractional Rademacher complexity showing that even with a
shift in the original class distribution and the inter-dependency between the pairs of examples, the
empirical risk minimization principle over the transformation of the sampled training set remains
consistent. We validate our approach by conducting a series of experiments on subsets of DMOZ and
the Wikipedia collections with up to 100,000 target categories.
2 A doubly-sampled multi-class to binary reduction strategy
We address the problem of monolabel multi-class classification defined on the joint space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} \subseteq \mathbb{R}^d$ is the input space and $\mathcal{Y} = \{1, \ldots, K\} = [K]$ is the output space, made of $K$ classes. Elements of $\mathcal{X} \times \mathcal{Y}$ are denoted as $x^y = (x, y)$. Furthermore, we assume the training set $S = (x_i^{y_i})_{i=1}^m$ is made of $m$ i.i.d. example/class pairs distributed according to a fixed but unknown probability distribution $\mathcal{D}$, and we consider a class of predictor functions $\mathcal{G} = \{g : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}\}$.
We define the instantaneous loss of a predictor $g \in \mathcal{G}$ on an example $x^y$ as:
$$e(g, x^y) = \frac{1}{K-1} \sum_{y' \in \mathcal{Y} \setminus \{y\}} \mathbb{1}_{g(x^y) \le g(x^{y'})}, \qquad (1)$$
where $\mathbb{1}_\pi$ is the indicator function, equal to 1 if the predicate $\pi$ is true and 0 otherwise. Compared to the classical multi-class error $e'(g, x^y) = \mathbb{1}_{y \neq \operatorname{argmax}_{y' \in \mathcal{Y}} g(x^{y'})}$, the loss of (1) estimates the average number of classes that, for a given input, are scored higher by $g$ than the correct class. The loss (1) is hence a ranking criterion, and the multi-class SVM of [29] and AdaBoost.MR [24] optimize convex surrogate functions of this loss. It is also used in label ranking [12]. Our objective is to find a function $g \in \mathcal{G}$ with a small expected risk $R(g) = \mathbb{E}_{x^y \sim \mathcal{D}}[e(g, x^y)]$, by minimizing the empirical error defined as the average number of training examples $x_i^{y_i} \in S$ which, in mean, are scored lower than $x_i^{y'}$ for $y' \in \mathcal{Y} \setminus \{y_i\}$:
$$\widehat{R}_m(g, S) = \frac{1}{m} \sum_{i=1}^m e(g, x_i^{y_i}) = \frac{1}{m(K-1)} \sum_{i=1}^m \sum_{y' \in \mathcal{Y} \setminus \{y_i\}} \mathbb{1}_{g(x_i^{y_i}) - g(x_i^{y'}) \le 0}. \qquad (2)$$
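As an illustration, the per-example quantity averaged in Eq. (2) can be sketched as follows (a minimal sketch, not the authors' code; the score vector stands in for $g(x^1), \ldots, g(x^K)$):

```python
import numpy as np

def ranking_loss(scores, y):
    """Per-example ranking loss of Eq. (2): the fraction of the K - 1
    wrong classes whose score is at least that of the true class y."""
    K = len(scores)
    wrong = np.delete(np.asarray(scores, dtype=float), y)
    return float(np.sum(wrong >= scores[y])) / (K - 1)

# A perfect ranking gives loss 0, a fully inverted one gives loss 1.
print(ranking_loss([0.1, 0.9, 0.3], 1))  # 0.0
print(ranking_loss([0.9, 0.1, 0.3], 1))  # 1.0
```

Averaging this quantity over the training set gives exactly $\widehat{R}_m(g, S)$.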
2.1 Binary reduction based on dyadic representations of examples and classes
In this work, we consider prediction functions of the form $g = f \circ \phi$, where $\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^p$ is a projection of the input and the output space into a joint feature space of dimension $p$, and $f \in \mathcal{F} = \{f : \mathbb{R}^p \to \mathbb{R}\}$ is a function that measures the adequacy between an observation $x$ and a class $y$ using their corresponding representation $\phi(x^y)$. The projection function $\phi$ is application-dependent and can either be learned [28] or defined using some heuristics [27, 16].
Further, consider the following dyadic transformation:
$$T(S) = \left\{ \begin{array}{ll} \big(z_j = (\phi(x_i^k), \phi(x_i^{y_i})),\ \tilde{y}_j = -1\big) & \text{if } k < y_i \\ \big(z_j = (\phi(x_i^{y_i}), \phi(x_i^k)),\ \tilde{y}_j = +1\big) & \text{elsewhere} \end{array} \right\}_{j=(i-1)(K-1)+k}, \qquad (3)$$
where $j = (i-1)(K-1) + k$ with $i \in [m]$ and $k \in [K-1]$; it expands a $K$-class labeled set $S$ of size $m$ into a binary labeled set $T(S)$ of size $N = m(K-1)$ (e.g. Figure 1 over a toy problem).
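The expansion of Eq. (3) can be sketched directly; the snippet below is an illustration rather than the paper's implementation, with `phi` standing for any joint representation function:

```python
def dyadic_expand(phi, S, K):
    """Sketch of the transformation T of Eq. (3).

    phi : callable (x, k) -> representation phi(x^k) (any object here)
    S   : list of (x, y) pairs with labels y in 1..K
    Returns T(S) as ((z_left, z_right), ytilde) pairs: label -1 when
    the adversarial class k < y, +1 otherwise.
    """
    T = []
    for x, y in S:
        for k in range(1, K + 1):
            if k == y:
                continue
            if k < y:
                T.append(((phi(x, k), phi(x, y)), -1))
            else:
                T.append(((phi(x, y), phi(x, k)), +1))
    return T

# Each of the m examples yields K - 1 dyadic pairs, so |T(S)| = m(K - 1).
```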
With the class of functions
$$\mathcal{H} = \{h : \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R};\ (\phi(x^y), \phi(x^{y'})) \mapsto f(\phi(x^y)) - f(\phi(x^{y'})),\ f \in \mathcal{F}\}, \qquad (4)$$
the empirical loss (Eq. (2)) can be rewritten as:
$$\widehat{R}_{T(S)}(h) = \frac{1}{N} \sum_{j=1}^N \mathbb{1}_{\tilde{y}_j h(z_j) \le 0}. \qquad (5)$$
Hence, the minimization of Eq. (5) over the transformation $T(S)$ of a training set $S$ defines a binary classification problem over the pairs of dyadic examples. However, this binary problem takes as examples dependent random variables: for each original example $x^y \in S$, the $K-1$ pairs in $\{(\phi(x^y), \phi(x^{y'})); \tilde{y}\} \subseteq T(S)$ all depend on $x^y$. In [16] this problem is studied by bounding the generalization error associated with (5) using the fractional Rademacher complexity proposed in [25]. In this work, we derive new generalization bounds based on the local Rademacher complexities introduced in [22], which incorporate second-order (i.e. variance) information and induce faster convergence rates (Theorem 1).

Figure 1: A toy example depicting the transformation T (Eq. (3)) applied to a training set S of size m = 4 and K = 4.
Our analysis relies on the notion of graph covering introduced in [14] and defined as:

Definition 1 (Exact proper fractional cover of $G$, [14]). Let $G = (V, E)$ be a graph. $\mathcal{C} = \{(\mathcal{C}_k, \omega_k)\}_{k \in [J]}$, for some positive integer $J$, with $\mathcal{C}_k \subseteq V$ and $\omega_k \in [0, 1]$, is an exact proper fractional cover of $G$ if: i) it is proper: $\forall k$, $\mathcal{C}_k$ is an independent set, i.e., there are no connections between vertices in $\mathcal{C}_k$; ii) it is an exact fractional cover of $G$: $\forall v \in V$, $\sum_{k : v \in \mathcal{C}_k} \omega_k = 1$.

The weight $W(\mathcal{C})$ of $\mathcal{C}$ is given by $W(\mathcal{C}) = \sum_{k \in [J]} \omega_k$, and the minimum weight $\chi^*(G) = \min_{\mathcal{C} \in \mathcal{K}(G)} W(\mathcal{C})$ over the set $\mathcal{K}(G)$ of all exact proper fractional covers of $G$ is the fractional chromatic number of $G$.

From this statement, [14] extended Hoeffding's inequality and proposed large deviation bounds for sums of dependent random variables, which was the precursor of new generalization bounds, including a Talagrand-type inequality for empirical processes in the dependent case presented in [22].
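To make Definition 1 concrete on the dependency graph of Figure 2 (m = 4, K = 4): grouping the dyadic pairs by their within-example index k gives K - 1 independent sets with unit weights, an exact proper fractional cover of weight K - 1. The check below is an illustrative sketch, not code from the paper:

```python
import itertools

def is_exact_proper_fractional_cover(cover, edges, vertices):
    """Check Definition 1 for a cover given as [(set_of_vertices, weight), ...]."""
    # i) proper: no edge may lie inside any C_k
    for C_k, _ in cover:
        for u, v in edges:
            if u in C_k and v in C_k:
                return False
    # ii) exact: the weights of the sets containing each vertex sum to 1
    for v in vertices:
        if abs(sum(w for C_k, w in cover if v in C_k) - 1.0) > 1e-9:
            return False
    return True

# Toy dependency graph of Figure 2: vertices 1..12, and the K - 1 pairs
# built from the same original example are pairwise connected.
m, K = 4, 4
vertices = range(1, m * (K - 1) + 1)
groups = [set(range(i * (K - 1) + 1, (i + 1) * (K - 1) + 1)) for i in range(m)]
edges = [e for g in groups for e in itertools.combinations(sorted(g), 2)]
# Cover C_k = pairs sharing the within-example index k, unit weights.
cover = [({i * (K - 1) + k for i in range(m)}, 1.0) for k in range(1, K)]
print(is_exact_proper_fractional_cover(cover, edges, vertices))  # True
```

Its weight is K - 1 = 3, matching the fractional chromatic number stated in the Figure 2 caption.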
With the classes of functions $\mathcal{G}$ and $\mathcal{H}$ introduced previously, consider the parameterized family $\mathcal{H}_r$ which, for $r > 0$, is defined as:
$$\mathcal{H}_r = \{h : h \in \mathcal{H},\ \mathbb{V}[h] = \mathbb{V}_{z, \tilde{y}}[\mathbb{1}_{\tilde{y} h(z) \le 0}] \le r\},$$
where $\mathbb{V}$ denotes the variance.
The fractional Rademacher complexity introduced in [25] underlies our analysis:
$$\mathfrak{R}_{T(S)}(\mathcal{H}) \doteq \frac{2}{N}\, \mathbb{E}_\sigma \sum_{k \in [K-1]} \omega_k\, \mathbb{E}_{\mathcal{C}_k} \sup_{h \in \mathcal{H}} \sum_{\alpha \in \mathcal{C}_k,\ z_\alpha \in T(S)} \sigma_\alpha h(z_\alpha),$$
with $(\sigma_i)_{i=1}^N$ a sequence of independent Rademacher variables verifying $\mathbb{P}(\sigma_n = 1) = \mathbb{P}(\sigma_n = -1) = \frac{1}{2}$. Unless specified otherwise, we assume below that all $\omega_k = 1$. Our first result bounds the generalization error $R(h) = \mathbb{E}_{T(S)}[\widehat{R}_{T(S)}(h)]$ of a function $h \in \mathcal{H}$ with respect to its empirical error $\widehat{R}_{T(S)}(h)$ over a transformed training set $T(S)$ and the fractional Rademacher complexity $\mathfrak{R}_{T(S)}(\mathcal{H})$; it is stated below.

Figure 2: The dependency graph $G = \{1, \ldots, 12\}$ corresponding to the toy problem of Figure 1, where dependent nodes are connected with vertices in blue double-line. The exact proper fractional cover $\mathcal{C}_1$, $\mathcal{C}_2$ and $\mathcal{C}_3$ is shown in dashed. The fractional chromatic number is in this case $\chi^*(G) = K - 1 = 3$.
Theorem 1. Let $S = (x_i^{y_i})_{i=1}^m \in (\mathcal{X} \times \mathcal{Y})^m$ be a dataset of $m$ examples drawn i.i.d. according to a probability distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, and let $T(S) = ((z_i, \tilde{y}_i))_{i=1}^N$ be the transformed set obtained as in Eq. (3). Then for any $1 > \delta > 0$ and the 0/1 loss $\ell : \{-1, +1\} \times \mathbb{R} \to [0, 1]$, with probability at least $(1 - \delta)$ the following generalization bound holds for all $h \in \mathcal{H}_r$:
$$R(h) \le \widehat{R}_{T(S)}(h) + \mathfrak{R}_{T(S)}(\ell \circ \mathcal{H}_r) + \frac{5}{2}\sqrt{r\, \mathfrak{R}_{T(S)}(\ell \circ \mathcal{H}_r)} + \sqrt{\frac{r \log \frac{1}{\delta}}{2m}} + \frac{25 \log \frac{1}{\delta}}{48\, m}.$$
The proof is provided in the supplementary material, and it relies on the idea of splitting up the
sum (5) into several parts, each part being a sum of independent variables.
2.2 Aggressive Double Sampling
Even though the previous multi-class to binary transformation $T$, together with a proper projection function $\phi$, redefines the learning problem in a dyadic feature space of dimension $p \ll d$, the increased number of examples can lead to a large computational overhead. In order to cope with this problem, we propose a $(\pi, \kappa)$-double subsampling of $T(S)$, which first aims to balance the presence of classes by constructing a new training set $S_\pi$ from $S$ with probabilities $\pi = (\pi_k)_{k=1}^K$.
The idea here is to overcome the curse of long-tailed class distributions exhibited in the majority of large-scale multi-class classification problems [2] by oversampling the smallest classes and sub-sampling the few largest classes in $S$. The hyperparameters $\pi$ are formally defined as $\forall k,\ \pi_k = P(x^y \in S_\pi \mid x^y \in S)$. In practice, we set them inversely proportional to the size of each class in the original training set: $\forall k,\ \pi_k \propto 1/\eta_k$, where $\eta_k$ is the proportion of class $k$ in $S$. The second aim is to reduce the number of dyadic examples, which is controlled by the hyperparameter $\kappa$. The pseudo-code of this aggressive double sampling procedure, referred to as $(\pi, \kappa)$-DS, is depicted below and is composed of two main steps.

Algorithm: $(\pi, \kappa)$-DS
Input: labeled training set $S = (x_i^{y_i})_{i=1}^m$
initialization: $S_\pi \leftarrow \emptyset$; $T_\kappa(S_\pi) \leftarrow \emptyset$
for $k = 1..K$ do
    draw randomly a set $S_\pi^k$ of examples of class $k$ from $S$ with probability $\pi_k$;
    $S_\pi \leftarrow S_\pi \cup S_\pi^k$;
forall $x^y \in S_\pi$ do
    draw uniformly a set $\mathcal{Y}_x^y$ of $\kappa$ classes from $\mathcal{Y} \setminus \{y\}$  ($\kappa \ll K$)
    forall $k \in \mathcal{Y}_x^y$ do
        if $k < y$ then
            $T_\kappa(S_\pi) \leftarrow T_\kappa(S_\pi) \cup \{(z = (\phi(x^k), \phi(x^y)),\ \tilde{y} = -1)\}$;
        else
            $T_\kappa(S_\pi) \leftarrow T_\kappa(S_\pi) \cup \{(z = (\phi(x^y), \phi(x^k)),\ \tilde{y} = +1)\}$;
return $T_\kappa(S_\pi)$
1. For each class $k \in \{1, \ldots, K\}$, draw randomly a set $S_\pi^k$ of examples of that class from $S$ with probability $\pi_k$, and let $S_\pi = \bigcup_{k=1}^K S_\pi^k$;
2. For each example $x^y$ in $S_\pi$, draw uniformly $\kappa$ adversarial classes in $\mathcal{Y} \setminus \{y\}$.

After this double sampling, we apply the transformation $T$ defined as in Eq. (3), leading to a set $T_\kappa(S_\pi)$ of size $\kappa |S_\pi| \ll N$.
In Section 3, we will show that this procedure practically leads to dramatic improvements in terms of memory consumption, computational complexity, and multi-class prediction accuracy when the number of classes is very large. The empirical loss over the transformation of the new subsampled training set $S_\pi$ of size $M$, outputted by the $(\pi, \kappa)$-DS algorithm, is:
$$\widehat{R}_{T_\kappa(S_\pi)}(h) = \frac{1}{\kappa M} \sum_{(\tilde{y}_\alpha, z_\alpha) \in T_\kappa(S_\pi)} \mathbb{1}_{\tilde{y}_\alpha h(z_\alpha) \le 0} = \frac{1}{\kappa M} \sum_{x^y \in S_\pi} \sum_{y' \in \mathcal{Y}_x^y} \mathbb{1}_{g(x^y) - g(x^{y'}) \le 0}, \qquad (6)$$
which is essentially the same empirical risk as the one defined in Eq. (2), but taken with respect to the training set outputted by the $(\pi, \kappa)$-DS algorithm. Our main result is the following theorem, which bounds the generalization error of a function $h \in \mathcal{H}$ learned by minimizing $\widehat{R}_{T_\kappa(S_\pi)}$.
Theorem 2. Let $S = (x_i^{y_i})_{i=1}^m \in (\mathcal{X} \times \mathcal{Y})^m$ be a training set of size $m$ drawn i.i.d. according to a probability distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, and $T(S) = ((z_i, \tilde{y}_i))_{i=1}^N$ the transformed set obtained with the transformation function $T$ defined as in Eq. (3). Let $S_\pi \subseteq S$, $|S_\pi| = M$, be a training set outputted by the algorithm $(\pi, \kappa)$-DS and $T(S_\pi) \subseteq T(S)$ its corresponding transformation. Then for any $1 > \delta > 0$, with probability at least $(1 - \delta)$ the following risk bound holds for all $h \in \mathcal{H}$:
$$R(h) \le \chi \widehat{R}_{T_\kappa(S_\pi)}(h) + \chi \mathfrak{R}_{T_\kappa(S_\pi)}(\ell \circ \mathcal{H}) + \sqrt{\frac{(K-1)^2 \log \frac{2}{\delta}}{2 M \kappa}} + \sqrt{\frac{2\lambda \log \frac{4K}{\delta}}{m-1}} + \frac{7\lambda \log \frac{4K}{\delta}}{3(m-1)}.$$
Furthermore, for all functions in the class $\mathcal{H}_r$, we have the following generalization bound that holds with probability at least $(1 - \delta)$:
$$R(h) \le \chi \widehat{R}_{T_\kappa(S_\pi)}(h) + \chi \mathfrak{R}_{T_\kappa(S_\pi)}(\ell \circ \mathcal{H}_r) + \sqrt{\frac{2\lambda \log \frac{4K}{\delta}}{m-1}} + \frac{7\lambda \log \frac{4K}{\delta}}{3(m-1)} + \frac{5\lambda}{2}\sqrt{r\, \mathfrak{R}_{T_\kappa(S_\pi)}(\ell \circ \mathcal{H}_r)} + \sqrt{\frac{(K-1)\, r \log \frac{2}{\delta}}{2 M \kappa}} + \frac{25\lambda \log \frac{2}{\delta}}{48\, M},$$
where $\ell : \{-1, +1\} \times \mathbb{R} \to [0, 1]$ is the 0/1 instantaneous loss, $\chi = \max_{1 \le y \le K} \eta_y/\pi_y$, $\lambda = \max_{1 \le y \le K} 1/\pi_y$, and $\eta_y > 0$ is the proportion of class $y$ in $S$.
The proof is provided in the supplementary material. This theorem hence paves the way for the
consistency of the empirical risk minimization principle [26, Th. 2.1, p. 38] defined over the double
sample reduction strategy we propose.
2.3 Prediction with Candidate Selection
The prediction is carried out in the dyadic feature space: we first consider the pairs constituted by a test observation and all the classes, and then choose the class that receives the highest score from the learned classifier. In the large-scale scenario, computing the feature representations for all classes may require a huge amount of time. To overcome this problem, we sample over classes by choosing just those that are the nearest to a test example, based on its distance to class centroids. Here we propose to consider class centroids as the average of the vectors within that class. Note that class centroids are computed once in the preliminary projection of training examples and classes into the dyadic feature space, and thus represent no additional computation at this stage. The algorithm below presents the pseudocode of this candidate-based selection strategy.¹

Algorithm: Prediction with Candidate Selection
Input: unlabeled test set $\mathcal{T}$; learned function $f^* : \mathbb{R}^p \to \mathbb{R}$
initialization: $\widehat{Y} \leftarrow \emptyset$
forall $x \in \mathcal{T}$ do
    select $\mathcal{Y}_x \subseteq \mathcal{Y}$, the candidate set of $q$ nearest-centroid classes;
    $\widehat{Y} \leftarrow \widehat{Y} \cup \operatorname{argmax}_{k \in \mathcal{Y}_x} f^*(\phi(x^k))$;
return predicted classes $\widehat{Y}$
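The candidate selection step can be sketched as below (illustrative code, not the paper's implementation; `f_star` and `phi` stand for the learned scorer and the joint representation assumed above):

```python
import numpy as np

def predict_with_candidates(x, centroids, f_star, phi, q=10):
    """Sketch of prediction with nearest-centroid candidate preselection.

    x         : (d,) raw representation of a test example
    centroids : (K, d) per-class centroid matrix (rows are class means)
    f_star    : learned scoring function on joint features phi(x^k)
    phi       : callable (x, k) -> joint representation of (x, class k)
    Only the q classes with the closest centroid are scored.
    """
    dists = np.linalg.norm(centroids - x, axis=1)
    candidates = np.argsort(dists)[:q]
    best = max(candidates, key=lambda k: f_star(phi(x, int(k))))
    return int(best)
```

Tuning `q` trades prediction time for accuracy, as noted in the footnote.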
3 Experiments
In this section, we provide an empirical evaluation of the proposed reduction approach with the $(\pi, \kappa)$-DS sampling strategy for large-scale multi-class classification of document collections. First, we present the mapping $\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^p$. Then, we provide a statistical and computational comparison of our method with state-of-the-art large-scale approaches on popular datasets.
3.1 A joint example/class representation for text classification
The particularity of text classification is that documents are represented in a vector space induced by
the vocabulary of the corresponding collection [23]. Hence each class can be considered as a megadocument, constituted by the concatenation of all of the documents in the training set belonging to it,
1
The number of classes pre-selected can be tuned to offer a prediction time/accuracy tradeoff if the prediction
time is more critical.
Features in the joint example/class representation φ(x^y):
 1. Σ_{t∈y∩x} log(1 + y_t)
 2. Σ_{t∈y∩x} log(1 + l_S / F_t)
 3. Σ_{t∈y∩x} I_t
 4. Σ_{t∈y∩x} (y_t / |y|) · I_t
 5. Σ_{t∈y∩x} log(1 + y_t / |y|)
 6. Σ_{t∈y∩x} log(1 + (y_t / |y|) · I_t)
 7. Σ_{t∈y∩x} log(1 + (y_t / |y|) · (l_S / F_t))
 8. Σ_{t∈y∩x} 1
 9. d(x^y, centroid(y))
10. BM25 = Σ_{t∈y∩x} I_t · (2 · y_t) / (y_t + 0.25 + 0.75 · len(y)/avg(len(y)))

Table 1: Joint example/class representation for text classification, where t ∈ y ∩ x are terms that are
present in both the class y's mega-document and document x. V represents the set of distinct terms
within S, and x_t is the frequency of term t in x, y_t = Σ_{x∈y} x_t, |y| = Σ_{t∈V} y_t, F_t = Σ_{x∈S} x_t,
l_S = Σ_{t∈V} S_t. Finally, I_t is the inverse document frequency of term t, len(y) is the number of terms of
documents in class y, and avg(len(y)) is the average of document lengths over all the classes.
and simple feature mapping of examples and classes can be defined over their common words. Here
we used p = 10 features inspired from learning to rank [17] by resembling a class and a document to
respectively a document and a query (Table 1). All features except feature 9, that is the distance of
an example x to the centroid of all examples of a particular class y, are classical. In addition to its
predictive interest, the latter is also used in prediction for performing candidate preselection. Note
that for other large-scale multi-class classification applications, such as recommendation with an
extremely large number of offer categories or image classification, the same kind of mapping can
either be learned or defined using their characteristics [27, 28].
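To make the mapping concrete, the following toy sketch (illustrative code, not the authors' implementation) computes three of the Table 1 features on term-frequency dictionaries; the remaining features follow the same pattern of summing over the common terms t ∈ y ∩ x.

```python
import math

def joint_features(x_tf, y_tf, idf):
    """A few of the Table 1 features for a (document, class) pair.

    x_tf: term -> frequency of the term in document x
    y_tf: term -> frequency y_t in the class mega-document y
    idf:  term -> inverse document frequency I_t
    """
    common = set(x_tf) & set(y_tf)                   # terms t in y ∩ x
    f1 = sum(math.log(1 + y_tf[t]) for t in common)  # feature 1
    f3 = sum(idf[t] for t in common)                 # feature 3
    f8 = float(len(common))                          # feature 8
    return [f1, f3, f8]
```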
3.2  Experimental Setup
Datasets. We evaluate the proposed method using popular datasets from the Large Scale Hierarchical
Text Classification challenges (LSHTC) 1 and 2 [20]. These datasets are provided in a pre-processed
format using stop-word removal and stemming. Various characteristics of these datasets, including
the statistics of the train, test and heldout sets, are listed in Table 2. Since the datasets used in the
LSHTC2 challenge were in multi-label format, we converted them to multi-class format by replicating
the instances belonging to different class labels. Also, for the largest dataset (WIKI-large) used in the
LSHTC2 challenge, we used samples with 50,000 and 100,000 classes. The smaller dataset of the
LSHTC2 challenge is named WIKI-Small, whereas the two 50K and 100K samples of the large
dataset are named WIKI-50K and WIKI-100K in our results section.
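The multi-label to multi-class conversion described above amounts to the following trivial sketch (instance and label types are illustrative):

```python
def to_multiclass(instances):
    """Replicate each (features, labels) instance once per label."""
    return [(x, y) for x, labels in instances for y in labels]
```

An instance with three labels thus appears three times in the converted training set, once per class.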
Datasets      # of classes, K   Train Size   Test Size   Heldout Size   Dimension, d
LSHTC1             12294           126871       31718         5000          409774
DMOZ               27875           381149       95288        34506          594158
WIKI-Small         36504           796617      199155         5000          380078
WIKI-50K           50000          1102754      276939         5000          951558
WIKI-100K         100000          2195530      550133         5000         1271710

Table 2: Characteristics of the datasets used in our experiments
Baselines. We compare the proposed approach,² denoted by its sampling strategy as (π, κ)-DS,
with the popular baselines listed below:
- OVA: LibLinear [10] implementation of one-vs-all SVM.
- M-SVM: LibLinear implementation of the multi-class SVM proposed in [8].
- RecallTree [9]: A recent tree-based multi-class classifier implemented in Vowpal Wabbit.

² Source code and datasets can be found in the following repository: https://github.com/bikash617/AggressiveSampling-for-Multi-class-to-BinaryReduction
LSHTC1 (m = 163589, d = 409774, K = 12294)
              OVA       M-SVM     RecallTree  FastXML   PfastReXML  PD-Sparse  (π, κ)-DS
train time    23056s    48313s    701s        8564s     3912s       5105s      321s
predict time  328s      314s      21s         339s      164s        67s        544s
total memory  40.3G     40.3G     122M        470M      471M        10.5G      2G
Accuracy      44.1%     36.4%     18.1%       39.3%     39.8%       45.7%      37.4%
MaF1          27.4%     18.8%     3.8%        21.3%     22.4%       27.7%      26.5%

DMOZ (m = 510943, d = 594158, K = 27875)
              OVA       M-SVM     RecallTree  FastXML   PfastReXML  PD-Sparse  (π, κ)-DS
train time    180361s   212356s   2212s       14334s    15492s      63286s     1060s
predict time  2797s     3981s     47s         424s      505s        482s       2122s
total memory  131.9G    131.9G    256M        1339M     1242M       28.1G      5.3G
Accuracy      37.7%     32.2%     16.9%       33.4%     33.7%       40.8%      27.8%
MaF1          22.2%     14.3%     1.75%       15.1%     15.9%       22.7%      20.5%

WIKI-Small (m = 1000772, d = 380078, K = 36504)
              OVA       M-SVM     RecallTree  FastXML   PfastReXML  PD-Sparse  (π, κ)-DS
train time    212438s   >4d       1610s       10646s    21702s      16309s     1290s
predict time  2270s     NA        24s         453s      871s        382s       2577s
total memory  109.1G    109.1G    178M        949M      947M        12.4G      3.6G
Accuracy      15.6%     NA        7.9%        11.1%     12.1%       15.6%      21.5%
MaF1          8.8%      NA        <1%         4.6%      5.63%       9.91%      13.3%

WIKI-50K (m = 1384693, d = 951558, K = 50000)
              OVA       M-SVM     RecallTree  FastXML   PfastReXML  PD-Sparse  (π, κ)-DS
train time    NA        NA        4188s       30459s    48739s      41091s     3723s
predict time  NA        NA        45s         1110s     2461s       790s       4083s
total memory  330G      330G      226M        1327M     1781M       35G        5G
Accuracy      NA        NA        17.9%       25.8%     27.3%       33.8%      33.4%
MaF1          NA        NA        5.5%        14.6%     16.3%       23.4%      24.5%

WIKI-100K (m = 2750663, d = 1271710, K = 100000)
              OVA       M-SVM     RecallTree  FastXML   PfastReXML  PD-Sparse  (π, κ)-DS
train time    NA        NA        8593s       42359s    73371s      155633s    9264s
predict time  NA        NA        90s         1687s     3210s       3121s      20324s
total memory  1017G     1017G     370M        2622M     2834M       40.3G      9.8G
Accuracy      NA        NA        8.4%        15%       16.1%       22.2%      25%
MaF1          NA        NA        1.4%        8%        9%          15.1%      17.8%

Table 3: Comparison of the results of various baselines in terms of time, memory, accuracy, and
macro F1-measure
- FastXML [21]: An extreme multi-class classification method which performs partitioning in
  the feature space for faster prediction.
- PfastReXML [13]: Tree-ensemble based extreme classifier for multi-class and multi-label
  problems.
- PD-Sparse [30]: A recent approach which uses a multi-class loss with ℓ1-regularization.

Following [30], we did not consider the other recent methods SLEEC [5] and LEML [31] in our
experiments, since they have been shown to be consistently outperformed by the above-mentioned
state-of-the-art approaches.
Platform and Parameters. In all of our experiments, we used a machine with an Intel Xeon 2.60GHz
processor and 256 GB of RAM. Each of these methods requires tuning of various hyper-parameters
that influence its performance. For each method, we tuned the hyper-parameters over a heldout set
and used the combination which gave the best predictive performance. The list of hyper-parameters
used for the results we obtained is reported in the supplementary material (Appendix B).
Evaluation Measures. Different approaches are evaluated over the test sets using accuracy and
the macro F1 measure (MaF1 ), which is the harmonic average of macro precision and macro recall;
higher MaF1 thus corresponds to better performance. As opposed to accuracy, macro F1 measure is
not affected by the class imbalance problem inherent to multi-class classification, and is commonly
used as a robust measure for comparing predictive performance of classification methods.
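For reference, MaF1 as defined here (the harmonic mean of macro-averaged precision and macro-averaged recall) can be computed as in the following minimal sketch:

```python
def macro_f1(y_true, y_pred):
    """Harmonic mean of macro-averaged precision and macro-averaged recall."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    macro_p = sum(precisions) / len(classes)
    macro_r = sum(recalls) / len(classes)
    return 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r else 0.0
```

Because every class contributes equally to the macro averages, rare classes weigh as much as frequent ones, which is why MaF1 is robust to class imbalance.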
4  Results
The parameters of the datasets, along with the results for the compared methods, are shown in Table 3.
The results are provided in terms of train and predict times, total memory usage, and predictive
performance measured with accuracy and macro F1-measure (MaF1). For better visualization and
comparison, we plot the same results as bar plots in Fig. 3, keeping only the five best methods while
comparing the total runtime and memory usage. First, we observe that the tree-based approaches
(FastXML, PfastReXML and RecallTree) have worse predictive performance compared to the other
methods. This is due to the fact that a prediction error made at the top level of the tree cannot be
corrected at lower levels, also known as the cascading effect. Even though they have lower runtime and
memory usage, they suffer from this side effect.
For the large-scale collections (WIKI-Small, WIKI-50K and WIKI-100K), the solvers with competitive
predictive performance are OVA, M-SVM, PD-Sparse and (π, κ)-DS. However, standard OVA and
[Figure 3 bar plots: total time (min.), total memory (GB), accuracy (%) and MaF1 (%) on LSHTC1,
DMOZ, WIKI-Small, WIKI-50K and WIKI-100K; legend: RecallTree, FastXML, PfastReXML,
PD-Sparse, Proposed (π, κ)-DS.]
Figure 3: Comparisons in Total (Train and Test) Time (min.), Total Memory usage (GB), and MaF1 of
the five best performing methods on LSHTC1, DMOZ, WIKI-Small, WIKI-50K and WIKI-100K.
M-SVM have a complexity that grows linearly with K, thus their total runtime and memory usage
explode on those datasets, making them impractical. For instance, on the Wiki large dataset sample
with 100K classes, the memory consumption of both approaches exceeds a terabyte and they take
several days to complete the training. Furthermore, on this dataset and the second largest Wikipedia
collection (WIKI-50K and WIKI-100K), the proposed approach is highly competitive in terms of
time, total memory and both performance measures compared to all the other approaches.
These results suggest that the method least affected by long-tailed class distributions is (π, κ)-DS,
mainly for two reasons: first, the sampling tends to make the training set balanced and, second,
the reduced binary dataset contains similar numbers of positive and negative examples. Hence,
for the proposed approach, there is an improvement in both accuracy and MaF1 measures. The
recent PD-Sparse method also enjoys competitive predictive performance, but it requires storing
intermediary weight vectors during optimization, which prevents it from scaling well. The PD-Sparse
solver provides an option for hashing, leading to lower memory usage during training, which we used
in the experiments; however, the memory usage is still significantly high for large datasets, and at the
same time this option slows down the training process considerably. Overall, among the methods
with competitive predictive performance, (π, κ)-DS presents the best runtime and memory usage;
its runtime is even competitive with most tree-based methods, leading it to provide the best
compromise among the compared methods across the three time, memory and performance measures.
5  Conclusion
We presented a new method for reducing a multi-class classification problem to binary classification.
We employ a similarity-based feature representation for classes and examples, and a double sampling
stochastic scheme for the reduction process. Even though the sampling scheme shifts the distribution
of classes, and the reduction of the original problem to a binary classification problem brings
inter-dependency between the dyadic examples, we provide generalization error bounds suggesting
that the Empirical Risk Minimization principle over the transformation of the sampled training set
still remains consistent. Furthermore, the characteristics of the algorithm contribute to its excellent
performance in terms of memory usage and total runtime, and make the proposed approach highly
suitable for the large-class scenario.
Acknowledgments
This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01)
funded by the French program Investissement d'avenir, and by the U.S. Department of Energy's
Office of Electricity as part of the DOE Grid Modernization Initiative.
References
[1] Naoki Abe, Bianca Zadrozny, and John Langford. An iterative method for multi-class cost-sensitive
learning. In Proceedings of the 10th ACM SIGKDD, KDD '04, pages 3-11, 2004.
[2] Rohit Babbar, Cornelia Metzig, Ioannis Partalas, Eric Gaussier, and Massih R. Amini. On power law
distributions in large-scale taxonomies. SIGKDD Explorations, 16(1), 2014.
[3] Samy Bengio, Jason Weston, and David Grangier. Label embedding trees for large multi-class tasks. In
Advances in Neural Information Processing Systems, pages 163-171, 2010.
[4] Alina Beygelzimer, John Langford, and Pradeep Ravikumar. Error-correcting tournaments. In Proceedings
of the 20th International Conference on Algorithmic Learning Theory, ALT'09, pages 247-262, 2009.
[5] Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse local embeddings
for extreme multi-label classification. In Advances in Neural Information Processing Systems, pages
730-738, 2015.
[6] Anna Choromanska, Alekh Agarwal, and John Langford. Extreme multi class classification. In NIPS
Workshop: eXtreme Classification, submitted, 2013.
[7] Anna Choromanska and John Langford. Logarithmic time online multiclass prediction. CoRR,
abs/1406.1822, 2014.
[8] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector
machines. J. Mach. Learn. Res., 2:265-292, 2002.
[9] Hal Daume III, Nikos Karampatziakis, John Langford, and Paul Mineiro. Logarithmic time one-against-some. arXiv preprint arXiv:1606.04988, 2016.
[10] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library
for large linear classification. J. Mach. Learn. Res., 9:1871-1874, 2008.
[11] Daniel J. Hsu, Sham M. Kakade, John Langford, and Tong Zhang. Multi-label prediction via compressed
sensing. In Advances in Neural Information Processing Systems 22 (NIPS), pages 772-780, 2009.
[12] Eyke Hüllermeier and Johannes Fürnkranz. On minimizing the position error in label ranking. In Machine
Learning: ECML 2007, pages 583-590. Springer, 2007.
[13] Himanshu Jain, Yashoteja Prabhu, and Manik Varma. Extreme multi-label loss functions for recommendation, tagging, ranking & other missing label applications. In Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, pages 935-944. ACM, 2016.
[14] S. Janson. Large deviations for sums of partly dependent random variables. Random Structures and
Algorithms, 24(3):234-248, 2004.
[15] Kalina Jasinska and Nikos Karampatziakis. Log-time and log-space extreme classification. arXiv preprint
arXiv:1611.01964, 2016.
[16] Bikash Joshi, Massih-Reza Amini, Ioannis Partalas, Liva Ralaivola, Nicolas Usunier, and Éric Gaussier.
On binary reduction of large-scale multiclass classification problems. In Advances in Intelligent Data
Analysis XIV - 14th International Symposium, IDA 2015, pages 132-144, 2015.
[17] Tie-Yan Liu, Jun Xu, Tao Qin, Wenying Xiong, and Hang Li. Letor: Benchmark dataset for research on
learning to rank for information retrieval. In Proceedings of SIGIR 2007 workshop on learning to rank for
information retrieval, pages 3-10, 2007.
[18] Ana Carolina Lorena, André C. Carvalho, and João M. Gama. A review on the combination of binary
classifiers in multiclass problems. Artif. Intell. Rev., 30(1-4):19-37, 2008.
[19] Paul Mineiro and Nikos Karampatziakis. Fast label embeddings via randomized linear algebra. In Machine
Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2015, Porto,
Portugal, September 7-11, 2015, Proceedings, Part I, pages 37-51, 2015.
[20] I. Partalas, A. Kosmopoulos, N. Baskiotis, T. Artieres, G. Paliouras, E. Gaussier, I. Androutsopoulos, M.-R.
Amini, and P. Galinari. LSHTC: A benchmark for large-scale text classification. ArXiv e-prints, March
2015.
[21] Yashoteja Prabhu and Manik Varma. FastXML: A fast, accurate and stable tree-classifier for extreme
multi-label learning. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge
discovery and data mining, pages 263-272. ACM, 2014.
[22] Liva Ralaivola and Massih-Reza Amini. Entropy-based concentration inequalities for dependent variables.
In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France,
6-11 July 2015, pages 2436-2444, 2015.
[23] G. Salton, A. Wong, and C. S. Yang. A vector space model for automatic indexing. Commun. ACM,
18(11):613-620, November 1975.
[24] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions.
Machine Learning, 37(3):297-336, 1999.
[25] Nicolas Usunier, Massih-Reza Amini, and Patrick Gallinari. Generalization error bounds for classifiers
trained with interdependent data. In Advances in Neural Information Processing Systems 18 (NIPS), pages
1369-1376, 2005.
[26] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[27] Maksims Volkovs and Richard S. Zemel. Collaborative ranking with 17 parameters. In Advances in Neural
Information Processing Systems 25, pages 2294-2302, 2012.
[28] Jason Weston, Samy Bengio, and Nicolas Usunier. Wsabie: Scaling up to large vocabulary image
annotation. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2011.
[29] Jason Weston and Chris Watkins. Multi-class support vector machines. Technical Report CSD-TR-98-04,
Department of Computer Science, Royal Holloway, University of London, 1998.
[30] Ian E. H. Yen, Xiangru Huang, Kai Zhong, Pradeep Ravikumar, and Inderjit S. Dhillon. PD-Sparse: A primal
and dual sparse approach to extreme multiclass and multilabel classification. In Proceedings of the 33rd
International Conference on Machine Learning, 2016.
[31] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit Dhillon. Large-scale multi-label learning with
missing labels. In International Conference on Machine Learning, pages 593-601, 2014.
Deconvolutional Paragraph Representation Learning
Yizhe Zhang
Dinghan Shen
Guoyin Wang
Zhe Gan
Ricardo Henao
Lawrence Carin
Department of Electrical & Computer Engineering, Duke University
Abstract
Learning latent representations from long text sequences is an important first step
in many natural language processing applications. Recurrent Neural Networks
(RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the
length of the text. We propose a sequence-to-sequence, purely convolutional and
deconvolutional autoencoding framework that is free of the above issue, while
also being computationally efficient. The proposed method is simple, easy to
implement and can be leveraged as a building block for many applications. We
show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised
text classification and summarization tasks demonstrate the potential for better
utilization of long unlabeled text data.
1  Introduction
A central task in natural language processing is to learn representations (features) for sentences or
multi-sentence paragraphs. These representations are typically a required first step toward more
applied tasks, such as sentiment analysis [1, 2, 3, 4], machine translation [5, 6, 7], dialogue systems
[8, 9, 10] and text summarization [11, 12, 13]. An approach for learning sentence representations
from data is to leverage an encoder-decoder framework [14]. In a standard autoencoding setup, a
vector representation is first encoded from an embedding of an input sequence, then decoded to the
original domain to reconstruct the input sequence. Recent advances in Recurrent Neural Networks
(RNNs) [15], especially Long Short-Term Memory (LSTM) [16] and variants [17], have achieved
great success in numerous tasks that heavily rely on sentence-representation learning.
RNN-based methods typically model sentences recursively as a generative Markov process with
hidden units, where the one-step-ahead word from an input sentence is generated by conditioning on
previous words and hidden units, via emission and transition operators modeled as neural networks.
In principle, the neural representations of input sequences aim to encapsulate sufficient information
about their structure, to subsequently recover the original sentences via decoding. However, due to the
recursive nature of the RNN, challenges exist for RNN-based strategies to fully encode a sentence into
a vector representation. Typically, during training, the RNN generates words in sequence conditioning
on previous ground-truth words, i.e., teacher forcing training [18], rather than decoding the whole
sentence solely from the encoded representation vector. This teacher forcing strategy has proven
important because it forces the output sequence of the RNN to stay close to the ground-truth sequence.
However, allowing the decoder to access ground truth information when reconstructing the sequence
weakens the encoder's ability to produce self-contained representations that carry enough information
to steer the decoder through the decoding process without additional guidance. Aiming to solve
this problem, [19] proposed a scheduled sampling approach during training, which gradually shifts
from learning via both latent representation and ground-truth signals to solely use the encoded latent
representation. Unfortunately, [20] showed that scheduled sampling is a fundamentally inconsistent
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
training strategy, in that it produces largely unstable results in practice. As a result, training may fail
to converge on occasion.
During inference, for which ground-truth sentences are not available, words ahead can only be generated by conditioning on previously generated words through the representation vector. Consequently,
decoding errors compound in proportion to the length of the sequence. This means that generated
sentences quickly deviate from the ground-truth once an error has been made, and as the sentence
progresses. This phenomenon was coined exposure bias in [19].
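The contrast between teacher forcing and free-running decoding, and the resulting compounding of errors, can be illustrated with a deliberately imperfect toy next-token model (purely illustrative; real decoders are neural networks conditioned on a latent vector):

```python
# A toy "decoder": a lookup table of next-token predictions with one learned
# mistake ("b" -> "x" instead of "c").
MODEL = {"a": "b", "b": "x", "c": "d", "d": "e"}

def teacher_forced(ground_truth):
    """Each step conditions on the TRUE previous token (training-time setting)."""
    return [MODEL.get(prev, "?") for prev in ground_truth[:-1]]

def free_running(first_token, steps):
    """Each step conditions on the model's OWN previous output (inference)."""
    out, prev = [], first_token
    for _ in range(steps):
        prev = MODEL.get(prev, "?")
        out.append(prev)
    return out
```

With ground truth "abcde", teacher forcing makes exactly one error, while free-running decoding derails after the first mistake and never recovers.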
We propose a simple yet powerful purely convolutional framework for learning sentence representations. Conveniently, without RNNs in our framework, issues connected to teacher forcing training
and exposure bias are not relevant. The proposed approach uses a Convolutional Neural Network
(CNN) [21, 22, 23] as encoder and a deconvolutional (i.e., transposed convolutional) neural network
[24, 25] as decoder. To the best of our knowledge, the proposed framework is the first to force
the encoded latent representation to capture information from the entire sentence via a multi-layer
CNN specification, to achieve high reconstruction quality without leveraging RNN-based decoders.
Our multi-layer CNN allows representation vectors to abstract information from the entire sentence,
irrespective of order or length, making it an appealing choice for tasks involving long sentences or
paragraphs. Further, since our framework does not involve recursive encoding or decoding, it can
be very efficiently parallelized using convolution-specific Graphics Processing Unit (GPU) primitives,
yielding significant computational savings compared to RNN-based models.
2  Convolutional Auto-encoding for Text Modeling
2.1  Convolutional encoder
Let w_t denote the t-th word in a given sentence. Each word w_t is embedded into a k-dimensional
word vector x_t = W_e[w_t], where W_e ∈ R^(k×V) is a (learned) word embedding matrix, V is the
vocabulary size, and W_e[v] denotes the v-th column of W_e. All columns of W_e are normalized
to have unit ℓ2-norm, i.e., ||W_e[v]||_2 = 1, ∀v, by dividing each column by its ℓ2-norm. After
embedding, a sentence of length T (padded where necessary) is represented as X ∈ R^(k×T), by
concatenating its word embeddings, i.e., x_t is the t-th column of X.
For sentence encoding, we use a CNN architecture similar to [26], though originally proposed for
image data. The CNN consists of L layers (L − 1 convolutional, and the L-th fully-connected) that
ultimately summarize an input sentence into a (fixed-length) latent representation vector, h. Layer
l ∈ {1, . . . , L} consists of p_l filters, learned from data. For the i-th filter in layer 1, a convolution
operation with stride length r^(1) applies filter W_c^(i,1) ∈ R^(k×h) to X, where h is the convolution
filter size. This yields the latent feature map c^(i,1) = γ(X ∗ W_c^(i,1) + b^(i,1)) ∈ R^((T−h)/r^(1)+1),
where γ(·) is a nonlinear activation function, b^(i,1) ∈ R^((T−h)/r^(1)+1), and ∗ denotes the
convolution operator. In our experiments, γ(·) is a Rectified Linear Unit (ReLU) [27]. Note that the
original embedding dimension, k, changes after the first convolutional layer, as
c^(i,1) ∈ R^((T−h)/r^(1)+1), for i = 1, . . . , p_1. Concatenating the results from the p_1 filters (for
layer 1) results in the feature map C^(1) = [c^(1,1) . . . c^(p_1,1)] ∈ R^(p_1×[(T−h)/r^(1)+1]).
After this first convolutional layer, we apply the convolution operation to the feature map, C(1), using the same filter size, h, with this repeated in sequence for L - 1 layers. Each time, the length along the spatial coordinate is reduced to T(l+1) = ⌊(T(l) - h)/r(l) + 1⌋, where r(l) is the stride length, T(l) is the spatial length, l denotes the l-th layer and ⌊·⌋ is the floor function. For the final layer, L, the feature map C(L-1) is fed into a fully-connected layer, to produce the latent representation h. Implementation-wise, we use a convolutional layer with filter size equal to T(L-1) (regardless of h), which is equivalent to a fully-connected layer; this implementation trick has also been utilized in [26]. This last layer summarizes all remaining spatial coordinates, T(L-1), into scalar features that encapsulate sentence sub-structures throughout the entire sentence, characterized by filters {Wc(i,l)} for i = 1, . . . , pl and l = 1, . . . , L, where Wc(i,l) denotes filter i for layer l. This also implies that the extracted feature is of fixed dimensionality, independent of the length of the input sentence.
[Figure 1 diagram: a 300 × 60 embedding matrix X is convolved through feature maps C(1) (28 × 300) and C(2) (12 × 600) down to a 500-dimensional vector h, then mirrored back through deconvolution layers (12 × 600, 28 × 300, 300 × 60).]
Figure 1: Convolutional auto-encoding architecture. Encoder: the input sequence is first expanded to an embedding matrix, X, then fully compressed to a representation vector h, through a multi-layer convolutional encoder with stride. In the last layer, the spatial dimension is collapsed to remove the spatial dependency. Decoder: the latent vector h is fed through a multi-layer deconvolutional decoder with stride to reconstruct X as X̂, via cosine-similarity cross-entropy loss.
Having pL filters on the last layer results in a pL-dimensional representation vector, h = C(L), for the input sentence. For example, in Figure 1, the encoder consists of L = 3 layers, which for a sentence of length T = 60, embedding dimension k = 300, stride lengths {r(1), r(2), r(3)} = {2, 2, 1}, filter sizes h = {5, 5, 12} and numbers of filters {p1, p2, p3} = {300, 600, 500}, results in intermediate feature maps C(1) and C(2) of sizes 28 × 300 and 12 × 600, respectively. The last feature map, of size 1 × 500, corresponds to the latent representation vector, h.
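The spatial-size arithmetic in this example can be checked directly (a short sketch of the stride formula from Section 2.1):

```python
import math

def conv_out_len(T, h, r):
    # T(l+1) = floor((T(l) - h) / r(l) + 1)
    return math.floor((T - h) / r + 1)

T1 = conv_out_len(60, 5, 2)   # layer 1: 60 -> 28
T2 = conv_out_len(T1, 5, 2)   # layer 2: 28 -> 12
print(T1, T2)                 # 28 12
```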
Conceptually, filters from the lower layers capture primitive sentence information (h-grams, analogous to edges in images), while higher level filters capture more sophisticated linguistic features, such
as semantic and syntactic structures (analogous to image elements). Such a bottom-up architecture
models sentences by hierarchically stacking text segments (h-grams) as building blocks for representation vector, h. This is similar in spirit to modeling linguistic grammar formalisms via concrete
syntax trees [28], however, we do not pre-specify a tree structure based on some syntactic structure
(i.e., English language), but rather abstract it from data via a multi-layer convolutional network.
2.2 Deconvolutional decoder
We apply the deconvolution with stride (i.e., convolutional transpose), as the conjugate operation of
convolution, to decode the latent representation, h, back to the source (discrete) text domain. As
the deconvolution operation proceeds, the spatial resolution gradually increases, by mirroring the
convolutional steps described above, as illustrated in Figure 1. The spatial dimension is first expanded
to match the spatial dimension of the (L - 1)-th layer of convolution, then progressively expanded as T(l+1) = (T(l) - 1) ∗ r(l) + h, for l = 1, . . . , up to the L-th deconvolutional layer (which corresponds to the input layer of the convolutional encoder). The output of the L-layer deconvolution operation aims to reconstruct the word embedding matrix, which we denote as X̂. In line with word embedding matrix We, columns of X̂ are normalized to have unit ℓ2-norm.
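As a sketch of this expansion rule (note that, because of the floor in the encoder formula, the expanded length may be off by a small amount; we assume padding or cropping aligns it with the encoder's spatial sizes):

```python
def deconv_out_len(T, h, r):
    # T(l+1) = (T(l) - 1) * r(l) + h for a stride-r transposed convolution
    return (T - 1) * r + h

# expanding from the encoder's last spatial size in Figure 1 (12)
print(deconv_out_len(12, 5, 2))  # 27, to be aligned with the encoder's 28
```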
Denoting ŵt as the t-th word in reconstructed sentence ŝ, the probability of ŵt to be word v is specified as

p(ŵt = v) = exp[τ^(-1) Dcos(x̂t, We[v])] / Σ_{v′∈V} exp[τ^(-1) Dcos(x̂t, We[v′])] ,    (1)
where Dcos(x, y) is the cosine similarity, defined as ⟨x, y⟩/(‖x‖ ‖y‖), We[v] is the v-th column of We, x̂t is the t-th column of X̂, and τ is a positive number we denote as temperature parameter [29]. This
parameter is akin to the concentration parameter of a Dirichlet distribution, in that it controls the spread of the probability vector [p(ŵt = 1) . . . p(ŵt = V)]; thus a large τ encourages uniformly distributed probabilities, whereas a small τ encourages sparse, concentrated probability values. In the experiments we set τ = 0.01. Note that in our setting, the cosine similarity can be obtained as an inner product, provided that columns of We and X̂ have unit ℓ2-norm by specification. This deconvolutional module can also be leveraged as a building block in VAEs [30, 31] or GANs [32, 33].
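The decoding distribution in (1) can be sketched as follows (toy sizes and illustrative names); it also shows why, with unit-norm columns, the cosine reduces to an inner product, and how the temperature controls concentration:

```python
import numpy as np

def word_probs(x_hat, We, tau=0.01):
    # with unit-norm columns, the cosine similarity is an inner product:
    # D_cos(x_hat, We[:, v]) = We[:, v]^T x_hat
    logits = We.T @ x_hat / tau
    logits = logits - logits.max()   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
We = rng.normal(size=(8, 20))
We /= np.linalg.norm(We, axis=0)     # unit-norm embedding columns
x_hat = We[:, 3]                     # decoder output closest to word 3
p_sharp = word_probs(x_hat, We, tau=0.01)
p_flat = word_probs(x_hat, We, tau=10.0)
print(int(p_sharp.argmax()))         # 3: a small tau concentrates mass there
```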
2.3 Model learning
The objective of the convolutional autoencoder described above can be written as the word-wise log-likelihood for all sentences s ∈ D, i.e.,

Lae = Σ_{d∈D} Σ_t log p(ŵdt = wdt) ,    (2)
where D denotes the set of observed sentences. The simple, maximum-likelihood objective in (2)
is optimized via stochastic gradient descent. Details of the implementation are provided in the
experiments. Note that (2) differs from prior related work in two ways: i) [22, 34] use pooling and
un-pooling operators, while we use convolution/deconvolution with stride; and ii) more importantly,
[22, 34] do not use a cosine similarity reconstruction as in (1), but a RNN-based decoder. A further
discussion of related work is provided in Section 3. We could use pooling and un-pooling instead
of striding (a particular case of deterministic pooling/un-pooling), however, in early experiments
(not shown) we did not observe significant performance gains, while convolution/deconvolution
operations with stride are considerably more efficient in terms of memory footprint. Compared to a standard LSTM-based RNN sequence autoencoder with roughly the same number of parameters,
computations in our case are considerably faster (see experiments) using single NVIDIA TITAN X
GPU. This is due to the high parallelization efficiency of CNNs via cuDNN primitives [35].
Comparison between deconvolutional and RNN Decoders The proposed framework can be seen
as a complementary building block for natural language modeling. Contrary to the standard LSTM-based decoder, the deconvolutional decoder imposes in general a less strict sequence dependency
compared to RNN architectures. Specifically, generating a word from an RNN requires a vector of
hidden units that recursively accumulate information from the entire sentence in an order-preserving
manner (long-term dependencies are heavily down-weighted), while for a deconvolutional decoder,
the generation only depends on a representation vector that encapsulates information from throughout
the sentence without a pre-specified ordering structure. As a result, for language generation tasks, an RNN decoder will usually generate more coherent text, when compared to a deconvolutional decoder.
On the contrary, a deconvolutional decoder is better at accounting for distant dependencies in long
sentences, which can be very beneficial in feature extraction for classification and text summarization
tasks.
2.4 Semi-supervised classification and summarization
Identifying related topics or sentiments, and abstracting (short) summaries from user-generated content
such as blogs or product reviews, has recently received significant interest [1, 3, 4, 36, 37, 13, 11]. In
many practical scenarios, unlabeled data are abundant, however, there are not many practical cases
where the potential of such unlabeled data is fully realized. Motivated by this opportunity, here we
seek to complement scarcer but more valuable labeled data, to improve the generalization ability of
supervised models. By ingesting unlabeled data, the model can learn to abstract latent representations
that capture the semantic meaning of all available sentences irrespective of whether or not they are
labeled. This can be done prior to the supervised model training, as a two-step process. Recently,
RNN-based methods exploiting this idea have been widely utilized and have achieved state-of-the-art
performance in many tasks [1, 3, 4, 36, 37]. Alternatively, one can learn the autoencoder and classifier
jointly, by specifying a classification model whose input is the latent representation, h; see for
instance [38, 31].
In the case of product reviews, for example, each review may contain hundreds of words. This poses
challenges when training RNN-based sequence encoders, in the sense that the RNN has to abstract
information on-the-fly as it moves through the sentence, which often leads to loss of information,
particularly in long sentences [39]. Furthermore, the decoding process uses ground-truth information
during training, thus the learned representation may not necessarily keep all information from the
input text that is necessary for proper reconstruction, summarization or classification.
We consider applying our convolutional autoencoding framework to semi-supervised learning from long sentences and paragraphs. Instead of pre-training a fully unsupervised model as in [1, 3], we cast
the semi-supervised task as a multi-task learning problem similar to [40], i.e., we simultaneously train
a sequence autoencoder and a supervised model. In principle, by using this joint training strategy,
the learned paragraph embedding vector will preserve both reconstruction and classification ability.
Specifically, we consider the following objective:

Lsemi = α Σ_{d∈{Dl+Du}} Σ_t log p(ŵdt = wdt) + Σ_{d∈Dl} Lsup(f(hd), yd) ,    (3)
where α > 0 is an annealing parameter balancing the relative importance of the supervised and unsupervised losses; Dl and Du denote the set of labeled and unlabeled data, respectively. The first term in (3) is the sequence autoencoder loss in (2) for the d-th sequence. Lsup(·) is the supervision loss for the d-th sequence (labeled only). The classifier function, f(·), that attempts to reconstruct yd from hd can be either a Multi-Layer Perceptron (MLP) in classification tasks, or a CNN/RNN in text
summarization tasks. For the latter, we are interested in a purely convolutional specification, however,
we also consider an RNN for comparison. For classification, we use a standard cross-entropy loss,
and for text summarization we use either (2) for the CNN or the standard LSTM loss for the RNN.
In practice, we adopt a scheduled annealing strategy for α as in [41, 42], rather than fixing it a priori as in [1]. During training, (3) gradually transits from focusing solely on the unsupervised sequence autoencoder to the supervised task, by annealing α from 1 to a small positive value αmin. We set αmin = 0.01 in the experiments. The motivation for this annealing strategy is to first focus on abstracting paragraph features, then to selectively refine learned features that are most informative to the supervised task.
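A minimal sketch of the joint objective with annealing (the linear schedule here is an illustrative assumption; [41, 42] describe the schedules actually used, and all names are ours):

```python
def alpha_schedule(step, total_steps, alpha_min=0.01):
    # linearly anneal the reconstruction weight alpha from 1 down to alpha_min
    frac = min(step / total_steps, 1.0)
    return 1.0 + frac * (alpha_min - 1.0)

def semi_supervised_loss(ae_loss, sup_loss, step, total_steps):
    # L_semi = alpha * L_ae (all data) + L_sup (labeled data only), as in (3)
    return alpha_schedule(step, total_steps) * ae_loss + sup_loss

# early training emphasizes reconstruction; late training, the supervised task
print(alpha_schedule(0, 100), alpha_schedule(100, 100))
```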
3 Related Work
Previous work has considered leveraging CNNs as encoders for various natural language processing
tasks [22, 34, 21, 43, 44]. Typically, CNN-based encoder architectures apply a single convolution
layer followed by a pooling layer, which essentially acts as a detector of specific classes of h-grams,
given a convolution filter window of size h. The deep architecture in our framework will, in principle,
enable the high-level layers to capture more sophisticated language features. We use convolutions
with stride rather than pooling operators, e.g., max-pooling, for spatial downsampling following
[26, 45], where it is argued that fully convolutional architectures are able to learn their own spatial
downsampling. Further, [46] uses a 29-layer CNN for text classification. Our CNN encoder is
considerably simpler in structure (convolutions with stride and no more than 4 layers) while still
achieving good performance.
Language decoders other than RNNs are less well studied. Recently, [47] proposed a hybrid model
by coupling a convolutional-deconvolutional network with an RNN, where the RNN acts as decoder
and the deconvolutional model as a bridge between the encoder (convolutional network) and decoder.
Additionally, [42, 48, 49, 50] considered CNN variants, such as pixelCNN [51], for text generation.
Nevertheless, to achieve good empirical results, these methods still require the sentences to be
generated sequentially, conditioning on the ground truth historical information, akin to RNN-based
decoders, thus still suffering from the exposure bias.
Other efforts have been made to improve embeddings from long paragraphs using unsupervised
approaches [2, 52]. The paragraph vector [2] learns a fixed length vector by concatenating it with
a word2vec [53] embedding of history sequence to predict future words. The hierarchical neural
autoencoder [52] builds a hierarchical attentive RNN, then it uses paragraph-level hidden units of
that RNN as embedding. Our work differs from these approaches in that we force the sequence to be
fully restored from the latent representation, without aid from any history information.
Previous methods have considered leveraging unlabeled data for semi-supervised sequence classification tasks. Typically, RNN-based methods consider either i) training a sequence-to-sequence RNN
autoencoder, or an RNN classifier that is robust to adversarial perturbation, as initialization for the encoder in the supervised model [1, 4]; or, ii) learning latent representations via a sequence-to-sequence
RNN autoencoder, and then using them as inputs to a classifier that also takes features extracted from
a CNN as inputs [3]. For summarization tasks, [54] has considered a semi-supervised approach based
on support vector machines, however, so far, research on semi-supervised text summarization using
deep models is scarce.
4 Experiments
Experimental setup For all the experiments, we use a 3-layer convolutional encoder followed by a
3-layer deconvolutional decoder (recall implementation details for the top layer). Filter size, stride
Ground-truth: on every visit to nyc , the hotel beacon is the place we love to stay . so conveniently located to central park , lincoln center and great local restaurants . the rooms are lovely . beds so comfortable , a great little kitchen and new wizz bang coffee maker . the staff are so accommodating and just love walking across the street to the fairway supermarket with every imaginable goodies to eat .

Hier. LSTM [52]: every time in new york , lighthouse hotel is our favorite place to stay . very convenient , central park , lincoln center , and great restaurants . the room is wonderful , very comfortable bed , a kitchenette and a large explosion of coffee maker . the staff is so inclusive , just across the street to walk to the supermarket channel love with all kinds of what to eat .

Our LSTM-LSTM: on every visit to nyc , the hotel beacon is the place to relax and wanting to become conveniently located . hotel , in the evenings out good budget accommodations . the views are great and we were more than two couples . manny the doorman has a great big guy come and will definitly want to leave during my stay and enjoy a wonderfully relaxing wind break in having for 24 hour early rick's cafe . oh perfect ! easy easy walking distance to everything imaginable groceries . if you may want to watch yours !

Our CNN-DCNN: on every visit to nyc , the hotel beacon is the place we love to stay . so closely located to central park , lincoln center and great local restaurants . biggest rooms are lovely . beds so comfortable , a great little kitchen and new UNK suggestion coffee maker . the staff turned so accommodating and just love walking across the street to former fairway supermarket with every food taxes to eat .

Table 1: Reconstructed paragraph of the Hotel Reviews example, used in [52].
and word embedding are set to h = 5, r(l) = 2, for l = 1, . . . , 3 and k = 300, respectively. The dimension of the latent representation vector varies for each experiment, thus is reported separately. For notational convenience, we denote our convolutional-deconvolutional autoencoder as CNN-DCNN. In most comparisons, we also considered two standard autoencoders as baselines: a) CNN-LSTM: CNN encoder coupled with LSTM decoder; and b) LSTM-LSTM: LSTM encoder with
LSTM decoder. An LSTM-DCNN configuration is not included because it yields similar performance
to CNN-DCNN while being more computationally expensive. The complete experimental setup and
baseline details is provided in the Supplementary Material (SM). CNN-DCNN has the least number
of parameters. For example, using 500 as the dimension of h results in about 9, 13, 15 million total
trainable parameters for CNN-DCNN, CNN-LSTM and LSTM-LSTM, respectively.
Model | BLEU | ROUGE-1 | ROUGE-2
LSTM-LSTM [52] | 24.1 | 57.1 | 30.2
Hier. LSTM-LSTM [52] | 26.7 | 59.0 | 33.0
Hier. + att. LSTM-LSTM [52] | 28.5 | 62.4 | 35.5
CNN-LSTM | 18.3 | 56.6 | 28.2
CNN-DCNN | 94.2 | 97.0 | 94.2

Table 2: Reconstruction evaluation results on the Hotel Reviews Dataset.

[Figure 2 plot: BLEU score vs. sentence length (60 to 200 words) for CNN-DCNN, CNN-LSTM and LSTM-LSTM.]
Figure 2: BLEU score vs. sentence length for Hotel Review data.
Paragraph reconstruction We first investigate the performance of the proposed autoencoder in
terms of learning representations that can preserve paragraph information. We adopt evaluation
criteria from [52], i.e., ROUGE score [55] and BLEU score [56], to measure the closeness of the
reconstructed paragraph (model output) to the input paragraph. Briefly, ROUGE and BLEU scores
measures the n-gram recall and precision between the model outputs and the (ground-truth) references.
We use BLEU-4, ROUGE-1, 2 in our evaluation, in alignment with [52]. In addition to the CNN-LSTM and LSTM-LSTM autoencoders, we also compared with the hierarchical LSTM autoencoder [52]. The comparison is performed on the Hotel Reviews dataset, following the experimental setup from [52], i.e., we only keep reviews with sentence length ranging from 50 to 250 words, resulting in 348,544 training data samples and 39,023 testing data samples. For all comparisons, we set the dimension of the latent representation h to 500.
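As a simplified sketch of what ROUGE-n recall measures (single reference, clipped n-gram counts; full ROUGE implementations add stemming and multi-reference support):

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    # fraction of reference n-grams recovered by the candidate (clipped counts)
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    return overlap / max(sum(ref.values()), 1)

ref = "the rooms are lovely".split()
out = "the rooms are small".split()
print(rouge_n_recall(out, ref, 1))   # 0.75: 3 of 4 unigrams recalled
```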
From Table 1, we see that for long paragraphs, the LSTM decoder in CNN-LSTM and LSTM-LSTM
suffers from heavy exposure bias issues. We further evaluate the performance of each model with
different paragraph lengths. As shown in Figure 2 and Table 2, on this task CNN-DCNN demonstrates
a clear advantage, meanwhile, as the length of the sentence increases, the comparative advantage
becomes more substantial. For LSTM-based methods, the quality of the reconstruction deteriorates
quickly as sequences get longer. In contrast, the reconstruction quality of CNN-DCNN is stable and
consistent regardless of sentence length. Furthermore, the computational cost, evaluated as wall-clock,
is significantly lower in CNN-DCNN. Roughly, CNN-LSTM is 3 times slower than CNN-DCNN,
and LSTM-LSTM is 5 times slower on a single GPU. Details are reported in the SM.
Character-level and word-level correction This task seeks to evaluate whether the deconvolutional decoder can overcome exposure bias, which severely limits LSTM-based decoders. We consider
a denoising autoencoder where the input is tweaked slightly with certain modifications, while the
model attempts to denoise (correct) the unknown modification, thus recover the original sentence.
For character-level correction, we consider the Yahoo! Answer dataset [57]. The dataset description
and setup for word-level correction is provided in the SM. We follow the experimental setup in
[58] for word-level and character-level spelling correction (see details in the SM). We considered
substituting each word/character with a different one at random, with substitution probability 0.30.
For character-level analysis, we first map all characters into a 40 dimensional embedding vector, with
the network structure for word- and character-level models kept the same.
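As a sketch, the corruption process used to generate training pairs can be written as follows (function and parameter names here are ours, not from the original setup):

```python
import random

def corrupt(tokens, vocab, p_sub=0.30, seed=0):
    # independently replace each token with a different random token
    # with probability p_sub (0.30 in the experiments)
    rng = random.Random(seed)
    out = []
    for t in tokens:
        if rng.random() < p_sub:
            out.append(rng.choice([v for v in vocab if v != t]))
        else:
            out.append(t)
    return out

chars = list("can anyone suggest some good books ?")
vocab = sorted(set(chars))
noisy = corrupt(chars, vocab)
print("".join(noisy))
```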
[Figure 3 plot: CER vs. training time (hours) for CNN-DCNN, CNN-LSTM and LSTM-LSTM.]
Figure 3: CER comparison. Black triangles indicate the end of an epoch.

[Figure 4 examples (character-level denoising):
Original: c a n a n y o n e s u g g e s t s o m e g o o d b o o k s ?
Modified: c a p a n y o n k w u g g e s t x o h e i o r d y o o k u ?
ActorCritic: c a n a n y o n e w i t h e s t t o e f o r d y o u u ?
LSTM-LSTM: c a n a n y o n e s u g g e s t j o k e f o o d y o u n g ?
CNN-LSTM: c a n a n y o n e g u i t e s s o m e o w e p o o k s ?
CNN-DCNN: c a n a n y o n e s u g g e s t s o m e w o o d b o o k s ?
Original: w h a t s y o u r i d e a o f a s t e p p i n g s t o n e t o b e t t e r t h i n g s t o c o m e ?
Modified: w u a t s y o g r i d e m o f t s t e p u k n g j t z n e t i b e t t e r t h i n g z t t c o e e ?
ActorCritic: w h a t s y o u r i d e m o f t s t e p u a n g j o k n e t i b e t t e r t h i n g i t t c o m e ?
LSTM-LSTM: w h a t s y o u r i d e a o f a s p e a k i n g s t a n d t o b e t t e r t h i n g s t o c o m e ?
CNN-LSTM: w h a t s y o u r i d e m o f a s t e p p i n g s t a r t t o b e t t e r t h i n g t o c o m e ?
CNN-DCNN: w h a t s y o u r i d e a o f a s t e p p i n g s t o n e t o b e t t e r t h i n g s t o c o m e ?]
Figure 4: Spelling error denoising comparison. Darker colors indicate higher uncertainty. Trained on modified sentences.

Model | Yahoo (CER)
Actor-critic [58] | 0.2284
LSTM-LSTM | 0.2621
CNN-LSTM | 0.2035
CNN-DCNN | 0.1323

Model | ArXiv (WER)
LSTM-LSTM | 0.7250
CNN-LSTM | 0.3819
CNN-DCNN | 0.3067

Table 3: CER and WER comparison on Yahoo and ArXiv data.
We employ Character Error Rate (CER) [58] and Word Error Rate (WER) [59] for evaluation. The WER/CER measure the ratio of the Levenshtein distance (a.k.a. edit distance) between model predictions and the ground-truth to the total length of the sequence. Conceptually, lower WER/CER indicates
better performance. We use LSTM-LSTM and CNN-LSTM denoising autoencoders for comparison.
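The metric can be sketched in a few lines (a minimal Levenshtein/CER implementation; names are ours):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance (substitution, insertion, deletion)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(prediction, truth):
    # Character Error Rate: edit distance normalized by reference length
    return levenshtein(prediction, truth) / len(truth)

print(cer("some wood books", "some good books"))  # 1 substitution over 15 chars
```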
The architecture for the word-level baseline models is the same as in the previous experiment. For
character-level correction, we set dimension of h to 900. We also compare to actor-critic training
[58], following their experimental guidelines (see details in the SM).
As shown in Figure 3 and Table 3, we observed CNN-DCNN achieves both lower CER and faster
convergence. Further, CNN-DCNN delivers stable denoising performance irrespective of the noise
location within the sentence, as seen in Figure 4. For CNN-DCNN, even when an error is detected
but not exactly corrected (darker colors in Figure 4 indicate higher uncertainty), denoising with future
words is not effected, while for CNN-LSTM and LSTM-LSTM the error gradually accumulates with
longer sequences, as expected.
For word-level correction, we consider word substitutions only, and mixed perturbations from three
kinds: substitution, deletion and insertion. Generally, CNN-DCNN outperforms CNN-LSTM and
LSTM-LSTM, and is faster. We provide experimental details and comparative results in the SM.
Semi-supervised sequence classification & summarization We investigate whether our CNN-DCNN framework can improve upon supervised natural language tasks that leverage features learned
from paragraphs. In principle, a good unsupervised feature extractor will improve the generalization ability in a semi-supervised learning setting. We evaluate our approach on three popular
natural language tasks: sentiment analysis, paragraph topic prediction and text summarization. The
first two tasks are essentially sequence classification, while summarization involves both language
comprehension and language generation.
We consider three large-scale document classification datasets: DBPedia, Yahoo! Answers and
Yelp Review Polarity [57]. The partition of training, validation and test sets for all datasets follows
the settings from [57]. The detailed summary statistics of all datasets are shown in the SM. To
demonstrate the advantage of incorporating the reconstruction objective into the training of text
classifiers, we further evaluate our model with different amounts of labeled data (0.1%, 0.15%, 0.25%,
1%, 10% and 100%, respectively), and the whole training set as unlabeled data.
For our purely supervised baseline model (supervised CNN), we use the same convolutional encoder
architecture described above, with a 500-dimensional latent representation, followed by
a MLP classifier with one hidden layer of 300 hidden units. The dropout rate is set to 50%. Word
embeddings are initialized at random.
As shown in Table 4, the joint training strategy consistently and significantly outperforms the purely
supervised strategy across datasets, even when all labels are available. We hypothesize that during the
early phase of training, when reconstruction is emphasized, features from text fragments can be readily
Model | DBpedia | Yelp P. | Yahoo
ngrams TFIDF | 1.31 | 4.56 | 31.49
Large Word ConvNet | 1.72 | 4.89 | 29.06
Small Word ConvNet | 1.85 | 5.54 | 30.02
Large Char ConvNet | 1.73 | 5.89 | 29.55
Small Char ConvNet | 1.98 | 6.53 | 29.84
SA-LSTM (word-level) | 1.40 | - | -
Deep ConvNet | 1.29 | 4.28 | 26.57
Ours (Purely supervised) | 1.76 | 4.62 | 27.42
Ours (joint training with CNN-LSTM) | 1.36 | 4.21 | 26.32
Ours (joint training with CNN-DCNN) | 1.17 | 3.96 | 25.82
learned. As the training proceeds, the most discriminative text fragment features are selected. Further,
the subset of features that are responsible for both reconstruction and discrimination presumably
encapsulate longer dependency structure, compared to the features using a purely supervised strategy.
Figure 5 demonstrates the behavior of our model in a semi-supervised setting on Yelp Review dataset.
The results for Yahoo! Answer and DBpedia are provided in the SM.
[Figure 5 plot: test accuracy (%) vs. proportion (%) of labeled data, from 0.1 to 100, for Supervised, Semi (CNN-DCNN) and Semi (CNN-LSTM).]
Table 4: Test error rates of document classification (%). Results from other methods were obtained from [57].

Figure 5: Semi-supervised classification accuracy on Yelp review data.
For summarization, we used a dataset composed of 58,000 abstract-title pairs, from arXiv. Abstract-title pairs are selected if the length of the title and abstract do not exceed 50 and 500 words, respectively. We partitioned the training, validation and test sets into 55000, 2000, 1000 pairs each. We train a sequence-to-sequence model to generate the title given the abstract, using a randomly selected subset of paired data with proportion ρ = (5%, 10%, 50%, 100%). For every value of ρ, we considered both purely supervised summarization using just abstract-title pairs, and semi-supervised summarization, by leveraging additional abstracts without titles. We compared LSTM and deconvolutional network as the decoder for generating titles for ρ = 100%.
Table 5 summarizes quantitative results using ROUGE-L (longest common subsequence) [55].

Obs. proportion ρ | 5% | 10% | 50% | 100% | DCNN dec.
Supervised | 12.40 | 13.07 | 15.87 | 16.37 | 14.75
Semi-sup. | 16.04 | 16.62 | 17.64 | 18.14 | 16.83

Table 5: Summarization task on arXiv data, using ROUGE-L metric. First 4 columns are for the LSTM decoder, and the last column is for the deconvolutional decoder (100% observed).

In general, the additional abstracts without titles improve the generalization ability on the test set. Interestingly, even when ρ = 100% (all titles are observed), the joint training objective
still yields a better performance than using Lsup alone. Presumably, since the joint training objective
requires the latent representation to be capable of reconstructing the input paragraph, in addition
to generating a title, the learned representation may better capture the entire structure (meaning) of
the paragraph. We also empirically observed that titles generated under the joint training objective
are more likely to use the words appearing in the corresponding paragraph (i.e., more extractive), while the titles generated using the purely supervised objective Lsup tend to use wording more
freely, thus more abstractive. One possible explanation is that, for the joint training strategy, since the
reconstructed paragraph and title are all generated from latent representation h, the text fragments
that are used for reconstructing the input paragraph are more likely to be leveraged when ?building?
the title, thus the title bears more resemblance to the input paragraph.
As expected, the titles produced by a deconvolutional decoder are less coherent than an LSTM
decoder. Presumably, since each paragraph can be summarized with multiple plausible titles, the
deconvolutional decoder may have trouble when positioning text segments. We provide discussions
and titles generated under different setups in the SM. Designing a framework which takes the best of
these two worlds, LSTM for generation and CNN for decoding, will be an interesting future direction.
5 Conclusion
We proposed a general framework for text modeling using purely convolutional and deconvolutional
operations. The proposed method is free of sequential conditional generation, avoiding issues
associated with exposure bias and teacher forcing training. Our approach enables the model to
fully encapsulate a paragraph into a latent representation vector, which can be decompressed to
reconstruct the original input sequence. Empirically, the proposed approach achieved excellent long
paragraph reconstruction quality and outperforms existing algorithms on spelling correction, and
semi-supervised sequence classification and summarization, with largely reduced computational cost.
Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA and ONR.
References
[1] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.
[2] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
[3] Rie Johnson and Tong Zhang. Supervised and Semi-Supervised Text Categorization using LSTM for
Region Embeddings. arXiv, February 2016.
[4] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial Training Methods for Semi-Supervised
Text Classification. In ICLR, May 2017.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning
to Align and Translate. In ICLR, 2015.
[6] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. In EMNLP, 2014.
[7] Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. Encoding source
language with convolutional neural network for machine translation. In ACL, 2015.
[8] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv,
2015.
[9] Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement
learning for dialogue generation. arXiv, 2016.
[10] Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue
generation. arXiv:1701.06547, 2017.
[11] Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos santos, Caglar Gulcehre, and Bing Xiang. Abstractive
Text Summarization Using Sequence-to-Sequence RNNs and Beyond. In CoNLL, 2016.
[12] Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B Cohen. Neural Extractive Summarization with Side Information. arXiv, April 2017.
[13] Alexander M Rush, Sumit Chopra, and Jason Weston. A Neural Attention Model for Abstractive Sentence
Summarization. In EMNLP, 2015.
[14] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In
NIPS, 2014.
[15] Tomas Mikolov, Martin Karafi?t, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. Recurrent neural
network based language model. In INTERSPEECH, 2010.
[16] Sepp Hochreiter and J?rgen Schmidhuber. Long short-term memory. In Neural computation, 1997.
[17] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated
recurrent neural networks on sequence modeling. arXiv, 2014.
[18] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural
networks. Neural computation, 1(2):270?280, 1989.
[19] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence
prediction with recurrent neural networks. In NIPS, 2015.
[20] Ferenc Husz?r. How (not) to train your generative model: Scheduled sampling, likelihood, adversary?
arXiv, 2015.
[21] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling
sentences. In ACL, 2014.
[22] Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
[23] Zhe Gan, Yunchen Pu, Henao Ricardo, Chunyuan Li, Xiaodong He, and Lawrence Carin. Learning generic
sentence representations using convolutional neural networks. In EMNLP, 2017.
9
[24] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and
Aaron Courville. Pixelvae: A latent variable model for natural images. arXiv, 2016.
[25] Yunchen Pu, Win Yuan, Andrew Stevens, Chunyuan Li, and Lawrence Carin. A deep generative deconvolutional image model. In Artificial Intelligence and Statistics, pages 741?750, 2016.
[26] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. arXiv, 2015.
[27] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML,
pages 807?814, 2010.
[28] Ian Chiswell and Wilfrid Hodges. Mathematical logic, volume 3. OUP Oxford, 2007.
[29] Emil Julius Gumbel and Julius Lieblein. Statistical theory of extreme values and some practical applications:
a series of lectures. 1954.
[30] Yunchen Pu, Xin Yuan, and Lawrence Carin. A generative model for deep convolutional learning. arXiv
preprint arXiv:1504.04054, 2015.
[31] Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin.
Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[32] Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial
feature matching for text generation. In ICML, 2017.
[33] Zhe Gan, Liqun Chen, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, and Lawrence
Carin. Triangle generative adversarial networks. arXiv preprint arXiv:1709.06548, 2017.
[34] Ronan Collobert, Jason Weston, L?on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.
Natural language processing (almost) from scratch. In JMLR, 2011.
[35] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and
Evan Shelhamer. cudnn: Efficient primitives for deep learning. arXiv, 2014.
[36] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. Hierarchical attention
networks for document classification. In NAACL, 2016.
[37] Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A Recurrent Neural Network
with Long-Range Semantic Dependency. In ICLR, 2016.
[38] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised
learning with deep generative models. In NIPS, 2014.
[39] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and J?rgen Schmidhuber. Gradient flow in recurrent
nets: the difficulty of learning long-term dependencies, 2001.
[40] Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. Semisupervised recursive autoencoders for predicting sentiment distributions. In EMNLP. Association for
Computational Linguistics, 2011.
[41] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio.
Generating sentences from a continuous space. arXiv, 2015.
[42] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved Variational
Autoencoders for Text Modeling using Dilated Convolutions. arXiv, February 2017.
[43] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architectures for
matching natural language sentences. In NIPS, 2014.
[44] Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural
networks. In NAACL HLT, 2015.
[45] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity:
The all convolutional net. arXiv, 2014.
[46] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
10
[47] Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. A Hybrid Convolutional Variational Autoencoder for Text Generation. arXiv, February 2017.
[48] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray
Kavukcuoglu. Neural machine translation in linear time. arXiv, 2016.
[49] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language Modeling with Gated
Convolutional Networks. arXiv, December 2016.
[50] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional Sequence to Sequence
Learning. arXiv, May 2017.
[51] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional
image generation with pixelcnn decoders. In NIPS, pages 4790?4798, 2016.
[52] Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and
documents. In ACL, 2015.
[53] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of
words and phrases and their compositionality. In NIPS, 2013.
[54] Kam-Fai Wong, Mingli Wu, and Wenjie Li. Extractive summarization using supervised and semi-supervised
learning. In ICCL. Association for Computational Linguistics, 2008.
[55] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In ACL workshop, 2004.
[56] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation
of machine translation. In ACL. Association for Computational Linguistics, 2002.
[57] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification.
In NIPS, pages 649?657, 2015.
[58] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron
Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv, 2016.
[59] JP Woodard and JT Nelson. An information theoretic measure of speech recognition performance. In
Workshop on standardisation for speech I/O, 1982.
11
Wojciech Kotłowski
Poznań University of Technology
Poland
[email protected]
Wouter M. Koolen
Centrum Wiskunde & Informatica
Amsterdam, The Netherlands
[email protected]
Alan Malek
MIT
Cambridge, MA
[email protected]
Abstract
We revisit isotonic regression on linear orders, the problem of fitting monotonic
functions to best explain the data, in an online setting. It was previously shown that
online isotonic regression is unlearnable in a fully adversarial model, which led to
its study in the fixed design model. Here, we instead develop the more practical
random permutation model. We show that the regret is bounded above by the
excess leave-one-out loss, for which we develop efficient algorithms and matching
lower bounds. We also analyze the class of simple and popular forward algorithms
and recommend where to look for algorithms for online isotonic regression on
partial orders.
1 Introduction
A function $f : \mathbb{R} \to \mathbb{R}$ is called isotonic (non-decreasing) if $x \le y$ implies $f(x) \le f(y)$. Isotonic
functions model monotonic relationships between input and output variables, like those between
drug dose and response [25] or lymph node condition and survival time [24]. The problem of
isotonic regression is to find the isotonic function that best explains a given data set or population
distribution. The isotonic regression problem has been extensively studied in statistics [1, 24], which
resulted in efficient optimization algorithms for fitting isotonic functions to the data [7, 16] and sharp
convergence rates of estimation under various model assumptions [26, 29].
In online learning problems, the data arrive sequentially, and the learner is tasked with predicting
each subsequent data point as it arrives [6]. In online isotonic regression, the natural goal is to predict
the incoming data points as well as the best isotonic function in hindsight. Specifically, for time steps
t = 1, . . . , T , the learner observes an instance xi ? R, makes a prediction ybi of the true label yi ,
which is assumed to lie in [0, 1]. There is no restriction that the labels or predictions be isotonic. We
evaluate a prediction ybi by its squared loss (b
yi ? yi )2 . The quality of an algorithm is measured by its
PT
yi ? yi )2 ? L?T , where L?T is the loss of the best isotonic function on the entire data
regret, t=1 (b
sequence.
Isotonic regression is nonparametric: the number of parameters grows linearly with the number of
data points. It is thus natural to ask whether there are efficient, provably low regret algorithms for
online isotonic regression. As of yet, the picture is still very incomplete in the online setting. The
first online results were obtained in the recent paper [14] which considered linearly ordered domains
in the adversarial fixed design model, i.e. a model in which all the inputs x1 , . . . , xT are given to the
learner before the start of prediction. The authors show that, due to the nonparametric nature of the
problem, many textbook online learning algorithms fail to learn at all (including Online Gradient
Descent, Follow the Leader and Exponential Weights) in the sense that their worst-case regret grows
linearly with the number of data points. They prove a $\Omega(T^{1/3})$ worst-case regret lower bound, and develop a matching algorithm that achieves the optimal $\widetilde{O}(T^{1/3})$ regret. Unfortunately, the fixed design assumption is often unrealistic. This leads us to our main question: Can we design methods for online isotonic regression that are practical (do not hinge on fixed design)?
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Our contributions. Our long-term goal is to design practical and efficient methods for online
isotonic regression, and in this work we move beyond the fixed design model and study algorithms
that do not depend on future instances. Unfortunately, the completely adversarial design model (in
which the instances are selected by an adaptive adversary) is impossibly hard: every learner can
suffer linear regret in this model [14]. So in order to drop the fixed design assumption, we need to
constrain the adversary in some other way. In this paper we consider the natural random permutation
model, in which all T instances and labels are chosen adversarially before the game begins but then
are presented to the learner in a random order.
This model corresponds with the intuition that the data gathering process (which fixes the order) is
independent of the underlying data generation mechanism (which fixes instances and labels). We
will show that learning is possible in the random permutation model (in fact we present a reduction
showing that it is not harder than adversarial fixed design) by proving an $\widetilde{O}(T^{1/3})$ upper bound on regret for an online-to-batch conversion of the optimal fixed design algorithm from [14] (Section 3).
Our main tool for analyzing the random permutation model is the leave-one-out loss, drawing
interesting connections with cross-validation and calibration. The leave-one-out loss on a set of t
labeled instances is the error of the learner predicting the $i$-th label after seeing all remaining $t - 1$ labels, averaged uniformly over $i = 1, \ldots, t$. We begin by proving a general correspondence between
regret and leave-one-out loss for the random permutation model in Section 2.1, which allows us to
use excess leave-one-out loss as a proxy for regret. We then describe a version of online-to-batch
conversion that relates the fixed design model with the random permutation model, resulting in an algorithm that attains the optimal $\widetilde{O}(T^{1/3})$ regret.
Section 4 then turns to the computationally efficient and natural class of forward algorithms that
use an offline optimization oracle to form their prediction. This class contains most common online
isotonic regression algorithms. We then show an $O(T^{1/2})$ upper bound on the regret for the entire class, which improves to $O(T^{1/3})$ for the well-specified case where the data are in fact generated from an isotonic function plus i.i.d. noise (the most common model in the statistics literature).

While forward algorithms match the lower bound for the well-specified case, there is a factor $T^{1/6}$ gap in the random permutation case.
oracle with a large weight on the current instance. This algorithm can be efficiently computed via
[16]. We prove necessary bounds on the weight.
Related work. Offline isotonic regression has been extensively studied in statistics starting from
work by [1, 4]. Applications range across statistics, biology, medicine, psychology, etc. [24, 15, 25,
22, 17]. In statistics, isotonic regression is studied in generative models [26, 3, 29]. In machine
learning, isotonic regression is used for calibrating class probability estimates [28, 21, 18, 20, 27],
ROC analysis [8], training Generalized Linear Models and Single Index Models[12, 11], data cleaning
[13], and ranking [19]. Fast algorithms for partial ordering are developed in [16].
In the online setting, [5] bound the minimax regret for monotone predictors under logarithmic loss
and [23, 10] study online nonparametric regression in general. Efficient algorithms and worst-case
regret bounds for fixed design online isotonic regression are studied in [14]. Finally, the relation
between regret and leave-one-out loss was pioneered by [9] for linear regression.
2 Problem Setup
Given a finite set of instances $\{x_1, \ldots, x_t\} \subset \mathbb{R}$, a function $f : \{x_1, \ldots, x_t\} \to [0, 1]$ is isotonic (non-decreasing) if $x_i \le x_j$ implies $f(x_i) \le f(x_j)$ for all $i, j \in \{1, \ldots, t\}$. Given a set of labeled instances $D = \{(x_1, y_1), \ldots, (x_t, y_t)\} \subset \mathbb{R} \times [0, 1]$, let $L^*(D)$ denote the total squared loss of the best isotonic function on $D$,
$$L^*(D) := \min_{\text{isotonic } f} \; \sum_{i=1}^{t} (y_i - f(x_i))^2.$$
This convex optimization problem can be solved by the celebrated Pool Adjacent Violators Algorithm
(PAVA) in time linear in t [1, 7]. The optimal solution, called the isotonic regression function, is
piecewise constant and its value on any of its level sets equals the average of labels within that set
[24].
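The PAVA computation referenced above can be sketched in a few lines. The following is a minimal illustration (not the implementation from [1, 7]): it maintains blocks of pooled labels and merges adjacent blocks whenever their means violate monotonicity, which yields the piecewise-constant isotonic regression function and hence $L^*(D)$.

```python
def pava(y):
    """Pool Adjacent Violators: isotonic regression under squared loss
    of labels y, given in increasing order of their instances x.
    Returns the fitted non-decreasing values (y_1^*, ..., y_t^*)."""
    blocks = []  # each block: [sum of pooled labels, count]
    for v in y:
        blocks.append([float(v), 1])
        # While the last two block means violate monotonicity, pool them;
        # the comparison cross-multiplies to compare means without division.
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Each level set takes the average of the labels pooled into it.
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

def isotonic_loss(y):
    """L*(D): total squared loss of the isotonic regression function."""
    return sum((yi - fi) ** 2 for yi, fi in zip(y, pava(y)))
```

For example, `pava([2, 1])` pools the violating pair into a single level set with mean 1.5, so `isotonic_loss([2, 1])` equals 0.5.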
Online isotonic regression in the random permutation model is defined as follows. At the beginning of the game, the adversary chooses data instances $x_1 < \ldots < x_T$¹ and labels $y_1, \ldots, y_T$. A permutation $\sigma = (\sigma_1, \ldots, \sigma_T)$ of $\{1, \ldots, T\}$ is then drawn uniformly at random and used to determine the order in which the data will be revealed. In round $t$, the instance $x_{\sigma_t}$ is revealed to the learner who then predicts $\hat{y}_{\sigma_t}$. Next, the learner observes the true label $y_{\sigma_t}$ and incurs the squared loss $(\hat{y}_{\sigma_t} - y_{\sigma_t})^2$.
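The protocol can be sketched as a short simulation loop. A hypothetical `Learner` interface with `predict` and `observe` methods is assumed here purely for illustration; it is not part of the paper.

```python
import random

def run_random_permutation_game(xs, ys, learner):
    """One run of the random-permutation protocol described above.

    xs, ys : the adversary's instances and labels (labels in [0, 1]),
             fixed before the game begins.
    learner: any object with predict(x) -> y_hat and observe(x, y);
             this interface is an assumption made for this sketch.
    Returns the learner's total squared loss over the T rounds.
    """
    order = list(range(len(xs)))
    random.shuffle(order)                 # sigma drawn uniformly at random
    total_loss = 0.0
    for i in order:
        y_hat = learner.predict(xs[i])    # learner sees x_{sigma_t}, predicts
        total_loss += (y_hat - ys[i]) ** 2
        learner.observe(xs[i], ys[i])     # true label y_{sigma_t} is revealed
    return total_loss
```

Averaging `total_loss` over many runs estimates the expectation over $\sigma$ that appears in the regret definition below.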
For a fixed permutation $\sigma$, we use the shorthand notation $L_t^* = L^*(\{(x_{\sigma_1}, y_{\sigma_1}), \ldots, (x_{\sigma_t}, y_{\sigma_t})\})$ to denote the optimal isotonic regression loss of the first $t$ labeled instances ($L_t^*$ will clearly depend on $\sigma$, except for the case $t = T$). The goal of the learner is to minimize the expected regret,
$$R_T := \mathbb{E}_\sigma\bigg[\sum_{t=1}^{T} (y_{\sigma_t} - \hat{y}_{\sigma_t})^2\bigg] - L_T^* = \sum_{t=1}^{T} r_t,$$
where we have decomposed the regret into its per-round increase,
$$r_t := \mathbb{E}_\sigma\big[(y_{\sigma_t} - \hat{y}_{\sigma_t})^2 - L_t^* + L_{t-1}^*\big], \qquad (1)$$
with $L_0^* := 0$. To simplify the analysis, let us assume that the prediction strategy does not depend on the order in which the past data were revealed (which is true for all algorithms considered
in this paper). Fix $t$ and define $D = \{(x_{\sigma_1}, y_{\sigma_1}), \ldots, (x_{\sigma_t}, y_{\sigma_t})\}$ to be the set of first $t$ labeled instances. Furthermore, let $D_{-i} = D \setminus \{(x_{\sigma_i}, y_{\sigma_i})\}$ denote the set $D$ with the instance from round $i$ removed. Using this notation, the expression under the expectation in (1) can be written as $\big(y_{\sigma_t} - \hat{y}_{\sigma_t}(D_{-t})\big)^2 - L^*(D) + L^*(D_{-t})$, where we made the dependence of $\hat{y}_{\sigma_t}$ on $D_{-t}$ explicit (and used the fact that it only depends on the set of instances, not on their order). By symmetry of the expectation over permutations with respect to the indices, we have
$$\mathbb{E}_\sigma\Big[\big(y_{\sigma_t} - \hat{y}_{\sigma_t}(D_{-t})\big)^2\Big] = \mathbb{E}_\sigma\Big[\big(y_{\sigma_i} - \hat{y}_{\sigma_i}(D_{-i})\big)^2\Big], \quad \text{and} \quad \mathbb{E}_\sigma\big[L^*(D_{-t})\big] = \mathbb{E}_\sigma\big[L^*(D_{-i})\big],$$
for all $i = 1, \ldots, t$. Thus, (1) can as well be rewritten as:
$$r_t = \mathbb{E}_\sigma\bigg[\frac{1}{t}\sum_{i=1}^{t} \Big(\big(y_{\sigma_i} - \hat{y}_{\sigma_i}(D_{-i})\big)^2 + L^*(D_{-i}) - L^*(D)\Big)\bigg].$$
Let us denote the expression inside the expectation by $r_t(D)$ to stress its dependence on the set of instances $D$, but not on the order in which they were revealed. If we can show that $r_t(D) \le B_t$ holds for all $t$, then its expectation has the same bound, so $R_T \le \sum_{t=1}^{T} B_t$.
2.1 Excess Leave-One-Out Loss and Regret
Our main tool for analyzing the random permutation model is the leave-one-out loss. In the
leave-one-out model, there is no sequential structure. The adversary picks a data set $D = \{(x_1, y_1), \ldots, (x_t, y_t)\}$ with $x_1 < \ldots < x_t$. An index $i$ is sampled uniformly at random, the learner is given $D_{-i}$, the entire data set except $(x_i, y_i)$, and predicts $\hat{y}_i$ (as a function of $D_{-i}$) on instance $x_i$. We call the difference between the expected loss of the learner and $L^*(D)$ the expected excess leave-one-out loss:
$$\mathrm{loo}_t(D) := \frac{1}{t}\bigg(\sum_{i=1}^{t} \big(y_i - \hat{y}_i(D_{-i})\big)^2 - L^*(D)\bigg). \qquad (2)$$
The random permutation model has the important property that the bound on the excess leave-one-out
loss of a prediction algorithm translates into a regret bound. A similar result has been shown by [9]
for expected loss in the i.i.d. setting.
Lemma 2.1. $r_t(D) \le \mathrm{loo}_t(D)$ for any $t$ and any data set $D = \{(x_1, y_1), \ldots, (x_t, y_t)\}$.

Proof. As $x_1 < \ldots < x_t$, let $(y_1^*, \ldots, y_t^*) = \operatorname{argmin}_{f_1 \le \ldots \le f_t} \sum_{i=1}^{t} (y_i - f_i)^2$ be the isotonic regression function on $D$. From the definition of $L^*$, we can see that $L^*(D) = \sum_{i=1}^{t} (y_i^* - y_i)^2 \ge L^*(D_{-i}) + (y_i - y_i^*)^2$. Thus, the regret increase $r_t(D)$ is bounded by
$$r_t(D) = \frac{1}{t}\sum_{i=1}^{t} \big((y_i - \hat{y}_i)^2 + L^*(D_{-i}) - L^*(D)\big) \le \frac{1}{t}\sum_{i=1}^{t} \big((y_i - \hat{y}_i)^2 - (y_i - y_i^*)^2\big) = \mathrm{loo}_t(D).$$
¹ We assume all points $x_t$ are distinct as it will significantly simplify the presentation. All results in this paper are also valid for the case $x_1 \le \ldots \le x_T$.
However, we note that lower bounds for $\mathrm{loo}_t(D)$ do not imply lower bounds on regret.

In what follows, our strategy will be to derive bounds $\mathrm{loo}_t(D) \le B_t$ for various algorithms, from which the regret bound $R_T \le \sum_{t=1}^{T} B_t$ can be immediately obtained. From now on, we abbreviate $\mathrm{loo}_t(D)$ to $\mathrm{loo}_t$ (as $D$ is clear from the context); we will also consistently assume $x_1 < \ldots < x_t$.
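The key inequality in the proof of Lemma 2.1, $L^*(D) \ge L^*(D_{-i}) + (y_i - y_i^*)^2$, can be checked numerically. The sketch below is an illustration only: `pava` is a compact re-implementation of the Pool Adjacent Violators routine, and the labels are arbitrary values chosen for the example.

```python
def pava(y):
    """Compact Pool Adjacent Violators routine (illustration only):
    isotonic regression of y (ordered by x) under squared loss."""
    blocks = []  # each block: [sum of pooled labels, count]
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the last two block means violate monotonicity.
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

def L_star(y):
    """L*(D): squared loss of the best isotonic fit."""
    return sum((a - b) ** 2 for a, b in zip(y, pava(y)))

y = [0.9, 0.1, 0.6, 0.3, 0.8]      # arbitrary labels, x_1 < ... < x_5
y_star = pava(y)
for i in range(len(y)):
    # L*(D) >= L*(D_{-i}) + (y_i - y_i^*)^2
    assert L_star(y) + 1e-12 >= L_star(y[:i] + y[i + 1:]) + (y[i] - y_star[i]) ** 2
```

The inequality holds because dropping the $i$-th term from $\sum_j (y_j^* - y_j)^2$ leaves a feasible isotonic fit of $D_{-i}$, whose loss can only exceed $L^*(D_{-i})$.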
2.2 Noise-free case
As a warm-up, we analyze the noise-free case (when the labels themselves are isotonic) and demonstrate that analyzing `oot easily results in an optimal bound for this setting.
Proposition 2.2. Assume that the labels satisfy y_1 ≤ y_2 ≤ ... ≤ y_t. The prediction ŷ_i that is the
linear interpolation between adjacent labels, ŷ_i = (1/2)(y_{i−1} + y_{i+1}), has

    ℓoo_t ≤ 1/(2t), and thus R_T ≤ (1/2) log(T + 1).
Proof. For Δ_i := y_i − y_{i−1}, it is easy to check that ℓoo_t = (1/(4t)) Σ_{i=1}^t (Δ_{i+1} − Δ_i)^2, because the
L^*(D) term is zero. This expression is a convex function of Δ_1, ..., Δ_{t+1}. Note that Δ_i ≥ 0 for each
i = 1, ..., t + 1, and Σ_{i=1}^{t+1} Δ_i = 1. Since the maximum of a convex function is at the boundary of
the feasible region, the maximizer is given by Δ_i = 1 for some i ∈ {1, ..., t + 1}, and Δ_j = 0 for all
j ∈ {1, ..., t + 1}, j ≠ i. This implies that ℓoo_t ≤ (2t)^{−1}.
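To make the 1/(2t) bound concrete, here is a small numeric check of the interpolation predictor. We assume the boundary conventions y_0 = 0 and y_{t+1} = 1 (consistent with the proof's constraint Σ_{i=1}^{t+1} Δ_i = 1); since the labels are already isotonic, L^*(D) = 0 and the excess loss is just the mean squared error of the predictor:

```python
def interp_loo(y):
    """Excess leave-one-out loss of the interpolation predictor
    yhat_i = (y_{i-1} + y_{i+1}) / 2 on isotonic labels y in [0, 1].
    Uses boundary conventions y_0 = 0 and y_{t+1} = 1; since y is
    already isotonic, L*(D) = 0, so loo is the mean squared error."""
    t = len(y)
    padded = [0.0] + list(y) + [1.0]
    sq = 0.0
    for i in range(1, t + 1):
        pred = 0.5 * (padded[i - 1] + padded[i + 1])
        sq += (padded[i] - pred) ** 2
    return sq / t
```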
2.3 General Lower Bound
In [14], a general lower bound was derived showing that the regret of any online isotonic regression
procedure is at least Ω(T^{1/3}) for the adversarial setup (when labels and the index order were chosen
adversarially). This lower bound applies regardless of the order of outcomes, and hence it is also a
lower bound for the random permutation model. This bound translates into ℓoo_t = Ω(t^{−2/3}).
3 Online-to-batch for fixed design
Here, we describe an online-to-batch conversion that relates the adversarial fixed design model with
the random permutation model considered in this paper. In the fixed design model with time horizon
T_fd the learner is given the points x_1, ..., x_{T_fd} in advance (which is not the case in the random
permutation model), but the adversary chooses the order σ in which the labels are revealed (as
opposed to σ being drawn at random). We can think of an algorithm for fixed design as a prediction
function

    ŷ^fd( x_{σ_t} | y_{σ_1}, ..., y_{σ_{t−1}}, {x_1, ..., x_{T_fd}} ),

for any order σ, any set {x_1, ..., x_{T_fd}} (and hence any time horizon T_fd), and any time t. This
notation is quite heavy, but makes it explicit that the learner, while predicting at point x_{σ_t}, knows the
previously revealed labels and the whole set of instances.
In the random permutation model, at trial t, the learner only knows the previously revealed t − 1
labeled instances and predicts on the new instance. Without loss of generality, denote the past
instances by D_{−i} = {(x_1, y_1), ..., (x_{i−1}, y_{i−1}), (x_{i+1}, y_{i+1}), ..., (x_t, y_t)}, and the new instance by
x_i, for some i ∈ {1, ..., t}. Given an algorithm for fixed design ŷ^fd, we construct a prediction
ŷ_t = ŷ_t(D_{−i}, x_i) of the algorithm in the random permutation model. The reduction goes through an
online-to-batch conversion. Specifically, at trial t, given past labeled instances D_{−i}, and a new point
x_i, the learner plays the expectation of the prediction of the fixed design algorithm with time horizon
T_fd = t and points {x_1, ..., x_t} under a uniformly random time from the past j ∈ {1, ..., t} and a
random permutation σ on {1, ..., t}, with σ_t = i, i.e.²

    ŷ_t = E_{σ : σ_t = i} [ (1/t) Σ_{j=1}^t ŷ^fd( x_i | y_{σ_1}, ..., y_{σ_{j−1}}, {x_1, ..., x_t} ) ].    (3)
² Choosing the prediction as an expectation is elegant but inefficient. However, the proof indicates that we
might as well sample a single j and a single random permutation σ to form the prediction and the reduction
would also work in expectation.
Note that this is a valid construction, as the right hand side only depends on D_{−i} and x_i, which are
known to the learner in a random permutation model at round t. We prove (in Appendix A) that the
excess leave-one-out loss of ŷ at trial t is upper bounded by the expected regret (over all permutations)
of ŷ^fd in trials 1, ..., t divided by t:

Theorem 3.1. Let D = {(x_1, y_1), ..., (x_t, y_t)} be a set of t labeled instances. Fix any algorithm
ŷ^fd for online adversarial isotonic regression with fixed design, and let Reg_t(ŷ^fd | σ) denote its regret
on D when the labels are revealed in order σ. The random permutation learner ŷ from (3) ensures
ℓoo_t(D) ≤ (1/t) E_σ[Reg_t(ŷ^fd | σ)].
This construction allows immediate transport of the Õ(T^{1/3}) fixed design regret result from [14].
Theorem 3.2. There is an algorithm for the random-permutation model with excess leave-one-out
loss ℓoo_t = Õ(t^{−2/3}), and hence expected regret R_T ≤ Σ_t Õ(t^{−2/3}) = Õ(T^{1/3}).

4 Forward Algorithms
For clarity of presentation, we use vector notation in this section: y = (y_1, ..., y_t) is the label vector,
y^* = (y_1^*, ..., y_t^*) is the isotonic regression function, and y_{−i} = (y_1, ..., y_{i−1}, y_{i+1}, ..., y_t) is y
with the i-th label removed. Moreover, keeping in mind that x_1 < ... < x_t, we can drop the x_i's entirely
from the notation and refer to an instance x_i simply by its index i.
Given labels y_{−i} and some index i to predict on, we want a good prediction for y_i. Follow the Leader
(FL) algorithms, which predict using the best isotonic function on the data seen so far, are not directly
applicable to online isotonic regression: the best isotonic function is only defined at the observed
data instances and can be arbitrary (up to the monotonicity constraint) otherwise. Instead, we analyze
a simple and natural class of algorithms which we dub forward algorithms³. We define a forward
algorithm, or FA, to be any algorithm that estimates a label y'_i ∈ [0, 1] (possibly dependent on i and
y_{−i}), and plays with the FL strategy on the sequence of past data including the new instance with the
estimated label, i.e. performs offline isotonic regression on y',

    ŷ = argmin_{f_1 ≤ ... ≤ f_t} Σ_{j=1}^t (y'_j − f_j)^2,    where y' = (y_1, ..., y_{i−1}, y'_i, y_{i+1}, ..., y_t).
Then, FA predicts with ŷ_i, the value at index i of the offline isotonic regression function of the
augmented data. Note that if the estimate turned out to be correct (y'_i = y_i), the forward algorithm
would suffer no additional loss for that round.
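A generic forward algorithm is only a few lines on top of any offline isotonic regression routine. The sketch below (with our own helper names) inserts the estimate, solves the offline problem with PAVA, and plays the fitted value; the test checks numerically that any estimate's prediction lies between the predictions under estimates 0 and 1, which follows from monotonicity of isotonic regression in the labels:

```python
def pava(y):
    """Offline isotonic regression via pool adjacent violators."""
    blocks = []  # each block: [pooled mean, count]
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out += [m] * c
    return out


def forward_predict(y_rest, i, estimate):
    """Forward algorithm: insert the label estimate at position i (0-based),
    run offline isotonic regression, and play the fitted value at i."""
    y_aug = y_rest[:i] + [estimate] + y_rest[i:]
    return pava(y_aug)[i]
```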
Forward algorithms are practically important: we will show that many popular algorithms can be cast
as FAs with a particular estimate. FAs automatically inherit any computational advances for offline
isotonic regression; in particular, they scale efficiently to partially ordered data [16]. To our best
knowledge, we are the first to give bounds on the performance of these algorithms in the online setting.
Alternative formulation   We can describe a FA using a minimax representation of the isotonic
regression [see, e.g., 24]: the optimal isotonic regression y^* satisfies

    y_i^* = min_{r ≥ i} max_{ℓ ≤ i} y_{ℓ,r} = max_{ℓ ≤ i} min_{r ≥ i} y_{ℓ,r},    (4)

where y_{ℓ,r} = (Σ_{j=ℓ}^r y_j) / (r − ℓ + 1). The "saddle point" (ℓ_i, r_i) for which y_i^* = y_{ℓ_i,r_i}
specifies the boundaries of the level set {j : y_j^* = y_i^*} of the isotonic regression function that contains i.
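The two sides of (4) can be checked by brute force on small examples. A quadratic-time sketch, purely illustrative (helper names are ours):

```python
def avg(y, l, r):
    """Mean of y over the inclusive, 0-based window [l, r]."""
    return sum(y[l:r + 1]) / (r - l + 1)


def minmax(y, i):
    """min over r >= i of max over l <= i of the window average."""
    return min(max(avg(y, l, r) for l in range(i + 1)) for r in range(i, len(y)))


def maxmin(y, i):
    """max over l <= i of min over r >= i of the window average."""
    return max(min(avg(y, l, r) for r in range(i, len(y))) for l in range(i + 1))
```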
It follows from (4) that isotonic regression is monotonic with respect to labels: for any two label
sequences y and z such that y_i ≤ z_i for all i, we have y_i^* ≤ z_i^* for all i. Thus, if we denote the
predictions for label estimates y'_i = 0 and y'_i = 1 by ŷ_i^0 and ŷ_i^1, respectively, the monotonicity implies
that any FA has ŷ_i^0 ≤ ŷ_i ≤ ŷ_i^1. Conversely, using the continuity of isotonic regression y^* as a
function of y (which follows from (4)), we can show that for any prediction ŷ_i with ŷ_i^0 ≤ ŷ_i ≤ ŷ_i^1,
there exists an estimate y'_i ∈ [0, 1] that could generate this prediction. Hence, we can equivalently
interpret FA as an algorithm which in each trial predicts with some ŷ_i in the range [ŷ_i^0, ŷ_i^1].
³ The name highlights resemblance to the Forward algorithm introduced by [2] for exponential family models.
4.1 Instances
With the above equivalence between forward algorithms and algorithms that predict in [ŷ_i^0, ŷ_i^1], we
can show that many of the well-known isotonic regression algorithms are forward algorithms and
thereby add weight to our next section, where we prove regret bounds for the entire class.
Isotonic regression with interpolation (IR-Int) [28]   Given y_{−i} and index i, the algorithm first
computes f^*, the isotonic regression of y_{−i}, and then predicts with ŷ_i^int = (1/2)(f^*_{i−1} + f^*_{i+1}), where
we used f^*_0 = 0 and f^*_{t+1} = 1. To see that this is a FA, note that if we use estimate y'_i = ŷ_i^int, the
isotonic regression of y' = (y_1, ..., y_{i−1}, y'_i, y_{i+1}, ..., y_t) is ŷ = (f^*_1, ..., f^*_{i−1}, y'_i, f^*_{i+1}, ..., f^*_t).
This is because: i) ŷ is isotonic by construction; ii) f^* has the smallest squared error loss for y_{−i}
among isotonic functions; and iii) the loss of ŷ on point y'_i is zero, and the loss of ŷ on all other
points is equal to the loss of f^*.
Direct combination of ŷ_i^0 and ŷ_i^1.   It is clear from Section 4 that any algorithm that predicts
ŷ_i = α_i ŷ_i^0 + (1 − α_i) ŷ_i^1 for some α_i ∈ [0, 1] is a FA. The weight α_i can be set to a constant (e.g.,
α_i = 1/2), or can be chosen depending on ŷ_i^1 and ŷ_i^0. Such algorithms were considered by [27]:

    log-IVAP: ŷ_i^log = ŷ_i^1 / (ŷ_i^1 + 1 − ŷ_i^0),    Brier-IVAP: ŷ_i^Brier = (1 + (ŷ_i^0)^2 − (1 − ŷ_i^1)^2) / 2.

It is straightforward to show that both algorithms satisfy ŷ_i^0 ≤ ŷ_i ≤ ŷ_i^1 and are thus instances of FA.
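Both combinations can be sketched on top of the (ŷ_i^0, ŷ_i^1) bracket. A self-contained toy, with our own helper names; the test confirms numerically that both predictions stay inside the bracket:

```python
def pava(y):
    """Offline isotonic regression via pool adjacent violators."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out += [m] * c
    return out


def bracket(y_rest, i):
    """Forward-algorithm predictions under label estimates 0 and 1."""
    y0 = pava(y_rest[:i] + [0.0] + y_rest[i:])[i]
    y1 = pava(y_rest[:i] + [1.0] + y_rest[i:])[i]
    return y0, y1


def log_ivap(y0, y1):
    return y1 / (y1 + 1.0 - y0)


def brier_ivap(y0, y1):
    return (1.0 + y0 ** 2 - (1.0 - y1) ** 2) / 2.0
```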
Last-step minimax (LSM).   LSM plays the minimax strategy with one round remaining,

    ŷ_i = argmin_{ŷ ∈ [0,1]} max_{y_i ∈ [0,1]} { (ŷ − y_i)^2 − L^*(y) },

where L^*(y) is the isotonic regression loss on y. Define L^*_b = L^*(y_1, ..., y_{i−1}, b, y_{i+1}, ..., y_t)
for b ∈ {0, 1}, i.e. L^*_b is the loss of the isotonic regression function with label estimate y'_i = b. In
Appendix B we show that ŷ_i = (1 + L^*_0 − L^*_1) / 2 and that it is also an instance of FA.
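The LSM prediction needs only the two offline losses L^*_0 and L^*_1. A sketch (our naming), with a sanity check that the closed form equalizes the regret against the two binary worst cases y_i = 0 and y_i = 1:

```python
def pava(y):
    """Offline isotonic regression via pool adjacent violators."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out += [m] * c
    return out


def isotonic_loss(y):
    """L*(y): squared loss of the offline isotonic fit."""
    return sum((a - b) ** 2 for a, b in zip(y, pava(y)))


def last_step_minimax(y_rest, i):
    """LSM prediction (1 + L*_0 - L*_1) / 2 at position i (0-based)."""
    l0 = isotonic_loss(y_rest[:i] + [0.0] + y_rest[i:])
    l1 = isotonic_loss(y_rest[:i] + [1.0] + y_rest[i:])
    return (1.0 + l0 - l1) / 2.0
```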
4.2 Bounding the leave-one-out loss
We now give a O(√(log t / t)) bound on the leave-one-out loss for forward algorithms. Interestingly, the
bound holds no matter what label estimate the algorithm uses. The proof relies on the stability of
isotonic regression with respect to a change of a single label. While the bound looks suboptimal in
light of Section 2.3, we will argue in Section 4.5 that the bound is actually tight (up to a logarithmic
factor) for one FA, and experimentally verify that all other mentioned forward algorithms also have a
tight lower bound of that form for the same sequence of outcomes.
We will bound ℓoo_t by defining Δ_i = ŷ_i − y_i^* and using the following simple inequality:

    ℓoo_t = (1/t) Σ_{i=1}^t ( (ŷ_i − y_i)^2 − (y_i^* − y_i)^2 ) = (1/t) Σ_{i=1}^t (ŷ_i − y_i^*)(ŷ_i + y_i^* − 2y_i) ≤ (2/t) Σ_{i=1}^t |Δ_i|.

Theorem 4.1. Any forward algorithm has ℓoo_t = O(√(log t / t)).
Proof. Fix some forward algorithm. For any i, let {j : y_j^* = y_i^*} = {ℓ_i, ..., r_i}, for some ℓ_i ≤ i ≤ r_i,
be the level set of isotonic regression at level y_i^*. We need the stronger version of the minimax
representation, shown in Appendix C:

    y_i^* = min_{r ≥ i} y_{ℓ_i,r} = max_{ℓ ≤ i} y_{ℓ,r_i}.    (5)

We partition the points {1, ..., t} into K consecutive segments: S_k = { i : y_i^* ∈ [(k−1)/K, k/K) } for
k = 1, ..., K − 1, and S_K = { i : y_i^* ≥ (K−1)/K }. Due to monotonicity of y^*, the S_k are subsets of the
form {ℓ_k, ..., r_k} (where we use r_k = ℓ_k − 1 if S_k is empty). From the definition, every level set
of y^* is contained in S_k for some k, and each ℓ_k (r_k) is a left-end (right-end) of some level set.

Now, choose some index i, and let S_k be such that i ∈ S_k. Let y'_i be the estimate of the FA, and let
y' = (y_1, ..., y_{i−1}, y'_i, y_{i+1}, ..., y_t). The minimax representation (4) and the definition of FA imply

    ŷ_i = max_{ℓ ≤ i} min_{r ≥ i} y'_{ℓ,r} ≥ min_{r ≥ i} y'_{ℓ_k,r} = min_{r ≥ i} { y_{ℓ_k,r} + (y'_i − y_i)/(r − ℓ_k + 1) }
        ≥ min_{r ≥ i} y_{ℓ_k,r} + min_{r ≥ i} (y'_i − y_i)/(r − ℓ_k + 1)
        ≥ y^*_{ℓ_k} + min_{r ≥ i} (−1)/(r − ℓ_k + 1)    (by (5))
        = y^*_{ℓ_k} − 1/(i − ℓ_k + 1) ≥ y_i^* − 1/K − 1/(i − ℓ_k + 1).

A symmetric argument gives ŷ_i ≤ y_i^* + 1/K + 1/(r_k − i + 1). Hence, we can bound |Δ_i| = |ŷ_i − y_i^*| ≤
1/K + max{ 1/(i − ℓ_k + 1), 1/(r_k − i + 1) }. Summing over i ∈ S_k yields Σ_{i ∈ S_k} |Δ_i| ≤ |S_k|/K + 2(1 + log |S_k|),
which allows the bound

    ℓoo_t ≤ (2/t) Σ_i |Δ_i| ≤ 2/K + (4K/t)(1 + log t).

The theorem follows from setting K = Θ(√(t / log t)).
4.3 Forward algorithms for the well-specified case
While the ℓoo_t upper bound of the previous section yields a regret bound R_T ≤ Σ_t O(√(log t / t)) =
Õ(T^{1/2}), which is a factor O(T^{1/6}) gap from the lower bound in Section 2.3, there are two pieces of good
news. First, forward algorithms do get the optimal rate in the well-specified setting, popular in the
classical statistics literature, where the labels are generated i.i.d. such that E[y_i] = μ_i with isotonic
μ_1 ≤ ... ≤ μ_t.⁴ Second, there is a Ω(t^{−1/2}) lower bound for forward algorithms, as proven in the next
section. Together, these results imply that the random permutation model is indeed harder than the
well-specified case: forward algorithms are sufficient for the latter but not the former.
Theorem 4.2. For data generated from the well-specified setting (monotonic means with i.i.d. noise),
any FA has ℓoo_t = Õ(t^{−2/3}), which translates to a Õ(T^{1/3}) bound on the regret.

The proof is given in Appendix D. Curiously, the proof makes use of the existence of the seemingly
unrelated optimal algorithm with Õ(t^{−2/3}) excess leave-one-out loss from Theorem 3.2.
4.4 Entropic loss
We now abandon the squared loss for a moment and analyze how a FA performs when the loss function
is the entropic loss, defined as −y log ŷ − (1 − y) log(1 − ŷ) for y ∈ [0, 1]. Entropic loss (precisely:
its binary-label version known as log-loss) is extensively used in the isotonic regression context for
maximum likelihood estimation [14] or for probability calibration [28, 21, 27]. A surprising fact in
isotonic regression is that minimizing entropic loss⁵ leads to exactly the same optimal solution as in
the squared loss case, the isotonic regression function y^* [24].

Not every FA is appropriate for entropic loss, as recklessly choosing the label estimate might result in
an infinite loss in just a single trial (as noted by [27]). Indeed, consider a sequence of outcomes with
y_1 = 0 and y_i = 1 for i > 1. While predicting on index i = 1, choosing y'_1 = 1 results in ŷ_1 = 1, for
which the entropic loss is infinite (as y_1 = 0). Does there exist a FA which achieves a meaningful
bound on ℓoo_t in the entropic loss setup?
We answer this question in the affirmative, showing that the log-IVAP predictor gets the same
excess leave-one-out loss bound as given in Theorem 4.1. As the reduction from the regret to the
leave-one-out loss (Lemma 2.1) does not use any properties of the loss function, this immediately implies a
bound on the expected regret. Interestingly, the proof (given in Appendix G) uses as an intermediate
step the bound on |Δ_i| for the worst possible forward algorithm, which always produces the estimate
being the opposite of the actual label.

Theorem 4.3. The log-IVAP algorithm has ℓoo_t = O(√(log t / t)) for the entropic loss.
⁴ The Ω(T^{1/3}) regret lower bound in [14] uses a mixture of well-specified distributions and still applies.
⁵ In fact, this statement applies to any Bregman divergence [24].
4.5 Lower bound
The last result of this section is that a FA can be made to have ℓoo_t = Ω(t^{−1/2}). We show this by means
of a counterexample. Assume t = n^2 for some integer n > 0 and let the labels be binary, y_i ∈ {0, 1}.
We split the set {1, ..., t} into n consecutive segments, each of size n = √t. The proportion of ones
(y_i = 1) in the k-th segment is equal to k/n, but within each segment all ones precede all zeros. For
instance, when t = 25, the label sequence is:

    10000  11000  11100  11110  11111
    (1/5)  (2/5)  (3/5)  (4/5)  (5/5)

One can use the minimax formulation (4) to verify that the segments will correspond to the level sets
of the isotonic regression and that y_i^* = k/n for any i in the k-th segment. This sequence is hard:

Lemma 4.4. The IR-Int algorithm run on the sequence described above has ℓoo_t = Ω(t^{−1/2}).
We prove the lower bound for IR-Int, since the presentation (in Appendix E) is clearest. Empirical
simulations showing that the other forward algorithms also suffer this regret are in Appendix F.
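The construction is easy to reproduce; the sketch below builds the sequence and verifies with PAVA that each segment is a level set at height k/n (helper names are ours):

```python
def pava(y):
    """Offline isotonic regression via pool adjacent violators."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out += [m] * c
    return out


def hard_sequence(n):
    """t = n^2 binary labels: segment k (k = 1..n) is k ones followed by
    n - k zeros, so the fraction of ones grows as k/n across segments."""
    y = []
    for k in range(1, n + 1):
        y += [1.0] * k + [0.0] * (n - k)
    return y
```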
4.6 Towards optimal forward algorithms
An attractive feature of forward algorithms is that they generalize to partial orders, for which efficient
offline optimization algorithms exist. However, in Section 4 we saw that FAs only give a Õ(t^{−1/2})
rate, while in Section 3 we saw that Õ(t^{−2/3}) is possible (with an algorithm that is not known to scale
to partial orders). Is there any hope of an algorithm that both generalizes and has the optimal rate?
In this section, we propose the Heavy-γ algorithm, a slight modification of the forward algorithm that
plugs in label estimate y'_i = γ ∈ [0, 1] with weight c (with unit weight on all other points), then plays
the value of the isotonic regression function. Implementation is straightforward for offline isotonic
regression algorithms that permit the specification of weights (such as [16]). Otherwise, we might
simulate such weighting by plugging in c copies of the estimated label γ at location x_i.

What label estimate γ and weight c should we use? We show that the choice of γ is not very sensitive,
but it is crucial to tune the weight to c = Θ(t^{1/3}). Lemmas H.1 and H.2 show that higher and lower c
are necessarily sub-optimal for ℓoo_t. This leaves only one choice for c, for which we believe

Conjecture 4.5. Heavy-γ with weight c = Θ(t^{1/3}) has ℓoo_t = Õ(t^{−2/3}).
We cannot yet prove this conjecture, although numerical experiments strongly suggest it. We do not
believe that picking a constant label γ is special. For example, we might alternatively predict with the
average of the predictions of Heavy-1 and Heavy-0. Yet not any label estimate works. In particular, if
we estimate the label that would be predicted by IR-Int (see Section 4.1 and the discussion below it), and
we plug that in with any weight c ≥ 0, then the isotonic regression function will still have that same
label estimate as its value. This means that the Ω(t^{−1/2}) lower bound of Section 4.5 applies.
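With a weighted isotonic regression routine, Heavy-γ is again only a few lines: the weight c is passed straight to the solver. A sketch under our own naming:

```python
def wpava(y, w):
    """Weighted isotonic regression via pool adjacent violators."""
    blocks = []  # each block: [pooled mean, total weight, count]
    for v, wi in zip(y, w):
        blocks.append([v, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out += [m] * c
    return out


def heavy_gamma(y_rest, i, gamma, c):
    """Heavy-gamma: put label estimate gamma at position i (0-based) with
    weight c, unit weight elsewhere, and play the weighted fit at i."""
    y = y_rest[:i] + [gamma] + y_rest[i:]
    w = [1.0] * i + [c] + [1.0] * (len(y_rest) - i)
    return wpava(y, w)[i]
```

With c = 1 this reduces to the plain forward algorithm with estimate γ; a large c pulls the fitted value at i towards γ.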
5 Conclusion
We revisit the problem of online isotonic regression and argue that we need a new perspective to
design practical algorithms. We study the random permutation model as a novel way to bypass the
stringent fixed design requirement of previous work. Our main tool in the design and analysis of
algorithms is the leave-one-out loss, which bounds the expected regret from above. We start by
observing that the adversary from the adversarial fixed design setting also provides a lower bound
here. We then show that this lower bound can be matched by applying online-to-batch conversion to
the optimal algorithm for fixed design. Next we provide an online analysis of the natural, popular and
practical class of Forward Algorithms, which are defined in terms of an offline optimization oracle.
We show that Forward algorithms achieve a decent regret rate in all cases, and match the optimal rate
in special cases. We conclude by sketching the class of practical Heavy algorithms and conjecture
that a specific parameter setting might guarantee the correct regret rate.
Open problem   The next major challenge is the design and analysis of efficient algorithms for
online isotonic regression on arbitrary partial orders. Heavy-γ is our current best candidate. We pose
deciding if it in fact even guarantees Õ(T^{1/3}) regret on linear orders as an open problem.
Acknowledgments
Wojciech Kotłowski acknowledges support from the Polish National Science Centre (grant no.
2016/22/E/ST6/00299). Wouter Koolen acknowledges support from the Netherlands Organization for
Scientific Research (NWO) under Veni grant 639.021.439. This work was done in part while Koolen
was visiting the Simons Institute for the Theory of Computing.
References
[1] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution
function for sampling with incomplete information. Annals of Mathematical Statistics, 26(4):
641–647, 1955.
[2] K. Azoury and M. Warmuth. Relative loss bounds for on-line density estimation with the
exponential family of distributions. Machine Learning, 43(3):211–246, 2001.
[3] Lucien Birgé and Pascal Massart. Rates of convergence for minimum contrast estimators.
Probability Theory and Related Fields, 97:113–150, 1993.
[4] H. D. Brunk. Maximum likelihood estimates of monotone parameters. Annals of Mathematical
Statistics, 26(4):607–616, 1955.
[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Worst-case bounds for the logarithmic loss of predictors.
Machine Learning, 43(3):247–264, 2001.
[6] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University
Press, 2006.
[7] Jan de Leeuw, Kurt Hornik, and Patrick Mair. Isotone optimization in R: Pool-adjacent-violators
algorithm (PAVA) and active set methods. Journal of Statistical Software, 32:1–24, 2009.
[8] Tom Fawcett and Alexandru Niculescu-Mizil. PAV and the ROC convex hull. Machine Learning,
68(1):97–106, 2007.
[9] Jürgen Forster and Manfred K. Warmuth. Relative expected instantaneous loss bounds. Journal
of Computer and System Sciences, 64(1):76–102, 2002.
[10] Pierre Gaillard and Sébastien Gerchinovitz. A chaining algorithm for online nonparametric
regression. In Conference on Learning Theory (COLT), pages 764–796, 2015.
[11] Sham M. Kakade, Varun Kanade, Ohad Shamir, and Adam Kalai. Efficient learning of generalized linear and single index models with isotonic regression. In Neural Information Processing
Systems (NIPS), pages 927–935, 2011.
[12] Adam Tauman Kalai and Ravi Sastry. The Isotron algorithm: High-dimensional isotonic
regression. In COLT, 2009.
[13] Wojciech Kotłowski and Roman Słowiński. Rule learning with monotonicity constraints. In
International Conference on Machine Learning (ICML), pages 537–544, 2009.
[14] Wojciech Kotłowski, Wouter M. Koolen, and Alan Malek. Online isotonic regression. In
Vitaly Feldman and Alexander Rakhlin, editors, Proceedings of the 29th Annual Conference on
Learning Theory (COLT), pages 1165–1189, June 2016.
[15] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis.
Psychometrika, 29(1):1–27, 1964.
[16] Rasmus Kyng, Anup Rao, and Sushant Sachdeva. Fast, provable algorithms for isotonic
regression in all ℓ_p-norms. In Neural Information Processing Systems (NIPS), 2015.
[17] Ronny Luss, Saharon Rosset, and Moni Shahar. Efficient regularized isotonic regression with
application to gene–gene interaction search. Annals of Applied Statistics, 6(1):253–283, 2012.
[18] Aditya Krishna Menon, Xiaoqian Jiang, Shankar Vembu, Charles Elkan, and Lucila Ohno-Machado. Predicting accurate probabilities with a ranking loss. In International Conference on
Machine Learning (ICML), 2012.
[19] T. Moon, A. Smola, Y. Chang, and Z. Zheng. IntervalRank: Isotonic regression with listwise
and pairwise constraints. In WSDM, pages 151–160. ACM, 2010.
[20] Harikrishna Narasimhan and Shivani Agarwal. On the relationship between binary classification,
bipartite ranking, and binary class probability estimation. In Neural Information Processing
Systems (NIPS), pages 2913–2921, 2013.
[21] Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised
learning. In ICML, volume 119, pages 625–632. ACM, 2005.
[22] G. Obozinski, C. E. Grant, G. R. G. Lanckriet, M. I. Jordan, and W. W. Noble. Consistent
probabilistic outputs for protein function prediction. Genome Biology, 2008.
[23] Alexander Rakhlin and Karthik Sridharan. Online nonparametric regression. In Conference on
Learning Theory (COLT), pages 1232–1264, 2014.
[24] T. Robertson, F. T. Wright, and R. L. Dykstra. Order Restricted Statistical Inference. John
Wiley & Sons, 1998.
[25] Mario Stylianou and Nancy Flournoy. Dose finding using the biased coin up-and-down design
and isotonic regression. Biometrics, 58(1):171–177, 2002.
[26] Sara Van de Geer. Estimating a regression function. Annals of Statistics, 18:907–924, 1990.
[27] Vladimir Vovk, Ivan Petej, and Valentina Fedorova. Large-scale probabilistic predictors with
and without guarantees of validity. In Neural Information Processing Systems (NIPS), pages
892–900, 2015.
[28] Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass
probability estimates. In International Conference on Knowledge Discovery and Data Mining
(KDD), pages 694–699, 2002.
[29] Cun-Hui Zhang. Risk bounds in isotonic regression. The Annals of Statistics, 30(2):528–555,
2002.
A Unified Game-Theoretic Approach to
Multiagent Reinforcement Learning
Marc Lanctot, Karl Tuyls, Vinicius Zambaldi, Audrūnas Gruslys, Julien Pérolat,
David Silver, Angeliki Lazaridou, Thore Graepel
DeepMind
{lanctot, karltuyls, vzambaldi, audrunas, perolat, davidsilver, angeliki, thore}@
Abstract
To achieve general intelligence, agents must learn how to interact with others in
a shared environment: this is the challenge of multiagent reinforcement learning
(MARL). The simplest form is independent reinforcement learning (InRL), where
each agent treats its experience as part of its (non-stationary) environment. In
this paper, we first observe that policies learned using InRL can overfit to the
other agents' policies during training, failing to sufficiently generalize during
execution. We introduce a new metric, joint-policy correlation, to quantify this
effect. We describe an algorithm for general MARL, based on approximate best
responses to mixtures of policies generated using deep reinforcement learning, and
empirical game-theoretic analysis to compute meta-strategies for policy selection.
The algorithm generalizes previous ones such as InRL, iterated best response,
double oracle, and fictitious play. Then, we present a scalable implementation
which reduces the memory requirement using decoupled meta-solvers. Finally,
we demonstrate the generality of the resulting policies in two partially observable
settings: gridworld coordination games and poker.
1 Introduction
Deep reinforcement learning combines deep learning [59] with reinforcement learning [94, 64] to
compute a policy used to drive decision-making [73, 72]. Traditionally, a single agent interacts with
its environment repeatedly, iteratively improving its policy by learning from its observations. Inspired
by recent success in Deep RL, we are now seeing a renewed interest in multiagent reinforcement
learning (MARL) [90, 17, 99]. In MARL, several agents interact and learn in an environment
simultaneously, either competitively such as in Go [91] and Poker [39, 105, 74], cooperatively such
as when learning to communicate [23, 93, 36], or some mix of the two [60, 95, 35].
The simplest form of MARL is independent RL (InRL), where each learner is oblivious to the other
agents and simply treats all the interaction as part of its ('localized') environment. Aside from
the problem that these local environments are non-stationary and non-Markovian [57] resulting in
a loss of convergence guarantees for many algorithms, the policies found can overfit to the other
agents' policies and hence not generalize well. There has been relatively little work done in the RL
community on overfitting to the environment [102, 69], but we argue that this is particularly important
in multiagent settings where one must react dynamically based on the observed behavior of others.
Classical techniques collect or approximate extra information such as the joint values [62, 19, 29, 56],
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
use adaptive learning rates [12], adjust the frequencies of updates [48, 81], or dynamically respond
to the other agents' actions online [63, 50]. However, with the notable exceptions of very recent
work [22, 80], they have focused on (repeated) matrix games and/or the fully-observable case.
There have been several proposals for treating partial observability in the multiagent setting. When the
model is fully known and the setting is strictly adversarial with two players, there are policy iteration
methods based on regret minimization that scale very well when using domain-specific abstractions [27, 14, 46, 47], which was a major component of the expert no-limit poker AI Libratus [15];
recently these methods were combined with deep learning to create an expert no-limit poker AI
called DeepStack [74]. There is a significant amount of work that deals with the case of decentralized
cooperative problems [76, 79], and in the general setting by extending the notion of belief states
and Bayesian updating from POMDPs [28]. These models are quite expressive, and the resulting
algorithms are fairly complex. In practice, researchers often resort to approximate forms, by sampling
or exploiting structure, to ensure good performance due to intractability [41, 2, 68].
In this paper, we introduce a new metric for quantifying the correlation effects of policies learned by
independent learners, and demonstrate the severity of the overfitting problem. These coordination
problems have been well-studied in the fully-observable cooperative case [70]: we observe similar
problems in a partially-observed mixed cooperative/competitive setting, and we show that the
severity increases as the environment becomes more partially-observed. We propose a new algorithm
based on economic reasoning [82], which uses (i) deep reinforcement learning to compute best
responses to a distribution over policies, and (ii) empirical game-theoretic analysis to compute new
meta-strategy distributions. As is common in the MARL setting, we assume centralized training for
decentralized execution: policies are represented as separate neural networks and there is no sharing
of gradients nor architectures among agents. The basic form uses a centralized payoff table, which is
removed in the distributed, decentralized form that requires less space.
2 Background and Related Work
In this section, we start with basic building blocks necessary to describe the algorithm. We interleave
this with the most relevant previous work for our setting. Several components of the general idea have
been (re)discovered many times across different research communities, each with slightly different
but similar motivations. One aim here is therefore to unify the algorithms and terminology.
A normal-form game is a tuple (Π, U, n) where n is the number of players, Π = (Π_1, · · · , Π_n) is the set of policies (or strategies, one for each player i ∈ [[n]], where [[n]] = {1, · · · , n}), and U : Π → ℝⁿ is a payoff table of utilities for each joint policy played by all players. Extensive-form games extend these formalisms to the multistep sequential case (e.g. poker).
Players try to maximize their own expected utility. Each player does this by choosing a policy from Π_i, or by sampling from a mixture (distribution) over them σ_i ∈ Δ(Π_i). In this multiagent setting, the quality of σ_i depends on the other players' strategies, and so it cannot be found nor assessed independently. Every finite extensive-form game has an equivalent normal-form [53], but since it is exponentially larger, most algorithms have to be adapted to handle the sequential setting directly.
There are several algorithms for computing strategies. In zero-sum games (where ∀π ∈ Π, 1⃗ · U(π) = 0), one can use e.g. linear programming, fictitious play [13], replicator dynamics [97],
or regret minimization [8]. Some of these techniques have been extended to extensive (sequential)
form [39, 25, 54, 107] with an exponential increase in the size of the state space. However, these
extensions have almost exclusively treated the two-player case, with some notable exceptions [54, 26].
Fictitious play also converges in potential games, which include cooperative (identical payoff) games.
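As a toy illustration of this convergence (our sketch, not the paper's code), fictitious play in a 2×2 identical-payoff coordination game locks onto a pure equilibrium within a few rounds: each player best-responds to the empirical frequencies of the other's past actions.

```python
def fictitious_play(payoff, rounds=200):
    """Fictitious play in a 2-player identical-payoff (coordination) game.

    payoff[a][b] is the common utility when player 1 plays a and player 2
    plays b. Returns the final joint action.
    """
    n = len(payoff)
    counts = [[1] * n, [1] * n]  # empirical action counts per player
    joint = (0, 0)
    for _ in range(rounds):
        # Best-respond to the opponent's empirical frequencies.
        a = max(range(n), key=lambda i: sum(payoff[i][j] * counts[1][j] for j in range(n)))
        b = max(range(n), key=lambda j: sum(payoff[i][j] * counts[0][i] for i in range(n)))
        counts[0][a] += 1
        counts[1][b] += 1
        joint = (a, b)
    return joint

coordination = [[2, 0], [0, 1]]  # both players prefer to match, action 0 pays more
```

Here both players converge to the higher-payoff matching action (0, 0).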
The double oracle (DO) algorithm [71] solves a set of (two-player, normal-form) subgames induced by subsets Π^t ⊆ Π at time t. A payoff matrix for the subgame G_t includes only those entries corresponding to the strategies in Π^t. At each time step t, an equilibrium σ^{*,t} is obtained for G_t, and to obtain G_{t+1} each player adds a best response π_i^{t+1} ∈ BR(σ_{−i}^{*,t}) from the full space Π_i, so for all i, Π_i^{t+1} = Π_i^t ∪ {π_i^{t+1}}. The algorithm is illustrated in Figure 1. Note that finding an equilibrium in a zero-sum game takes time polynomial in |Π^t|, and is PPAD-complete for general-sum [89].
Clearly, DO is guaranteed to converge to an equilibrium in two-player games. But, in the worst-case,
the entire strategy space may have to be enumerated.
Figure 1: The Double Oracle Algorithm. Figure taken from [10] with authors' permission.
For example, this is necessary for Rock-Paper-Scissors, whose only equilibrium has full support (1/3, 1/3, 1/3). However, there is evidence that support
sizes shrink for many games as a function of episode length, how much hidden information is revealed
and/or affects it has on the payoff [65, 86, 10]. Extensions to the extensive-form games have been
developed [67, 9, 10] but still large state spaces are problematic due to the curse of dimensionality.
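The DO loop can be sketched in a few lines. The sketch below is illustrative only: it approximates each subgame equilibrium with fictitious play rather than an exact solve, and the helper names (`best_response`, `subgame_equilibrium`) are ours, not the paper's.

```python
def best_response(A, opp_mix, opp_set, row=True):
    """Best response (an index into the FULL action set) to a mixed strategy
    `opp_mix` supported on the opponent's restricted set `opp_set`.
    Assumes a square zero-sum payoff matrix A for the row player."""
    def value(a):
        if row:
            return sum(p * A[a][b] for p, b in zip(opp_mix, opp_set))
        return sum(p * -A[b][a] for p, b in zip(opp_mix, opp_set))
    return max(range(len(A)), key=value)

def subgame_equilibrium(A, rows, cols, iters=2000):
    """Approximate equilibrium of the restricted zero-sum subgame via
    fictitious play (empirical frequencies converge in zero-sum games)."""
    rc, cc = [1] * len(rows), [1] * len(cols)
    for _ in range(iters):
        r = max(range(len(rows)),
                key=lambda i: sum(cc[j] * A[rows[i]][cols[j]] for j in range(len(cols))))
        c = max(range(len(cols)),
                key=lambda j: -sum(rc[i] * A[rows[i]][cols[j]] for i in range(len(rows))))
        rc[r] += 1
        cc[c] += 1
    return [x / sum(rc) for x in rc], [x / sum(cc) for x in cc]

def double_oracle(A, max_epochs=10):
    rows, cols = [0], [0]          # start from a single arbitrary pure strategy
    for _ in range(max_epochs):
        sr, sc = subgame_equilibrium(A, rows, cols)
        br_r = best_response(A, sc, cols, row=True)
        br_c = best_response(A, sr, rows, row=False)
        if br_r in rows and br_c in cols:
            break                  # no new best responses: converged
        if br_r not in rows:
            rows.append(br_r)
        if br_c not in cols:
            cols.append(br_c)
    return rows, cols

RPS = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
```

Starting from Rock alone, the loop adds Paper and then Scissors before the best responses stop changing, illustrating why the full support of Rock-Paper-Scissors must be enumerated.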
Empirical game-theoretic analysis (EGTA) is the study of meta-strategies obtained through simulation
in complex games [100, 101]. An empirical game, much smaller in size than the full game, is
constructed by discovering strategies, and meta-reasoning about the strategies to navigate the strategy
space. This is necessary when it is prohibitively expensive to explicitly enumerate the game's
strategies. Expected utilities for each joint strategy are estimated and recorded in an empirical
payoff table. The empirical game is analyzed, and the simulation process continues. EGTA has been
employed in trading agent competitions (TAC) and automated bidding auctions.
One study used evolutionary dynamics in the space of known expert meta-strategies in Poker [83].
Recently, reinforcement learning has been used to validate strategies found via EGTA [104]. In this
work, we aim to discover new strategies through learning. However, instead of computing exact best
responses, we compute approximate best responses using reinforcement learning. A few epochs of
this was demonstrated in continuous double auctions using tile coding [87]. This work follows up in
this line, running more epochs, using modern function approximators (deep networks), a scalable
implementation, and with a focus on finding policies that can generalize across contexts.
A key development in recent years is deep learning [59]. While most work in deep learning has
focused on supervised learning, impressive results have recently been shown using deep neural
networks for reinforcement learning, e.g. [91, 38, 73, 77]. For instance, Mnih et al. [73] train policies
for playing Atari video games and 3D navigation [72], given only screenshots. Silver et al. introduced
AlphaGo [91, 92], combining deep RL with Monte Carlo tree search, outperforming human experts.
Computing approximate responses is more computationally feasible, and fictitious play can handle
approximations [42, 61]. It is also more biologically plausible given natural constraints of bounded
rationality. In behavioral game theory [103], the focus is to predict actions taken by humans, and
the responses are intentionally constrained to increase predictive ability. A recent work uses a deep
learning architecture [34]. The work that closely resembles ours is level-k thinking [20], where level k agents respond to level k − 1 agents, and more closely cognitive hierarchy [18], in which responses are to distributions over levels {0, 1, . . . , k − 1}. However, our goals and motivations are
very different: we use the setup as a means to produce more general policies rather than to predict
human behavior. Furthermore, we consider the sequential setting rather than normal-form games.
Lastly, there has been several studies from the literature on co-evolutionary algorithms; specifically,
how learning cycles and overfitting to the current populations can be mitigated [78, 85, 52].
3 Policy-Space Response Oracles
We now present our main conceptual algorithm, policy-space response oracles (PSRO). The algorithm
is a natural generalization of Double Oracle where the meta-game's choices are policies rather than
actions. It also generalizes Fictitious Self-Play [39, 40]. Unlike previous work, any meta-solver
can be plugged in to compute a new meta-strategy. In practice, parameterized policies (function
approximators) are used to generalize across the state space without requiring any domain knowledge.
The process is summarized in Algorithm 1.

Algorithm 1: Policy-Space Response Oracles
input: initial policy sets for all players Π
Compute exp. utilities U^Π for each joint π ∈ Π
Initialize meta-strategies σ_i = UNIFORM(Π_i)
while epoch e in {1, 2, · · · } do
    for player i ∈ [[n]] do
        for many episodes do
            Sample π_{−i} ~ σ_{−i}
            Train oracle π'_i over ρ ~ (π'_i, π_{−i})
        Π_i = Π_i ∪ {π'_i}
    Compute missing entries in U^Π from Π
    Compute a meta-strategy σ from U^Π
Output current solution strategy σ_i for player i

Algorithm 2: Deep Cognitive Hierarchies
input: player number i, level k
while not terminated do
    CheckLoadMS({j | j ∈ [[n]], j ≠ i}, k)
    CheckLoadOracles(j ∈ [[n]], k' ≤ k)
    CheckSaveMS(σ_{i,k})
    CheckSaveOracle(π_{i,k})
    Sample π_{−i} ~ σ_{−i,k}
    Train oracle π_{i,k} over ρ1 ~ (π_{i,k}, π_{−i})
    if iteration number mod T_ms = 0 then
        Sample π_i ~ σ_{i,k}
        Compute u_i(ρ2), where ρ2 ~ (π_i, π_{−i})
        Update stats for π_i and update σ_{i,k}
Output π_{i,k} for player i at level k

The meta-game is represented as an empirical game, starting with a single policy (uniform random) and growing, each epoch, by adding policies ('oracles')
that approximate best responses to the meta-strategy of the other players. In (episodic) partially observable multiagent environments, when the other players are fixed the environment becomes Markovian and computing a best response reduces to solving a form of MDP [30]. Thus, any reinforcement learning algorithm can be used. We use deep neural networks due to the recent success in reinforcement learning. In each episode, one player is set to oracle (learning) mode to train π'_i, and a fixed policy is sampled from the opponents' meta-strategies (π_{−i} ~ σ_{−i}). At the end of the epoch, the new oracles are added to their policy sets Π_i, and expected utilities for new policy combinations are computed via simulation and added to the empirical tensor U^Π, which takes time exponential in |Π|.
Define Π^T = Π^{T−1} ∪ Π^0 as the policy space including the currently learning oracles, and |Π_i| = |Π_i^T| for all i ∈ [[n]]. Iterated best response is an instance of PSRO with σ_{−i} = (0, 0, · · · , 1, 0). Similarly, Independent RL and fictitious play are instances of PSRO with σ_{−i} = (0, 0, · · · , 0, 1) and σ_{−i} = (1/K, 1/K, · · · , 1/K, 0), respectively, where K = |Π_{−i}^{T−1}|. Double Oracle is an instance of PSRO with n = 2 and σ^T set to a Nash equilibrium profile of the meta-game (Π^{T−1}, U^{Π^{T−1}}).
An exciting question is what can happen with (non-fixed) meta-solvers outside this known space?
Fictitious play is agnostic to the policies it is responding to; hence it can only sharpen the meta-strategy distribution by repeatedly generating the same best responses. On the other hand, responses
to equilibrium strategies computed by Double Oracle will (i) overfit to a specific equilibrium in the
n-player or general-sum case, and (ii) be unable to generalize to parts of the space not reached by any
equilibrium strategy in the zero-sum case. Both of these are undesirable when computing general
policies that should work well in any context. We try to balance these problems of overfitting with a
compromise: meta-strategies with full support that force (mix in) γ exploration over policy selection.
3.1 Meta-Strategy Solvers
A meta-strategy solver takes as input the empirical game (Π, U^Π) and produces a meta-strategy σ_i for each player i. We try three different solvers: regret-matching, Hedge, and projected replicator dynamics. These specific meta-solvers accumulate values for each policy ('arm') and an aggregate value based on all players' meta-strategies. We refer to u_i(σ) as player i's expected value given all players' meta-strategies and the current empirical payoff tensor U^Π (computed via multiple tensor dot products). Similarly, denote u_i(π_{i,k}, σ_{−i}) as the expected utility if player i plays their k-th (k ∈ [[K]] ∪ {0}) policy and the other players play with their meta-strategy σ_{−i}. Our strategies use an exploration parameter γ, leading to a lower bound of γ/(K+1) on the probability of selecting any π_{i,k}.
The first two meta-solvers (Regret Matching and Hedge) are straightforward applications of previous algorithms, so we defer the details to Appendix A.¹ Here, we introduce a new solver we call projected replicator dynamics (PRD). From Appendix A, when using the asymmetric replicator dynamics, e.g. with two players, where U^Π = (A, B), the changes in probabilities for the k-th components (i.e., the policy π_{i,k}) of the meta-strategies (σ_1, σ_2) = (x, y) are:

    dx_k/dt = x_k [ (Ay)_k − xᵀAy ],        dy_k/dt = y_k [ (xᵀB)_k − xᵀBy ].

¹ Appendices are available in the longer technical report version of the paper; see [55].
To simulate the replicator dynamics in practice, discretized updates are simulated using a step-size of η. We add a projection operator P(·) to these equations that guarantees exploration: x ← P(x + η dx/dt), y ← P(y + η dy/dt), where P(x) = argmin_{x' ∈ Δ_γ^{K+1}} ||x' − x|| if any x_k < γ/(K+1), and P(x) = x otherwise; Δ_γ^{K+1} = {x | x_k ≥ γ/(K+1), Σ_k x_k = 1} is the γ-exploratory simplex of size K+1. This enforces exploratory σ_i(π_{i,k}) ≥ γ/(K+1). The PRD approach can be understood as directing exploration, in comparison to standard replicator dynamics approaches that contain isotropic diffusion or mutation terms (which assume undirected and unbiased evolution); for more details see [98].
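One valid implementation of P(·) shifts out the per-arm floor and applies the standard sort-based Euclidean projection onto the simplex. This is our construction, not code from the paper (and for generality the floor is written γ/K for K arms, where the paper counts K+1 arms); `prd_step` then performs one discretized PRD update:

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (the standard
    sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css, rho, rho_css = 0.0, 0, 0.0
    for i, ui in enumerate(u):
        css += ui
        if ui + (1.0 - css) / (i + 1) > 0:
            rho, rho_css = i, css
    theta = (rho_css - 1.0) / (rho + 1)
    return [max(x - theta, 0.0) for x in v]

def project_exploratory(x, gamma):
    """P(.): projection onto the gamma-exploratory simplex
    {x : x_k >= gamma/K, sum_k x_k = 1}, with K = len(x)."""
    K = len(x)
    m = gamma / K                  # per-arm probability floor
    s = 1.0 - gamma                # mass left above the floor
    y = project_simplex([(xi - m) / s for xi in x])
    return [m + s * yi for yi in y]

def prd_step(x, y, A, B, eta=0.05, gamma=0.1):
    """One discretized projected-replicator-dynamics update of (x, y)."""
    Ay = [sum(A[k][j] * y[j] for j in range(len(y))) for k in range(len(x))]
    xB = [sum(x[i] * B[i][k] for i in range(len(x))) for k in range(len(y))]
    vx = sum(x[k] * Ay[k] for k in range(len(x)))
    vy = sum(y[k] * xB[k] for k in range(len(y)))
    x2 = [x[k] + eta * x[k] * (Ay[k] - vx) for k in range(len(x))]
    y2 = [y[k] + eta * y[k] * (xB[k] - vy) for k in range(len(y))]
    return project_exploratory(x2, gamma), project_exploratory(y2, gamma)

A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # RPS payoffs for the row player
B = [[-a for a in row] for row in A]       # zero-sum: B = -A
x, y = prd_step([1.0, 0.0, 0.0], [1 / 3, 1 / 3, 1 / 3], A, B)
```

Even when x starts as a pure strategy, the projection keeps every policy's probability at or above the γ/K floor.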
3.2 Deep Cognitive Hierarchies
While the generality of PSRO is clear and appealing, the
RL step can take a long time to converge to a good response. In complex environments, much of the basic
behavior that was learned in one epoch may need to be
relearned when starting again from scratch; also, it may be
desirable to run many epochs to get oracle policies that can
recursively reason through deeper levels of contingencies.
Figure 2: Overview of DCH.
To overcome these problems, we introduce a practical parallel form of PSRO. Instead of an unbounded number of epochs, we choose a fixed number of levels in advance.
Then, for an n-player game, we start nK processes in parallel (level 0 agents are uniform random): each one trains a single oracle policy π_{i,k} for player i and level k and updates its own meta-strategy σ_{i,k}, saving each to a central disk periodically. Each process also maintains copies of all the other oracle policies π_{j,k'≤k} at the current and lower levels, as well as the meta-strategies at the current level σ_{−i,k}, which are periodically refreshed from a central disk. We circumvent storing U^Π explicitly by updating the meta-strategies online. We call this a Deep Cognitive Hierarchy (DCH), in reference to Camerer, Ho, & Chong's model augmented with deep RL. Example oracle response dynamics are shown in Figure 2, and the pseudo-code in Algorithm 2.
Since each process uses slightly out-of-date copies of the other processes' policies and meta-strategies, DCH approximates PSRO. Specifically, it trades away accuracy of the correspondence to PSRO for practical efficiency and, in particular, scalability. Another benefit of DCH is an asymptotic
reduction in total space complexity. In PSRO, for K policies and n players, the space required to
store the empirical payoff tensor is K n . Each process in DCH stores nK policies of fixed size, and
n meta-strategies (and other tables) of size bounded by k ? K. Therefore the total space required
is O(nK ? (nK + nK)) = O(n2 K 2 ). This is possible is due to the use of decoupled meta-solvers,
which compute strategies online without requiring a payoff tensor U ? , which we describe now.
3.2.1 Decoupled Meta-Strategy Solvers
In the field of online learning, the experts algorithms (the 'full information' case) receive information about each arm at every round. In the bandit ('partial information') case, feedback is only given for the arm that was pulled. Decoupled meta-solvers are essentially sample-based adversarial bandits [16]
applied to games. Empirical strategies are known to converge to Nash equilibria in certain classes of
games (i.e. zero-sum, potential games) due to the folk theorem [8].
We try three: decoupled regret-matching [33], Exp3 (decoupled Hedge) [3], and decoupled PRD. Here again, we use exploratory strategies with γ of the uniform strategy mixed in, which is also necessary to ensure that the estimates are unbiased. For decoupled PRD, we maintain running averages for the overall average value and the value of each arm (policy). Unlike in PSRO, in the case of DCH, one sample is obtained at a time and the meta-strategy is updated periodically from online estimates.
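A decoupled meta-solver of this kind is essentially an adversarial bandit. The following is a minimal Exp3 sketch (ours; the paper's implementation details differ) with the γ-uniform mixture that keeps the importance-weighted reward estimates unbiased:

```python
import math
import random

class Exp3:
    """Exp3: Hedge with gamma-uniform exploration and importance-weighted
    reward estimates (only the pulled arm's reward is observed)."""
    def __init__(self, n_arms, gamma=0.1, eta=0.1):
        self.n, self.gamma, self.eta = n_arms, gamma, eta
        self.log_w = [0.0] * n_arms   # log-weights, for numerical stability

    def probs(self):
        m = max(self.log_w)
        w = [math.exp(lw - m) for lw in self.log_w]
        s = sum(w)
        return [(1 - self.gamma) * wi / s + self.gamma / self.n for wi in w]

    def update(self, arm, reward):
        # reward / p(arm) is an unbiased estimate of the arm's reward.
        p = self.probs()[arm]
        self.log_w[arm] += self.eta * reward / p

random.seed(0)
solver = Exp3(3)
for _ in range(300):
    arm = random.choices(range(3), weights=solver.probs())[0]
    solver.update(arm, 1.0 if arm == 2 else 0.0)  # arm 2 is best in this toy
sigma = solver.probs()
```

The meta-strategy concentrates on the best arm while every arm keeps at least γ/n probability mass, mirroring the exploratory floor used above.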
4 Experiments
In all of our experiments, oracles use Reactor [31] for learning, which has achieved state-of-the-art results in Atari game-playing. Reactor uses Retrace(λ) [75] for off-policy policy evaluation and β-Leave-One-Out policy gradient for policy updates, and supports recurrent network training, which could be important in trying to match online experiences to those observed during training.
The action spaces for each player are identical, but the algorithms do not require this. Our implementation differs slightly from the conceptual descriptions in Section 3; see App. C for details.
First-Person Gridworld Games. Each agent has a local field-of-view (making the world partially
observable), sees 17 spaces in front, 10 to either side, and 2 spaces behind. Consequently, observations
are encoded as 21x20x3 RGB tensors with values 0 ? 255. Each agent has a choice of turning left or
right, moving forward or backward, stepping left or right, not moving, or casting an endless light
beam in their current direction. In addition, the agent has two composed actions of moving forward
and turning. Actions are executed simultaneously, and order of resolution is randomized. Agents
start on a random spawn point at the beginning of each episode. If an agent is touched ('tagged') by another agent's light beam twice, then the target agent is immediately teleported to a spawn point. In
laser tag, the source agent then receives 1 point of reward for the tag. In another variant, gathering,
there is no tagging but agents can collect apples, for 1 point per apple, which refresh at a fixed rate.
In pathfind, there is no tagging nor apples, and both agents get 1 point reward when both reach their
destinations, ending the episode. In every variant, an episode consists of 1000 steps of simulation.
Other details, such as specific maps, can be found in Appendix D.
Leduc Poker is a common benchmark in Poker AI, consisting of a six-card deck: two suits with
three cards (Jack, Queen, King) each. Each player antes 1 chip to play, and receives one private card.
There are two rounds of betting, with a maximum of two raises each, whose values are 2 and 4 chips
respectively. After the first round of betting, a single public card is revealed. The input is represented
as in [40], which includes one-hot encodings of the private card, public card, and history of actions.
Note that we use a more difficult version than in previous work; see Appendix D.1 for details.
4.1 Joint Policy Correlation in Independent Reinforcement Learning
To identify the effect of overfitting in independent reinforcement learners, we introduce joint policy
correlation (JPC) matrices. To simplify the presentation, we describe here the special case of
symmetric two-player games with non-negative rewards; for a general description, see Appendix B.2.
Values are obtained by running D instances of the same experiment, differing only in the seed used
to initialize the random number generators. Each experiment d ∈ [[D]] (after many training episodes) produces policies (π_1^d, π_2^d). The entries of each D × D matrix show the mean return over T = 100 episodes, (1/T) Σ_{t=1}^{T} (R_1^t + R_2^t), obtained when player 1 uses row policy π_1^{d_i} and player 2 uses column policy π_2^{d_j}. Hence, entries on the diagonals represent returns for policies that learned together (i.e., same instance), while off-diagonals show returns from policies that trained in separate instances.
Figure 3: Example JPC matrices for InRL on Laser Tag small2 map (left) and small4 (right).
From a JPC matrix, we compute an average proportional loss in reward as R̄ = (D̄ − Ō)/D̄, where D̄ is the mean value of the diagonals and Ō is the mean value of the off-diagonals. E.g. in Figure 3: D̄ = 30.44, Ō = 20.03, R̄ = 0.342. Even in a simple domain with almost full observability
(small2), an independently-learned policy could expect to lose 34.2% of its reward when playing with
another independently-learned policy even though it was trained under identical circumstances! This
clearly demonstrates an important problem with independent learners. In the other variants (gathering
and pathfind), we observe no JPC problem, presumably because coordination is not required and the
policies are independent. Results are summarized in Table 1. We have also noticed similar effects
when using DQN [73] as the oracle training algorithm; see Appendix B.1 for example videos.
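The average proportional loss from a JPC matrix is just the normalized gap between the diagonal and off-diagonal means; a small illustrative helper (ours, not the paper's evaluation code):

```python
def jpc_loss(M):
    """Average proportional loss R = (D - O) / D from a D x D JPC matrix M:
    D is the mean diagonal entry (policies that trained together) and O the
    mean off-diagonal entry (policies from separate training runs)."""
    d = len(M)
    diag = sum(M[i][i] for i in range(d)) / d
    off = sum(M[i][j] for i in range(d) for j in range(d) if i != j) / (d * (d - 1))
    return (diag - off) / diag

# Toy 3x3 matrix: diagonal mean 30, off-diagonal mean 20 -> loss 1/3.
M = [[30, 20, 20], [20, 30, 20], [20, 20, 30]]
```

A loss of 1/3 here means a policy expects to lose a third of its reward when paired with a policy from a different training run.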
Table 1: Summary of JPC results in first-person gridworld games.

                          InRL                    DCH(Reactor, 2, 10)     JPC
Environment  Map      D̄       Ō       R̄        D̄       Ō       R̄       Reduction
Laser Tag    small2   30.44   20.03   0.342    28.20   26.63   0.055    28.7%
Laser Tag    small3   23.06    9.06   0.625    27.00   23.45   0.082    54.3%
Laser Tag    small4   20.15    5.71   0.717    18.72   15.90   0.150    56.7%
Gathering    field   147.34  146.89   0.003   139.70  138.74   0.007    —
Pathfind     merge   108.73  106.32   0.022    90.15   91.492  <0       —
We see that a (level 10) DCH agent reduces the JPC problem significantly. On small2, DCH reduces
the expected loss down to 5.5%, 28.7% lower than independent learners. The problem gets larger as
the map size grows and the problem becomes more partially observed, up to a severe 71.7% average loss.
The reduction achieved by DCH also grows from 28.7% to 56.7%.
Is the Meta-Strategy Necessary During Execution? The figures above represent the fully-mixed strategy σ_{i,10}. We also analyze JPC for only the highest-level policy π_{i,10} in the laser tag levels. The values are larger here: R̄ = 0.147, 0.27, 0.118 for small2-4 respectively, showing the importance of the meta-strategy. However, these are still significant reductions in JPC: 19.5%, 36.5%, 59.9%.
How Many Levels? On small4, we also compute values for level 5 and level 3: R̄ = 0.156 and R̄ = 0.246, corresponding to reductions in JPC of 56.1% and 44%, respectively. Level 5 reduces JPC by a similar amount as level 10 (56.1% vs 56.7%), while level 3 less so (44% vs. 56.1%).
4.2 Learning to Safely Exploit and Indirectly Model Opponents in Leduc Poker
We now show results for Leduc poker, where strong benchmark algorithms exist, such as counterfactual regret (CFR) minimization [107, 11]. We evaluate our policies using two metrics. The first is performance against fixed players (random, CFR's average strategy after 500 iterations 'cfr500', and a purified version 'cfr500pure' that chooses the action with highest probability). The second is commonly used in poker AI: NashConv(σ) = Σ_i max_{σ'_i ∈ Σ_i} u_i(σ'_i, σ_{−i}) − u_i(σ), representing how much can be gained by deviating to their best response (unilaterally), a value that can be interpreted as a distance from a Nash equilibrium (called exploitability in the two-player setting). NashConv is easy to compute in small enough games [45]; for CFR's values see Appendix E.1.
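For small normal-form games, NashConv can be computed directly from the payoff matrices. A minimal two-player sketch (our illustration; the paper evaluates the sequential Leduc game, not a matrix game):

```python
def nash_conv(A, B, s1, s2):
    """NashConv(sigma) = sum_i [ max_{s'_i} u_i(s'_i, sigma_{-i}) - u_i(sigma) ]
    for a two-player normal-form game with payoff matrices (A, B)."""
    rows, cols = range(len(A)), range(len(A[0]))
    u1 = sum(s1[i] * s2[j] * A[i][j] for i in rows for j in cols)
    u2 = sum(s1[i] * s2[j] * B[i][j] for i in rows for j in cols)
    br1 = max(sum(s2[j] * A[i][j] for j in cols) for i in rows)
    br2 = max(sum(s1[i] * B[i][j] for i in rows) for j in cols)
    return (br1 - u1) + (br2 - u2)

A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # RPS for player 1
B = [[-a for a in row] for row in A]       # zero-sum
uniform = [1 / 3] * 3
```

The uniform profile is the RPS equilibrium, so its NashConv is 0, while both players deviating profitably from (Rock, Rock) yields a NashConv of 2.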
Effect of Exploration and Meta-Strategy Overview. We now analyze the effect of the various meta-strategies and exploration parameters. In Figure 4, we measure the mean area-under-the-curve (MAUC) of the NashConv values for the last (right-most) 32 values in the NashConv graph, with an exploration rate of γ = 0.4. Figures for the other values of γ are in Appendix E, but we found this value of γ works best for minimizing NashConv. Also, we found that decoupled replicator dynamics works best, followed by decoupled regret-matching and Exp3. Also, it seems that the higher the level, the lower the resulting NashConv value is, with diminishing improvements. For exploitation, we found that γ = 0.1 was best, but the meta-solvers seemed to have little effect (see Figure 10).
Comparison to Neural Fictitious Self-Play. We now compare to Neural Fictitious Self-Play
(NFSP) [40], an implementation of fictitious play in sequential games using reinforcement learning.
Note that NFSP, PSRO, and DCH are all sample-based learning algorithms that use general function
approximation, whereas CFR is a tabular method that requires a full game-tree pass per iteration.
NashConv graphs are shown for {2,3}-player in Figure 5, and performance vs. fixed bots in Figure 6.
Figure 4: (a) Effect of DCH parameters on NashConv in 2 player Leduc Poker. Left: decoupled PRD,
Middle: decoupled RM, Right: Exp3, and (b) MAUC of the exploitation graph against cfr500.
(Panels: (a) 2 players; (b) 3 players.)
Figure 5: Exploitability for NFSP x DCH x PSRO.
(Panels: (a) random bots as reference set; (b) 2-player CFR500 bots as reference set; (c) 3-player CFR500 bots as reference set.)
Figure 6: Evaluation against fixed set of bots. Each data point is an average of the four latest values.
We observe that DCH (and PSRO) converge faster than NFSP at the start of training, possibly due to
a better meta-strategy than the uniform random one used in fictitious play. The convergence curves
eventually plateau: DCH in two-player is most affected, possibly due to the asynchronous nature of
the updates, and NFSP converges to a lower exploitability in later episodes. We believe that this is
due to NFSP's ability to learn a more accurate mixed average strategy at states far down in the tree,
which is particularly important in poker, whereas DCH and PSRO mix at the top over full policies.
On the other hand, we see that PSRO/DCH are able to achieve higher performance against the
fixed players. Presumably, this is because the policies produced by PSRO/DCH are better able to
recognize flaws in the weaker opponent's policies, since the oracles are specifically trained for this,
and dynamically adapt to the exploitative response during the episode. So, NFSP is computing a safe
equilibrium while PSRO/DCH may be trading convergence precision for the ability to adapt to a range
of different play observed during training, in this context computing a robust counter-strategy [44, 24].
5 Conclusion and Future Work
In this paper, we quantify a severe problem with independent reinforcement learners, joint policy correlation (JPC), which limits the generality of these approaches. We describe a generalized algorithm for multiagent reinforcement learning that subsumes several previous algorithms. In our
experiments, we show that PSRO/DCH produces general policies that significantly reduce JPC in
partially-observable coordination games, and robust counter-strategies that safely exploit opponents
in a common competitive imperfect information game. The generality offered by PSRO/DCH can
be seen as a form of ?opponent/teammate regularization?, and has also been observed recently in
practice [66, 5]. We emphasize the game-theoretic foundations of these techniques, which we hope
will inspire further investigation into algorithm development for multiagent reinforcement learning.
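The generalized loop can be sketched on a matrix game. This is an illustrative simplification, not the paper's implementation: the best-response oracle is exact rather than a deep RL learner, and the meta-solver is fictitious play (empirical mixture over the growing policy populations) rather than the Nash or projected-replicator-dynamics solvers used in the experiments.

```python
import numpy as np

def psro_fp(payoff, iters=300):
    """PSRO-style loop on a two-player zero-sum matrix game.
    Each player keeps a multiset of pure best responses; the meta-strategy
    is the empirical mixture over that population (fictitious play as the
    meta-solver), and each iteration the oracle step adds an exact best
    response to the opponent's current meta-strategy."""
    row_counts = np.zeros(payoff.shape[0]); row_counts[0] = 1
    col_counts = np.zeros(payoff.shape[1]); col_counts[0] = 1
    for _ in range(iters):
        row_mix = row_counts / row_counts.sum()
        col_mix = col_counts / col_counts.sum()
        # Oracle step: each player adds a best response to the other's mix.
        row_counts[int(np.argmax(payoff @ col_mix))] += 1
        col_counts[int(np.argmin(row_mix @ payoff))] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
# In zero-sum games fictitious play converges, so the empirical mixtures
# should slowly approach the uniform equilibrium of rock-paper-scissors.
print(psro_fp(rps))
```

In the full algorithm the "population" entries are trained policies and the payoff entries are estimated by simulation, but the expand-solve-respond structure is the same.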
In future work, we will consider maintaining diversity among oracles via loss penalties based on policy
dissimilarity, general response-graph topologies, environments such as emergent-language games [58]
and RTS games [96, 84], and other architectures for prediction of behavior, such as opponent
modeling [37] and imagining future states via auxiliary tasks [43]. We would also like to investigate
fast online adaptation [1, 21] and the relationship to computational Theory of Mind [106, 4], as well
as generalized (transferable) oracles over similar opponent policies using successor features [6].
Acknowledgments. We would like to thank DeepMind and Google for providing an excellent
research environment that made this work possible. Also, we would like to thank the anonymous
reviewers and several people for helpful comments: Johannes Heinrich, Guy Lever, Remi Munos,
Joel Z. Leibo, Janusz Marecki, Tom Schaul, Noam Brown, Kevin Waugh, Georg Ostrovski, Sriram
Srinivasan, Neil Rabinowitz, and Vicky Holgate.
References
[1] Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments. CoRR, abs/1710.03641,
2017.
[2] Christopher Amato and Frans A. Oliehoek. Scalable planning and learning for multiagent POMDPs. In
AAAI15, pages 1995?2002, January 2015.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial
multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer
Science, pages 322?331, 1995.
[4] C.L. Baker, R.R. Saxe, and J.B. Tenenbaum. Bayesian theory of mind: Modeling joint belief-desire
attribution. In Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society, pages
2469?2474, 2011.
[5] Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity
via multi-agent competition. CoRR, abs/1710.03748, 2017.
[6] André Barreto, Will Dabney, Rémi Munos, Jonathan Hunt, Tom Schaul, David Silver, and Hado van
Hasselt. Transfer in reinforcement learning with successor features and generalised policy improvement.
In Proceedings of the Thirty-First Annual Conference on Neural Information Processing Systems (NIPS
2017), 2017. To appear. Preprint available at http://arxiv.org/abs/1606.05312.
[7] Daan Bloembergen, Karl Tuyls, Daniel Hennes, and Michael Kaisers. Evolutionary dynamics of multiagent learning: A survey. J. Artif. Intell. Res. (JAIR), 53:659?697, 2015.
[8] A. Blum and Y. Mansour. Learning, regret minimization, and equilibria. In Algorithmic Game Theory,
chapter 4. Cambridge University Press, 2007.
[9] Branislav Bosansky, Viliam Lisy, Jiri Cermak, Roman Vitek, and Michal Pechoucek. Using double-oracle
method and serialized alpha-beta search for pruning in simultaneous moves games. In Proceedings of the
Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), 2013.
[10] Branislav Bošanský, Viliam Lisý, Marc Lanctot, Jiří Čermák, and Mark H.M. Winands. Algorithms for
computing strategies in two-player simultaneous move games. Artificial Intelligence, 237:1–40, 2016.
[11] Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up Limit Hold'em Poker
is solved. Science, 347(6218):145?149, January 2015.
[12] Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial
Intelligence, 136:215?250, 2002.
[13] G. W. Brown. Iterative solutions of games by fictitious play. In T.C. Koopmans, editor, Activity Analysis
of Production and Allocation, pages 374?376. John Wiley & Sons, Inc., 1951.
[14] Noam Brown, Sam Ganzfried, and Tuomas Sandholm. Hierarchical abstraction, distributed equilibrium
computation, and post-processing, with application to a champion no-limit Texas Hold'em agent. In
Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages
7?15. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[15] Noam Brown and Tuomas Sandholm. Safe and nested subgame solving for imperfect-information games.
CoRR, abs/1705.02955, 2017.
[16] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit
problems. Foundations and Trends in Machine Learning, 5(1):1?122, 2012.
[17] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforcement
learning. IEEE Transaction on Systems, Man, and Cybernetics, Part C: Applications and Reviews,
38(2):156?172, 2008.
[18] Colin F. Camerer, Teck-Hua Ho, and Juin-Kuan Chong. A cognitive hierarchy model of games. The
Quarterly Journal of Economics, 2004.
[19] C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In
Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), pages 746?752,
1998.
[20] M. A. Costa-Gomes and V. P. Crawford. Cognition and behavior in two-person guessing games: An
experimental study. The American Economy Review, 96(6):1737?1768, 2006.
[21] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep
networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference
on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126?1135,
International Convention Centre, Sydney, Australia, 06?11 Aug 2017. PMLR.
[22] Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H. S. Torr, Pushmeet
Kohli, and Shimon Whiteson. Stabilising experience replay for deep multi-agent reinforcement learning.
In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), 2017.
[23] Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate
with deep multi-agent reinforcement learning. In 30th Conference on Neural Information Processing
Systems (NIPS 2016), 2016.
[24] Sam Ganzfried and Tuomas Sandholm. Safe opponent exploitation. ACM Transactions on Economics
and Computation (TEAC), 3(2):1?28, 2015. Special issue on selected papers from EC-12.
[25] N. Gatti, F. Panozzo, and M. Restelli. Efficient evolutionary dynamics with extensive-form games. In
Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 335?341, 2013.
[26] Richard Gibson. Regret minimization in non-zero-sum games with applications to building champion
multiplayer computer poker agents. CoRR, abs/1305.0034, 2013.
[27] A. Gilpin. Algorithms for Abstracting and Solving Imperfect Information Games. PhD thesis, Carnegie
Mellon University, 2009.
[28] Gmytrasiewicz and Doshi. A framework for sequential planning in multiagent settings. Journal of
Artificial Intelligence Research, 24:49?79, 2005.
[29] Amy Greenwald and Keith Hall. Correlated Q-learning. In Proceedings of the Twentieth International
Conference on Machine Learning (ICML 2003), pages 242?249, 2003.
[30] Amy Greenwald, Jiacui Li, and Eric Sodomka. Solving for best responses and equilibria in extensive-form
games with reinforcement learning methods. In Rohit Parikh on Logic, Language and Society, volume 11
of Outstanding Contributions to Logic, pages 185?226. Springer International Publishing, 2017.
[31] Audrunas Gruslys, Mohammad Gheshlaghi Azar, Marc G. Bellemare, and Remi Munos. The Reactor: A
sample-efficient actor-critic architecture. CoRR, abs/1704.04651, 2017.
[32] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica,
68(5):1127?1150, 2000.
[33] Sergiu Hart and Andreu Mas-Colell. A reinforcement procedure leading to correlated equilibrium. In
Economics Essays: A Festschrift for Werner Hildenbrand. Springer Berlin Heidelberg, 2001.
[34] Jason S. Hartford, James R. Wright, and Kevin Leyton-Brown. Deep learning for predicting human
strategic behavior. In Proceedings of the 30th Conference on Neural Information Processing Systems
(NIPS 2016), 2016.
[35] Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. In
Proceedings of the International Conference on Learning Representations (ICLR), May 2016.
[36] Matthew John Hausknecht. Cooperation and communication in multiagent deep reinforcement learning.
PhD thesis, University of Texas at Austin, Austin, USA, 2016.
[37] He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep reinforcement
learning. In In Proceedings of The 33rd International Conference on Machine Learning (ICML), pages
1804?1813, 2016.
[38] Nicolas Heess, Gregory Wayne, David Silver, Timothy P. Lillicrap, Tom Erez, and Yuval Tassa. Learning
continuous control policies by stochastic value gradients. In Advances in Neural Information Processing
Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015,
Montreal, Quebec, Canada, pages 2944?2952, 2015.
[39] Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In
Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), 2015.
[40] Johannes Heinrich and David Silver. Deep reinforcement learning from self-play in imperfect-information
games. CoRR, abs/1603.01121, 2016.
[41] Trong Nghia Hoang and Kian Hsiang Low. Interactive POMDP lite: Towards practical planning to predict
and exploit intentions for interacting with self-interested agents. In Proceedings of the Twenty-Third
International Joint Conference on Artificial Intelligence, IJCAI ?13, pages 2298?2305. AAAI Press, 2013.
[42] Josef Hofbauer and William H. Sandholm. On the global convergence of stochastic fictitious play.
Econometrica, 70(6):2265?2294, 11 2002.
[43] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David
Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. CoRR,
abs/1611.05397, 2016.
[44] M. Johanson, M. Zinkevich, and M. Bowling. Computing robust counter-strategies. In Advances in
Neural Information Processing Systems 20 (NIPS), pages 1128?1135, 2008. A longer version is available
as a University of Alberta Technical Report, TR07-15.
[45] Michael Johanson, Michael Bowling, Kevin Waugh, and Martin Zinkevich. Accelerating best response
calculation in large extensive games. In Proceedings of the Twenty-Second International Joint Conference
on Artificial Intelligence (IJCAI), pages 258?265, 2011.
[46] Michael Johanson, Neil Burch, Richard Valenzano, and Michael Bowling. Evaluating state-space
abstractions in extensive-form games. In Proceedings of the Twelfth International Conference on
Autonomous Agents and Multi-Agent Systems (AAMAS), 2013.
[47] Michael Bradley Johanson. Robust Strategies and Counter-Strategies: From Superhuman to Optimal
Play. PhD thesis, University of Alberta, 2016. http://johanson.ca/publications/theses/
2016-johanson-phd-thesis/2016-johanson-phd-thesis.pdf.
[48] Michael Kaisers and Karl Tuyls. Frequency adjusted multi-agent Q-learning. In 9th International
Conference on Autonomous Agents and Multiagent Systems AAMAS 2010), Toronto, Canada, May 10-14,
2010, Volume 1-3, pages 309?316, 2010.
[49] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[50] M. Kleiman-Weiner, M. K. Ho, J. L. Austerweil, M. L. Littman, and J. B. Tenenbaum. Coordinate to
cooperate or compete: abstract goals and joint intentions in social interaction. In Proceedings of the 38th
Annual Conference of the Cognitive Science Society, 2016.
[51] D. Koller, N. Megiddo, and B. von Stengel. Fast algorithms for finding randomized strategies in game
trees. In Proceedings of the 26th ACM Symposium on Theory of Computing (STOC '94), pages 750–759,
1994.
[52] Kostas Kouvaris, Jeff Clune, Loizos Kounios, Markus Brede, and Richard A. Watson. How evolution
learns to generalise: Using the principles of learning theory to understand the evolution of developmental
organisation. PLOS Computational Biology, 13(4):1?20, 04 2017.
[53] H. W. Kuhn. Extensive games and the problem of information. Contributions to the Theory of Games,
2:193?216, 1953.
[54] Marc Lanctot. Further developments of extensive-form replicator dynamics using the sequence-form
representation. In Proceedings of the Thirteenth International Conference on Autonomous Agents and
Multi-Agent Systems (AAMAS), pages 1257?1264, 2014.
[55] Marc Lanctot, Vinicius Zambaldi, Audrūnas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat,
David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning.
CoRR, abs/1711.00832, 2017.
[56] M. Lauer and M. Riedmiller. Reinforcement learning for stochastic cooperative multi-agent systems. In
Proceedings of the AAMAS '04, New York, 2004.
[57] Guillaume J. Laurent, Laëtitia Matignon, and Nadine Le Fort-Piat. The world of independent learners
is not Markovian. International Journal of Knowledge-Based and Intelligent Engineering Systems,
15(1):55?64, March 2011.
[58] Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language. In Proceedings of the International Conference on Learning Representations
(ICLR), April 2017.
[59] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436?444, 2015.
[60] Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent
reinforcement learning in sequential social dilemmas. In Proceedings of the Sixteenth International
Conference on Autonomous Agents and Multiagent Systems, 2017.
[61] David S. Leslie and Edmund J. Collins. Generalised weakened fictitious play. Games and Economic
Behavior, 56(2):285?298, 2006.
[62] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In In
Proceedings of the Eleventh International Conference on Machine Learning, pages 157?163. Morgan
Kaufmann, 1994.
[63] Michael L. Littman. Friend-or-foe Q-learning in general-sum games. In Proceedings of the Eighteenth
International Conference on Machine Learning, ICML '01, pages 322–328, San Francisco, CA, USA,
2001. Morgan Kaufmann Publishers Inc.
[64] Michael L. Littman. Reinforcement learning improves behaviour from evaluative feedback. Nature,
521:445?451, 2015.
[65] J. Long, N. R. Sturtevant, M. Buro, and T. Furtak. Understanding the success of perfect information Monte
Carlo sampling in game tree search. In Proceedings of the AAAI Conference on Artificial Intelligence,
pages 134?140, 2010.
[66] Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic
for mixed cooperative-competitive environments. CoRR, abs/1706.02275, 2017.
[67] M. Zinkevich, M. Bowling, and N. Burch. A new algorithm for generating equilibria in massive zero-sum
games. In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI-07), 2007.
[68] Janusz Marecki, Tapana Gupta, Pradeep Varakantham, Milind Tambe, and Makoto Yokoo. Not all agents
are equal: Scaling up distributed pomdps for agent networks. In Proceedings of the Seventh International
Joint Conference on Autonomous Agents and Multi-agent Systems, 2008.
[69] Vukosi N. Marivate. Improved Empirical Methods in Reinforcement Learning Evaluation. PhD thesis,
Rutgers, New Brunswick, New Jersey, 2015.
[70] L. Matignon, G. J. Laurent, and N. Le Fort-Piat. Independent reinforcement learners in cooperative
Markov games: a survey regarding coordination problems. The Knowledge Engineering Review, 27(01):1?
31, 2012.
[71] H.B. McMahan, G. Gordon, and A. Blum. Planning in the presence of cost functions controlled by an
adversary. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003),
2003.
[72] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim
Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning.
In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1928?1937,
2016.
[73] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare,
Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie,
Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and
Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529?533, 2015.
[74] Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis,
Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in
heads-up no-limit poker. Science, 358(6362), October 2017.
[75] R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement
learning. In Advances in Neural Information Processing Systems, 2016.
[76] Ranjit Nair. Coordinating multiagent teams in uncertain domains using distributed POMDPs. PhD thesis,
University of Southern California, Los Angeles, USA, 2004.
[77] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional
video prediction using deep networks in atari games. In Advances in Neural Information Processing
Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015,
Montreal, Quebec, Canada, pages 2863?2871, 2015.
[78] F.A. Oliehoek, E.D. de Jong, and N. Vlassis. The parallel Nash memory for asymmetric games. In
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2006.
[79] Frans A. Oliehoek and Christopher Amato. A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems. Springer, 2016. Authors' pre-print.
[80] Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P. How, and John Vian. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In Proceedings of the
34th International Conference on Machine Learning (ICML 2017), 2017.
[81] Liviu Panait, Karl Tuyls, and Sean Luke. Theoretical advantages of lenient learners: An evolutionary
game theoretic perspective. Journal of Machine Learning Research, 9:423?457, 2008.
[82] David C. Parkes and Michael P. Wellman. Economic reasoning and artificial intelligence. Science,
349(6245):267?272, 2015.
[83] Marc Ponsen, Karl Tuyls, Michael Kaisers, and Jan Ramon. An evolutionary game theoretic analysis of
poker strategies. Entertainment Computing, 2009.
[84] F. Sailer, M. Buro, and M. Lanctot. Adversarial planning through strategy simulation. In IEEE Symposium
on Computational Intelligence and Games (CIG), pages 37?45, 2007.
[85] Spyridon Samothrakis, Simon Lucas, Thomas Philip Runarsson, and David Robles. Coevolving GamePlaying Agents: Measuring Performance and Intransitivities. IEEE Transactions on Evolutionary
Computation, April 2013.
[86] Martin Schmid, Matej Moravcik, and Milan Hladik. Bounding the support size in extensive form
games with imperfect information. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial
Intelligence, 2014.
[87] L. Julian Schvartzman and Michael P. Wellman. Stronger CDA strategies through empirical gametheoretic analysis and reinforcement learning. In Proceedings of The 8th International Conference on
Autonomous Agents and Multiagent Systems (AAMAS), pages 249?256, 2009.
[88] Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improving convolutional neural networks via concatenated rectified linear units. In Proceedings of the International
Conference on Machine Learning (ICML), 2016.
[89] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations. Cambridge University Press, 2009.
[90] Yoav Shoham, Rob Powers, and Trond Grenager. If multi-agent learning is the answer, what is the
question? Artif. Intell., 171(7):365?377, 2007.
[91] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian
Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik
Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray
Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks
and tree search. Nature, 529:484?489, 2016.
[92] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez,
Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui,
Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go
without human knowledge. Nature, 550:354?359, 2017.
[93] S. Sukhbaatar, A. Szlam, and R. Fergus. Learning multiagent communication with backpropagation. In
30th Conference on Neural Information Processing Systems (NIPS 2016), 2016.
[94] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[95] Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru,
and Raul Vicente. Multiagent cooperation and competition with deep reinforcement learning. PLoS ONE,
12(4), 2017.
[96] Anderson Tavares, Hector Azpurua, Amanda Santos, and Luiz Chaimowicz. Rock, paper, starcraft:
Strategy selection in real-time strategy games. In The Twelfth AAAI Conference on Artificial Intelligence
and Interactive Digital Entertainment (AIIDE-16), 2016.
[97] Taylor and Jonker. Evolutionarily stable strategies and game dynamics. Mathematical Biosciences,
40:145?156, 1978.
[98] K. Tuyls and R. Westra. Replicator dynamics in discrete and continuous strategy spaces. In Agents,
Simulation and Applications, pages 218?243. Taylor and Francis, 2008.
[99] Karl Tuyls and Gerhard Weiss. Multiagent learning: Basics, challenges, and prospects. AI Magazine,
33(3):41?52, 2012.
[100] W. E. Walsh, R. Das, G. Tesauro, and J.O. Kephart. Analyzing complex strategic interactions in multiagent games. In AAAI-02 Workshop on Game Theoretic and Decision Theoretic Agents, 2002., 2002.
[101] Michael P. Wellman. Methods for empirical game-theoretic analysis. In Proceedings of the National
Conference on Artificial Intelligence (AAAI), 2006.
[102] S. Whiteson, B. Tanner, M. E. Taylor, and P. Stone. Protecting against evaluation overfitting in empirical
reinforcement learning. In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement
Learning (ADPRL), pages 120?127, 2011.
[103] James R. Wright and Kevin Leyton-Brown. Beyond equilibrium: Predicting human behavior in normal
form games. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10),
pages 901?907, 2010.
[104] Mason Wright. Using reinforcement learning to validate empirical game-theoretic analysis: A continuous
double auction study. CoRR, abs/1604.06710, 2016.
[105] Nikolai Yakovenko, Liangliang Cao, Colin Raffel, and James Fan. Poker-CNN: A pattern learning
strategy for making draws and bets in poker games using convolutional networks. In Proceedings of the
Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[106] Wako Yoshida, Ray J. Dolan, and Karl J. Friston. Game theory of mind. PLOS Computational Biology,
4(12):1?14, 12 2008.
[107] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete
information. In Advances in Neural Information Processing Systems 20 (NIPS 2007), 2008.
hierarchical:1 quarterly:1 indirectly:1 kwok:1 stig:1 ho:3 thomas:2 responding:1 top:1 exploit:3 question:2 print:1 guessing:1 poker:19 distance:1 card:6 simulated:1 fidjeland:1 code:1 relationship:1 providing:1 schrittwieser:2 executed:1 farquhar:1 ba:1 implementation:4 teh:1 markov:2 daan:2 payoff:9 head:2 gridworld:3 discovered:1 mansour:1 community:2 marecki:3 able:2 adversary:1 amanda:1 pattern:1 including:1 video:3 hot:1 natural:3 turning:2 dyk:1 dolan:1 asymptotic:1 loss:5 mixed:5 localized:1 foundation:5 agent:54 summary:1 asynchronous:2 side:1 benefit:1 ending:1 evaluating:1 author:2 social:2 alpha:1 observable:6 emphasize:1 pruning:1 conceptual:2 mauc:4 gomes:1 francisco:1 fergus:1 continuous:5 search:4 whiteson:3 marc:10 da:1 main:1 motivation:2 n2:1 ref:3 augmented:1 andrei:1 hsiang:1 wiley:1 kostas:1 exponential:2 mcmahan:1 touched:1 down:2 shimon:2 navigate:1 workshop:1 hui:1 execution:3 chen:1 driessche:2 springer:3 nair:1 quantifying:1 consequently:1 towards:1 piat:2 shared:1 man:1 feasible:1 torr:1 beattie:1 called:2 player:51 gilpin:1 people:1 |
Inverse Filtering for Hidden Markov Models
Robert Mattila
Department of Automatic Control
KTH Royal Institute of Technology
[email protected]
Vikram Krishnamurthy
Cornell Tech
Cornell University
[email protected]
Cristian R. Rojas
Department of Automatic Control
KTH Royal Institute of Technology
[email protected]
Bo Wahlberg
Department of Automatic Control
KTH Royal Institute of Technology
[email protected]
Abstract
This paper considers a number of related inverse filtering problems for hidden
Markov models (HMMs). In particular, given a sequence of state posteriors and
the system dynamics; i) estimate the corresponding sequence of observations,
ii) estimate the observation likelihoods, and iii) jointly estimate the observation
likelihoods and the observation sequence. We show how to avoid a computationally expensive mixed integer linear program (MILP) by exploiting the algebraic
structure of the HMM filter using simple linear algebra operations, and provide
conditions for when the quantities can be uniquely reconstructed. We also propose a
solution to the more general case where the posteriors are noisily observed. Finally,
the proposed inverse filtering algorithms are evaluated on real-world polysomnographic data used for automatic sleep segmentation.
1 Introduction
The hidden Markov model (HMM) is a cornerstone of statistical modeling [1-4]. In it, a latent (i.e.,
hidden) state evolves according to Markovian dynamics. The state of the system is only indirectly
observed via a sensor that provides noisy observations. The observations are sampled independently,
conditioned on the state of the system, according to observation likelihood probabilities. Of paramount
importance in many applications of HMMs is the classical stochastic filtering problem, namely:
Given observations from an HMM with known dynamics and observation likelihood
probabilities, compute the posterior distribution of the latent state.
Throughout the paper, we restrict our attention to discrete-time finite observation-alphabet HMMs.
For such HMMs, the solution to the filtering problem is a recursive algorithm known as the HMM
filter [1, 4].
In this paper, we consider the inverse of the above problem. In particular, our aim is to provide
solutions to the following inverse filtering problems:
Given a sequence of posteriors (or, more generally, noisily observed posteriors)
from an HMM with known dynamics, compute (estimate) the observation likelihood
probabilities and/or the observations that generated the posteriors.
To motivate these problems, we give several possible applications of our results below.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Applications The underlying idea of inverse filtering problems ("inform me about your state estimate and I will know your sensor characteristics, including your measurements") has potential
applications in, e.g., autonomous calibration of sensors, fault diagnosis, and detecting Bayesian
behavior in agents. In model-based fault-detection [5, 6], sensor information together with solutions
to related inverse filtering problems are used to detect abnormal behavior. (As trivial examples; i)
if the true sequence of observations is known from a redundant sensor, it can be compared to the
reconstructed sequence; if there is a miss-match, something is wrong, or ii) if multiple data batches
are available, then change detection can be performed on the sequence of reconstructed observation
likelihoods.) They are also of relevance in a revealed preference context in microeconomics where
the aim is to detect expected utility maximization behavior of an agent; estimating the posterior given
the agent's actions is a crucial step, see, e.g., [7].
Recent advances in wearables and smart-sensor technology have led to consumer grade products
(smart watches with motion and heart-beat monitoring, sleep trackers, etc.) that produce vast amounts
of personal data by performing state estimation. This information can serve as an indicator of health,
fitness and stress. It may be very difficult, or even impossible, to access the raw sensor data since the
sensor and state estimator usually are tightly integrated and encapsulated in intelligent sensor systems.
Inverse filtering provides a framework for reverse engineering and performing fault detection of such
sensors. In Section 5, we demonstrate our proposed solutions on a system that performs automatic
sequencing of sleep stages based on electroencephalogram (EEG) data ? the outputs of such an
automatic system are exactly posteriors over the different sleep stages [8].
Another important application of the inverse filtering problem arises in electronic warfare and cyber-physical security. How can one determine how accurate an enemy's sensors are? In such problems,
the state of the underlying Markov chain is usually known (a probing sequence), and one observes
actions taken by the enemy which are based on filtered posterior distributions. The aim is to estimate
the observation likelihood probabilities of the enemy, i.e., determine how accurate its sensors are.
Our contributions It is possible to obtain a solution to the inverse filtering problem for HMMs by
employing a brute-force approach (see Section 2.3): essentially by testing observations from the
alphabet, and at the same time finding system parameters consistent with the data. However, this
leads to a computationally expensive combinatorial optimization problem. Instead, we demonstrate
in this paper an efficient solution based on linear algebra by exploiting the inherent structure of the
problem and the HMM filter. In particular, the contributions of this paper are three-fold:
1. We propose analytical solutions to three inverse filtering problems for HMMs that avoid
computationally expensive mixed integer linear program (MILP) formulations. Moreover,
we establish theorems guaranteeing unique identifiability.
2. We consider the setting where the output of the HMM filter is corrupted by noise, and
propose an inverse filtering algorithm based on clustering.
3. We evaluate the algorithm on real-world data for automatic segmentation of the sleep cycle.
Related work There are only two known cases where the optimal filter allows a finite dimensional
characterization: the HMM filter for (discrete) HMMs, and the Kalman filter [9, 10] for linear
Gaussian state-space models. Inverse filtering problems for the Kalman filter have been considered
in, e.g., [5, 6, 10], however, inverse filtering for HMMs has, to the best knowledge of the authors,
received much less attention.
The inverse filtering problem has connections to a number of other inverse problems in various fields.
For example, in control theory, the fundamental inverse optimal control problem, whose formulation
dates back to 1964 [11], studies the question: given a system and a policy, for what cost criteria is the
policy optimal? In microeconomic theory, the related problem of revealed preferences [12] asks the
question: given a set of decisions made by an agent, is it possible to determine if a utility is being
maximized, and if so, which?
In machine learning, there are clear connections to, e.g., apprenticeship learning, imitation learning
and inverse reinforcement learning, see, e.g., [13?17], which recently have received much attention.
In these, the reward function of a Markov decision process (MDP) is learned by observing an expert
demonstrating the task that an agent wants to learn to perform.
The key difference between these works and our work is the set of system parameters we aim to learn.
2 Preliminaries
In this section, we formulate the inverse filtering problems, discuss how these can be solved using
combinatorial optimization, and state our assumptions formally. With regards to notation, all vectors
are column vectors, unless transposed. The vector $\mathbf{1}$ is the vector of all ones. $\dagger$ denotes the Moore-Penrose pseudoinverse.
2.1 Hidden Markov models (HMMs) and the HMM filter
We consider a discrete-time finite observation-alphabet HMM. Denote its state at time $k$ as $x_k \in \{1, \dots, X\}$ and the corresponding observation $y_k \in \{1, \dots, Y\}$. The underlying Markov chain $x_k$ evolves according to the row-stochastic transition probability matrix $P \in \mathbb{R}^{X \times X}$, where $[P]_{ij} = \Pr[x_{k+1} = j \mid x_k = i]$. The initial state $x_0$ is sampled from the probability distribution $\pi_0 \in \mathbb{R}^X$, where $[\pi_0]_i = \Pr[x_0 = i]$. The noisy observations of the underlying Markov chain are obtained from the row-stochastic observation likelihood matrix $B \in \mathbb{R}^{X \times Y}$, where $[B]_{ij} = \Pr[y_k = j \mid x_k = i]$ are the observation likelihood probabilities. We denote the columns of the observation likelihood matrix as $\{b_i\}_{i=1}^Y$, i.e., $B = [b_1 \ \dots \ b_Y]$.

In the classical stochastic filtering problem, the aim is to compute the posterior distribution $\pi_k \in \mathbb{R}^X$ of the latent state (Markov chain, in our case) at time $k$, given observations from the system up to time $k$. The HMM filter [1, 4] computes these posteriors via the following recursive update:

$$\pi_k = \frac{B_{y_k} P^T \pi_{k-1}}{\mathbf{1}^T B_{y_k} P^T \pi_{k-1}}, \qquad (1)$$

initialized by $\pi_0$, where $[\pi_k]_i = \Pr[x_k = i \mid y_1, \dots, y_k]$ is the posterior distribution at time $k$, $B_{y_k} = \mathrm{diag}(b_{y_k}) \in \mathbb{R}^{X \times X}$, and $\{y_k\}_{k=1}^N$ is a set of observations.
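As a concrete illustration of the recursion (1), here is a minimal NumPy sketch (our own code with hypothetical variable names, not code from the paper); observation symbols are 0-indexed:

```python
import numpy as np

def hmm_filter_step(pi_prev, P, B, y):
    """One update of the HMM filter, equation (1):
    pi_k = B_y P^T pi_{k-1} / (1^T B_y P^T pi_{k-1})."""
    unnormalized = B[:, y] * (P.T @ pi_prev)  # diag(b_y) P^T pi_{k-1}
    return unnormalized / unnormalized.sum()

# Two-state example: a sticky chain with an informative sensor.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
pi1 = hmm_filter_step(pi0, P, B, y=0)  # posterior after observing symbol 0
```

Note that the elementwise product with `B[:, y]` implements the multiplication by $B_{y_k} = \mathrm{diag}(b_{y_k})$ without forming the diagonal matrix.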
2.2 Inverse HMM filtering problem formulations
The inverse filtering problem for HMMs is not a single problem; multiple variants can be formulated depending on what information is available a priori. We pose and consider a number of variations of increasing levels of generality depending on what data we can extract from the sensor system. To restrict the scope of the paper, we assume throughout that the transition matrix $P$ is known, and is the same in both the system and the HMM filter (i.e., we do not consider miss-matched HMM filtering problems). Formally, the inverse filtering problems considered in this paper are as follows:
Problem 1 (Inverse filtering problem with unknown observations). Consider the known data $\mathcal{D} = \left\{ P, B, \{\pi_k\}_{k=0}^N \right\}$, where the posteriors have been generated by an HMM-filter sensor. Reconstruct the observations $\{y_k\}_{k=1}^N$.

Problem 2 (Inverse filtering problem with unknown sensor). Consider the known data $\mathcal{D} = \left\{ P, \{y_k\}_{k=1}^N, \{\pi_k\}_{k=0}^N \right\}$, where the posteriors have been generated by an HMM-filter sensor. Reconstruct the observation likelihood matrix $B$.

Combining these two formulations yields the general problem:

Problem 3 (Inverse filtering problem with unknown sensor and observations). Consider the known data $\mathcal{D} = \left\{ P, \{\pi_k\}_{k=0}^N \right\}$, where the posteriors have been generated by an HMM-filter sensor. Reconstruct the observations $\{y_k\}_{k=1}^N$ and the observation likelihood matrix $B$.
Finally, we consider the more general setting where the posteriors we obtain are corrupted by noise (due to, e.g., quantization, measurement or model uncertainties). In particular, we consider the case where the following sequence of noisy posteriors is obtained over time:

$$\tilde{\pi}_k = \pi_k + \text{noise}, \qquad (2)$$

from the sensor system. We state directly the generalization of Problem 3 (the corresponding generalizations of Problems 1 and 2 follow as special cases):

Problem 4 (Noise-corrupted inverse filtering problem with unknown sensor and observations). Consider the data $\mathcal{D} = \left\{ P, \{\tilde{\pi}_k\}_{k=0}^N \right\}$, where the posteriors $\pi_k$ have been generated by an HMM-filter sensor, but we obtain noise-corrupted measurements $\tilde{\pi}_k$. Estimate the observations $\{y_k\}_{k=1}^N$ and the observation likelihood matrix $B$.
2.3 Inverse filtering as an optimization problem
It is possible to formulate Problems 1-4 as optimization problems of increasing levels of generality. As a first step, rewrite the HMM filter equation (1) as:¹

$$(1) \iff b_{y_k}^T P^T \pi_{k-1} \, \pi_k = \mathrm{diag}(b_{y_k}) P^T \pi_{k-1}. \qquad (3)$$

In Problem 3 we need to find what observation occurred at each time instant (a combinatorial problem), and at the same time reconstruct an observation likelihood matrix consistent with the data. To be consistent with the data, equation (3) has to be satisfied. This feasibility problem can be formulated as the following mixed-integer linear program (MILP):

$$\min_{\{y_k\}_{k=1}^N, \, \{b_i\}_{i=1}^Y} \quad \sum_{k=1}^N \left\| b_{y_k}^T P^T \pi_{k-1} \, \pi_k - \mathrm{diag}(b_{y_k}) P^T \pi_{k-1} \right\|$$
$$\text{s.t.} \quad y_k \in \{1, \dots, Y\}, \quad \text{for } k = 1, \dots, N,$$
$$b_i \geq 0, \quad \text{for } i = 1, \dots, Y,$$
$$[b_1 \ \dots \ b_Y] \mathbf{1} = \mathbf{1}, \qquad (4)$$
where the choice of norm is arbitrary since for noise-free data it is possible to exactly fit observations and an observation likelihood matrix. In Problem 1, the $b_i$'s are dropped as optimization variables and the problem reduces to an integer program (IP). In Problem 2, where the sequence of observations is known, the problem reduces to a linear program (LP).

Despite the ease of formulation, the downside of this approach is that, even though Problems 1 and 2 are computationally tractable, the MILP formulation of Problem 3 can become computationally very expensive for larger data sets. In the following sections, we will outline how the problems can be solved efficiently by exploiting the structure of the HMM filter.
2.4 Assumptions
Before providing solutions to Problems 1-4, we state the assumptions that the HMMs in this paper need to satisfy to guarantee unique solutions. The first assumption serves as a proxy for ergodicity of the HMM and the HMM filter; it is a common assumption in statistical inference for HMMs [18, 4].
Assumption 1 (Ergodicity). The transition matrix P and the observation matrix B are elementwise
(strictly) positive.
The second assumption is a natural rank assumption on the observation likelihoods. The assumption
says that the conditional distribution of any observation is not a linear combination of the conditional
distributions of any other observations.
Assumption 2 (Distinguishable observation likelihoods). The observation likelihood matrix B is full
column rank.
We will see that this assumption can be relaxed to the following assumption in problems where only
the sequence of observations is to be reconstructed:
Assumption 3 (Non-parallel observation likelihoods). No pair of columns of the observation likelihood matrix $B$ is colinear, i.e., $b_i \neq \alpha b_j$ for any real number $\alpha$ and any $i \neq j$.
Without Assumption 3, it is impossible to distinguish between observation i and observation j. Note
also that Assumption 2 implies Assumption 3.
3 Solution to the inverse filtering problem for HMMs in absence of noise
In this section, we detail our solutions to Problems 1-3. We first provide the following two useful lemmas that will be key to the solutions for Problems 1-4. They give an alternative characterization of the HMM-filter update equation. (Note that all proofs are in the supplementary material.)

¹ Multiplication by the denominator is allowed under Assumption 1; see below.
Lemma 1. The HMM-filter update equation (3) can equivalently be written

$$\left[ \pi_k (P^T \pi_{k-1})^T - \mathrm{diag}(P^T \pi_{k-1}) \right] b_{y_k} = 0. \qquad (5)$$

The second lemma characterizes the solutions to (5).

Lemma 2. Under Assumption 1, the nullspace of the $X \times X$ matrix

$$\pi_k (P^T \pi_{k-1})^T - \mathrm{diag}(P^T \pi_{k-1}) \qquad (6)$$

is of dimension one for $k > 1$.
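Lemmas 1 and 2 can be checked numerically. The sketch below (our own illustration, assuming NumPy; the model instance is made up) forms the coefficient matrix in (6) from one filter update and verifies both that $b_{y_k}$ lies in its nullspace and that the nullspace is one-dimensional:

```python
import numpy as np

def coefficient_matrix(pi_k, pi_prev, P):
    """The X-by-X matrix in equation (6):
    pi_k (P^T pi_{k-1})^T - diag(P^T pi_{k-1})."""
    v = P.T @ pi_prev
    return np.outer(pi_k, v) - np.diag(v)

# A strictly positive HMM (Assumption 1) with full-column-rank B (Assumption 2).
P = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
pi0 = np.array([0.3, 0.4, 0.3])  # strictly positive prior

# One HMM-filter update, equation (1), for observation y1.
y1 = 2
unnorm = B[:, y1] * (P.T @ pi0)
pi1 = unnorm / unnorm.sum()

M = coefficient_matrix(pi1, pi0, P)
singular_values = np.linalg.svd(M, compute_uv=False)  # sorted descending
```

The smallest singular value of `M` is numerically zero (the nullspace contains $b_{y_1}$), while the second smallest is strictly positive (the nullspace is one-dimensional).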
3.1 Solution to the inverse filtering problem with unknown observations
In the formulation of Problem 1, we assumed that the observation likelihoods $B$ were known, and aimed to reconstruct the sequence of observations from the posterior data. Equation (5) constrains which columns of the observation matrix $B$ are consistent with the update of the posterior vector at each time instant. Formally, any sequence

$$\hat{y}_k \in \left\{ y \in \{1, \dots, Y\} : \left[ \pi_k (P^T \pi_{k-1})^T - \mathrm{diag}(P^T \pi_{k-1}) \right] b_y = 0 \right\}, \qquad (7)$$

for $k = 1, \dots, N$, is consistent with the HMM filter posterior updates. (Recall that $b_y$ denotes column $y$ of the observation matrix $B$.) Since the problems (7) are decoupled in time $k$, they can trivially be solved in parallel.

Theorem 1. Under Assumptions 1 and 3, the set on the right-hand side of equation (7) is a singleton, and is equal to the true observation, i.e.,

$$\hat{y}_k = y_k, \qquad (8)$$

for $k > 1$.
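The characterization (7) translates directly into code. The following sketch (our illustration, not the authors' implementation; it picks the column of $B$ with the smallest residual, which by Theorem 1 is the unique consistent one) recovers the observation sequence from noise-free posteriors:

```python
import numpy as np

def reconstruct_observations(posteriors, P, B):
    """Solve Problem 1 via equation (7): at each time k, select the column
    of B lying in the nullspace of the coefficient matrix (6)."""
    estimates = []
    for k in range(1, len(posteriors)):
        v = P.T @ posteriors[k - 1]
        M = np.outer(posteriors[k], v) - np.diag(v)
        residuals = np.linalg.norm(M @ B, axis=0)  # one residual per column b_y
        estimates.append(int(np.argmin(residuals)))
    return estimates

# Generate posteriors by running the HMM filter on a known observation sequence.
P = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
true_obs = [0, 2, 1, 1, 0, 2, 2, 1, 0]
posteriors = [np.array([0.3, 0.4, 0.3])]
for y in true_obs:
    unnorm = B[:, y] * (P.T @ posteriors[-1])
    posteriors.append(unnorm / unnorm.sum())

recovered = reconstruct_observations(posteriors, P, B)
```

On noise-free data the recovered sequence matches the true one exactly; as stated after equation (7), the per-step problems are independent and could be solved in parallel.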
3.2 Solution to the inverse filtering problem with unknown sensor
The second inverse filtering problem we consider is when the sequence of observations is known, but the observation likelihoods $B$ are unknown (Problem 2). This problem can be solved by exploiting Lemmas 1 and 2.

Computing a basis for the nullspace of the coefficient matrix in formulation (5) of the HMM filter recovers, according to Lemmas 1 and 2, the direction of one column of $B$. In particular, the direction of the column corresponding to observation $y_k$, i.e., $b_{y_k}$. From such basis vectors, we can construct a matrix $C \in \mathbb{R}^{X \times Y}$ where the $y$th column is aligned with $b_y$. Note that to be able to fully construct this matrix, every observation from the set $\{1, \dots, Y\}$ needs to have been observed at least once.

Due to being basis vectors for nullspaces, the columns of $C$ are only determined up to scalings, so we need to exploit the structure of the observation matrix $B$ to properly normalize them. To form an estimate $\hat{B}$ from $C$, we employ that the observation likelihood matrix is row-stochastic. This means that we should rescale each column:

$$\hat{B} = C \, \mathrm{diag}(\alpha) \qquad (9)$$

for some $\alpha \in \mathbb{R}^Y$, such that $\hat{B} \mathbf{1} = \mathbf{1}$. Details are provided in the following theorem.

Theorem 2. If Assumption 1 holds, and every possible observation has been observed (i.e., $\{1, \dots, Y\} \subseteq \{y_k\}_{k=1}^N$), then:

i) there exists $\alpha \in \mathbb{R}^Y$ such that $\hat{B} = B$,

ii) if Assumption 2 holds, then the choice of $\alpha$ is unique, and $\hat{B}$ is equal to $B$. In particular, $\alpha = C^\dagger \mathbf{1}$.
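A minimal implementation of this reconstruction (our own sketch, assuming NumPy; the sign of each nullspace basis vector is fixed using the positivity of $B$, a detail the SVD does not resolve on its own) could look like:

```python
import numpy as np

def reconstruct_B(posteriors, observations, P, Y):
    """Solve Problem 2: the nullspace of the coefficient matrix (6) gives the
    direction of column b_{y_k}; stack the directions into C and rescale so
    that the rows of B-hat sum to one (alpha = C^dagger 1, Theorem 2)."""
    X = len(posteriors[0])
    C = np.zeros((X, Y))
    for k in range(1, len(posteriors)):
        v = P.T @ posteriors[k - 1]
        M = np.outer(posteriors[k], v) - np.diag(v)
        c = np.linalg.svd(M)[2][-1]          # basis of the 1-D nullspace
        c = c if c.sum() > 0 else -c         # true columns are positive
        C[:, observations[k - 1]] = c
    alpha = np.linalg.pinv(C) @ np.ones(X)   # normalization: B-hat 1 = 1
    return C * alpha                          # C diag(alpha)

P = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
B_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.2, 0.2, 0.6]])
obs = [0, 2, 1, 0, 1, 2]                      # every symbol appears at least once
posteriors = [np.array([0.3, 0.4, 0.3])]
for y in obs:
    unnorm = B_true[:, y] * (P.T @ posteriors[-1])
    posteriors.append(unnorm / unnorm.sum())

B_hat = reconstruct_B(posteriors, obs, P, Y=3)
```

Broadcasting `C * alpha` multiplies column $j$ of $C$ by $\alpha_j$, i.e., it forms $C \, \mathrm{diag}(\alpha)$ without building the diagonal matrix.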
3.3 Solution to the inverse filtering problem with unknown sensor and observations
Finally, we turn to the general formulation in which we consider the combination of the previous two problems: both the sequence of observations and the observation likelihoods are unknown (Problem 3). Again, the solution follows from Lemmas 1 and 2. Note that there will be a degree of freedom since we can arbitrarily relabel each observation and correspondingly permute the columns of the observation likelihood matrix.

As in the solution to Problem 2, computing a basis vector, say $\hat{c}_k$, for the nullspace of the coefficient matrix in equation (5) recovers the direction of one column of the $B$ matrix. However, since the sequence of observations is unknown, we do not know which column. To circumvent this, we concatenate such basis vectors in a matrix²

$$\tilde{C} = [\hat{c}_2 \ \dots \ \hat{c}_N] \in \mathbb{R}^{X \times (N-1)}. \qquad (10)$$

For sufficiently large $N$ (essentially when every possible observation has been processed by the HMM filter), the matrix $\tilde{C}$ in (10) will contain $Y$ columns out of which no pair is colinear (due to Assumption 3). All the columns that are parallel correspond to one particular observation. Let $\{\kappa_1, \dots, \kappa_Y\}$ be the indices of $Y$ such columns, and construct

$$C = \tilde{C} \Sigma \qquad (11)$$

using the selection matrix

$$\Sigma = [e_{\kappa_1} \ \dots \ e_{\kappa_Y}] \in \mathbb{R}^{(N-1) \times Y}, \qquad (12)$$

where $e_i$ is the $i$th Cartesian basis vector.

Lemma 3. Under Assumption 1 and Assumption 3, the expected number of samples needed to be able to construct the selection matrix $\Sigma$ is upper-bounded by

$$\beta^{-1} \left( 1 + 1/2 + \dots + 1/Y \right), \qquad (13)$$

where $B \geq \beta > 0$ elementwise.

With $C$ constructed in (11), we have obtained the direction of each column of the observation matrix. However, as before, they need to be properly normalized. For this, we exploit the sum-to-one property of the observation matrix as in the previous section. Let

$$\hat{B} = C \, \mathrm{diag}(\alpha), \qquad (14)$$

for $\alpha \in \mathbb{R}^Y$, such that $\hat{B} \mathbf{1} = \mathbf{1}$. Details on how to find $\alpha$ are provided in the theorem below.

This solves the first part of the problem, i.e., reconstructing the observation matrix. Secondly, to recover the sequence of observations, take

$$\hat{y}_k \in \left\{ y \in \{1, \dots, Y\} : \hat{b}_y = \lambda \hat{c}_k \text{ for some real number } \lambda \right\}, \qquad (15)$$

for $k > 1$. In words: check which column of $\hat{B}$ the nullspace of the HMM-filter coefficient matrix (6) is colinear with at each time instant.

Theorem 3. If Assumptions 1 and 3 hold, and the number of samples $N$ is sufficiently large (see Lemma 3), then:

i) there exists $\alpha \in \mathbb{R}^Y$ in equation (14) such that $\hat{B} = B\mathcal{P}$, where $\mathcal{P}$ is a permutation matrix.

ii) the set on the right-hand side of equation (15) is a singleton. Moreover, the reconstructed observations $\hat{y}_k$ are, up to relabellings corresponding to $\mathcal{P}$, equal to the true observations $y_k$.

iii) if Assumption 2 holds, then the choice of $\alpha$ is unique, and $\hat{B} = B\mathcal{P}$. In particular, $\alpha = C^\dagger \mathbf{1}$.

² We start with $\hat{c}_2$, since we make no assumption on the positivity of $\pi_0$; see the proof of Lemma 2.
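The steps above can be sketched end-to-end for noise-free data (our illustration, not the authors' code; colinear directions are grouped by a simple cosine test instead of an explicit selection matrix, and labels are recovered only up to the permutation of Theorem 3):

```python
import numpy as np

def inverse_filter(posteriors, P):
    """Noise-free Problem 3 sketch: collect nullspace directions (eq. (10)),
    group colinear directions (one group per symbol, replacing the selection
    matrix of eqs. (11)-(12)), normalize via alpha = C^dagger 1 (eq. (14)),
    and read off observation labels by direction matching (eq. (15))."""
    X = len(posteriors[0])
    reps, labels = [], []
    for k in range(1, len(posteriors)):
        v = P.T @ posteriors[k - 1]
        M = np.outer(posteriors[k], v) - np.diag(v)
        c = np.linalg.svd(M)[2][-1]        # basis of the 1-D nullspace
        c = c if c.sum() > 0 else -c       # true columns are positive
        c = c / np.linalg.norm(c)
        for j, r in enumerate(reps):
            if c @ r > 1 - 1e-8:           # colinear: same symbol
                labels.append(j)
                break
        else:                              # new direction: new symbol
            reps.append(c)
            labels.append(len(reps) - 1)
    C = np.column_stack(reps)
    alpha = np.linalg.pinv(C) @ np.ones(X)
    return C * alpha, labels

P = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
B_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.2, 0.2, 0.6]])
obs = [1, 0, 2, 1, 0, 2, 0]
posteriors = [np.array([0.3, 0.4, 0.3])]
for y in obs:
    unnorm = B_true[:, y] * (P.T @ posteriors[-1])
    posteriors.append(unnorm / unnorm.sum())

B_hat, labels = inverse_filter(posteriors, P)
```

Here symbols are labeled in order of first appearance, so with `obs` starting 1, 0, 2 the recovered matrix equals $B$ with columns permuted accordingly.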
4 Solution to the inverse filtering problem for HMMs in presence of noise
In this section, we discuss the more general setting where the posteriors obtained from the sensor system are corrupted by noise. We will see that this problem naturally fits in a clustering framework since every posterior update will provide us with a noisy estimate of the direction of one column of the observation likelihood matrix. We consider an additive noise model of the following form:

Assumption 4 (Noise model). The posteriors are corrupted by additive noise $w_k$:

$$\tilde{\pi}_k = \pi_k + w_k, \qquad (16)$$

such that $\mathbf{1}^T \tilde{\pi}_k = 1$ and $\tilde{\pi}_k > 0$.

This noise model is valid, for example, when each observed posterior vector has been subsequently renormalized after noise that originates from quantization or measurement errors has been added.
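For simulation purposes, posteriors satisfying Assumption 4 can be generated as in the following sketch (our own noise generator, not one prescribed by the paper; the clipping constant is an arbitrary choice to keep strict positivity):

```python
import numpy as np

def perturb_posterior(pi, sigma, rng):
    """Produce a noisy posterior consistent with Assumption 4: add Gaussian
    noise, clip to keep strict positivity, then renormalize to sum to one."""
    noisy = np.clip(pi + sigma * rng.standard_normal(pi.shape), 1e-12, None)
    return noisy / noisy.sum()

rng = np.random.default_rng(0)
pi = np.array([0.2, 0.5, 0.3])
pi_tilde = perturb_posterior(pi, 0.01, rng)
```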
In the solution proposed in Section 3.3 for the noise-free case, the matrix $\tilde{C}$ in equation (10) was constructed by concatenating basis vectors for the nullspaces of the coefficient matrix in equation (5). With perturbed posterior vectors, the corresponding system of equations becomes

$$\left[ \tilde{\pi}_k (P^T \tilde{\pi}_{k-1})^T - \mathrm{diag}(P^T \tilde{\pi}_{k-1}) \right] \bar{c}_k = 0, \qquad (17)$$

where $\bar{c}_k$ is now a perturbed (and scaled) version of $b_{y_k}$. That this equation is valid is guaranteed by the generalization of Lemma 2:

Lemma 4. Under Assumptions 1 and 4, the nullspace of the matrix

$$\tilde{\pi}_k (P^T \tilde{\pi}_{k-1})^T - \mathrm{diag}(P^T \tilde{\pi}_{k-1}) \qquad (18)$$

is of dimension one for $k > 1$.

Remark 1. In case Assumption 4 does not hold, the problem can instead be interpreted as a perturbed eigenvector problem. The vector $\bar{c}_k$ should then be taken as the eigenvector corresponding to the smallest eigenvalue.
Lemma 4 says that we can construct a matrix $\tilde{C}$ (analogous to $\tilde{C}$ in Section 3.3) by concatenating the basis vectors from the one-dimensional nullspaces in (17). Due to the perturbations, every solution to equation (17) will be a perturbed version of the solution to the corresponding noise-free version of the equation. This means that it will not be possible to construct a selection matrix $\Sigma$ as was done for $\tilde{C}$ in equation (12). However, because there are only $Y$ unique solutions to the noise-free equations (5), it is natural to circumvent this (assuming that the perturbations are small) by clustering the columns of $\tilde{C}$ into $Y$ clusters. As the columns of $\tilde{C}$ are only unique up to scaling, the clustering has to be performed with respect to their angular separations (using, e.g., the spherical k-means algorithm [19]).

Let $C \in \mathbb{R}^{X \times Y}$ be the matrix of the $Y$ centroids resulting from running a clustering algorithm on the columns of $\tilde{C}$. Each centroid can be interpreted as a noisy estimate of one column of the observation likelihood matrix. To obtain a properly normalized estimate of the observation likelihood matrix, we take

$$\hat{B} = CA, \qquad (19)$$

where $A \in \mathbb{R}^{Y \times Y}$. Note that, since $C$ now contains noisy estimates of the directions of the columns of the observation likelihood matrix, we are not certain to be able to properly normalize it by purely rescaling each column (i.e., taking $A$ to be a diagonal matrix as was done in Sections 3.2 and 3.3). A logical choice is the solution to the following LP:

$$\min_{A \in \mathbb{R}^{Y \times Y}} \; \max_{i \neq j} \; [A]_{ij} \quad \text{s.t.} \quad CA \geq 0, \quad CA\mathbf{1} = \mathbf{1}, \qquad (20)$$

which tries to minimize the off-diagonal elements of $A$. The resulting rescaling matrix $A$ guarantees that $\hat{B} = CA$ is a proper stochastic matrix (non-negative and has row-sum equal to one), as well as that the discrepancy between the directions of the columns of $C$ and $\hat{B}$ is minimized.

The second part of the problem, reconstructing the sequence of observations, follows naturally from the clustering algorithm; an estimate of the sequence is obtained by checking which cluster the solution $\bar{c}_k$ of equation (17) belongs to for each time instant.
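A self-contained sketch of the noisy pipeline follows (our illustration, not the authors' implementation: it uses a plain spherical k-means with deterministic farthest-point initialization in place of [19], and a simple diagonal rescaling instead of the LP (20), which is an adequate simplification when the noise is small):

```python
import numpy as np

def inverse_filter_noisy(noisy_posteriors, P, Y):
    """Noisy Problem 4 sketch: nullspace directions of the perturbed
    coefficient matrices (eq. (17)) are clustered on the unit sphere; the
    centroid directions are rescaled so that the rows of B-hat sum to one."""
    X = len(noisy_posteriors[0])
    dirs = []
    for k in range(1, len(noisy_posteriors)):
        v = P.T @ noisy_posteriors[k - 1]
        M = np.outer(noisy_posteriors[k], v) - np.diag(v)
        c = np.linalg.svd(M)[2][-1]
        c = c if c.sum() > 0 else -c
        dirs.append(c / np.linalg.norm(c))
    D = np.array(dirs)
    # Deterministic farthest-point initialization, then spherical k-means.
    idx = [0]
    for _ in range(Y - 1):
        sims = np.max(D @ D[idx].T, axis=1)
        idx.append(int(np.argmin(sims)))
    centroids = D[idx].copy()
    for _ in range(20):
        labels = np.argmax(D @ centroids.T, axis=1)  # nearest centroid by cosine
        for j in range(Y):
            members = D[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centroids[j] = m / np.linalg.norm(m)
    C = centroids.T
    alpha = np.linalg.pinv(C) @ np.ones(X)
    return C * alpha, labels.tolist()

# Simulate noise-corrupted posteriors (Assumption 4) from a known model.
P = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
B_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.2, 0.2, 0.6]])
obs = [0, 1, 2, 0, 2, 1, 0, 1, 2, 0]
posteriors = [np.array([0.3, 0.4, 0.3])]
for y in obs:
    unnorm = B_true[:, y] * (P.T @ posteriors[-1])
    posteriors.append(unnorm / unnorm.sum())
rng = np.random.default_rng(1)
noisy = []
for pi in posteriors:
    p = pi + 1e-6 * rng.standard_normal(3)
    noisy.append(p / p.sum())

B_hat, labels = inverse_filter_noisy(noisy, P, Y=3)
```

As in the noise-free case, the columns of the estimate are recovered only up to a permutation of the symbol labels.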
5 Experimental results for sleep segmentation
In this section, we illustrate the inverse filtering problem on real-world data.
Background Roughly one third of a person's life is spent sleeping. Sleep disorders are becoming
more prevalent and, as public awareness has increased, the usage of sleep trackers is becoming
wide-spread. The example below illustrates how the inverse filtering formulation and associated
algorithms can be used as a step in real-time diagnosis of failure of sleep-tracking medical equipment.
During the course of sleep, a human transitions through five different sleep stages [20]: wake, S1,
S2, slow wave sleep (SWS) and rapid eye movement (REM). An important part of sleep analysis is
obtaining a patient's evolution over these sleep stages. Manual sequencing from all-night polysomnographic (PSG) recordings (including, e.g., electroencephalogram (EEG) readings) can be performed
according to the Rechtschaffen and Kales (R&K) rules by well-trained experts [8, 20]. However,
this is costly and laborious, so several works, e.g., [8, 20, 21], propose automatic sequencing based
on HMMs. These systems usually output a posterior distribution over the sleep stages, or provide a
Viterbi path.
A malfunction of such an automatic system could have problematic consequences since medical
decisions would be based on faulty information. The inverse filtering problem arises naturally for
such reasons of fault-detection. Joint knowledge of the transition matrix can be assumed, since it is
possible to obtain, from public sources, manually labeled data from which an estimate of P can be
computed.
Setup A version of the automatic sleep-staging system in [8, 20] was implemented. The mean
frequency over the 0-30 Hz band of the EEG (over C3-A2 or C4-A1, according to the international
10-20 system) was used as observations. These readings were encoded to five symbols using a vector-quantization-based codebook. The model was trained on data from nine patients in the PhysioNet CAP Sleep Database [22, 23]. The model was then evaluated on another patient (see Fig. 1) over one full-night of sleep. The manually labeled stages according to the R&K rules are dashed-marked in
the figure. To summarize the resulting posterior distributions over the sleep stages, we plot the mean
state estimate when equidistant numbers have been assigned to each state.
For the inverse filtering, the full posterior vectors were elementwise corrupted by Gaussian noise of
standard deviation $\sigma$, and projected back to the simplex (to ensure a valid posterior probability vector), simulating a noisy reading from the automatic system. A total of one hundred noise realizations
were simulated. The noise can be a manifestation of measurement or quantization noise in the sensor
system, or noise related to model uncertainties (in this case, an error in the transition probability
matrix P ).
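The corruption-and-projection step can be sketched as follows. The paper does not specify the projection operator, so we assume a standard Euclidean projection onto the probability simplex (sort-based algorithm); all names and values here are illustrative.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = css[rho] / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

rng = np.random.default_rng(0)
posterior = np.array([0.7, 0.2, 0.05, 0.03, 0.02])   # one filter output
sigma = 1e-2                                          # noise std. dev.
noisy = posterior + sigma * rng.standard_normal(posterior.shape)
noisy = project_to_simplex(noisy)                     # valid posterior again
print(noisy)
```

Projecting a vector that is already on the simplex leaves it unchanged, so the operation only repairs invalid (negative or non-normalized) entries.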
Results. After permuting the labels of the observations, the error in the reconstructed observation likelihood matrix, as well as the fraction of correctly reconstructed observations, were computed. This is illustrated in Fig. 2. For the 1030 quantized EEG samples from the patient, the entire procedure takes less than one second on a 2.0 GHz Intel Core 2 Duo processor system.
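The evaluation protocol described above (matching columns up to permutation before measuring the error in B) can be sketched as follows; a brute-force search over permutations is feasible here because the observation alphabet is small. This is our own illustrative code, not the authors'.

```python
import itertools
import numpy as np

def best_column_permutation(B_hat, B):
    """Return the column permutation of the true matrix B that minimizes
    the Frobenius error ||B_hat - B P||_F (brute force, small alphabets)."""
    best_err, best_perm = np.inf, None
    for perm in itertools.permutations(range(B.shape[1])):
        err = np.linalg.norm(B_hat - B[:, list(perm)])
        if err < best_err:
            best_err, best_perm = err, perm
    return best_perm, best_err

B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
B_hat = B[:, [2, 0, 1]] + 1e-3   # recovered up to a column shuffle + noise
perm, err = best_column_permutation(B_hat, B)
print(perm, err)
```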
Figure 1: One night of sleep in which polysomnographic (PSG) observation data has been manually processed by an expert sleep analyst according to the R&K rules to obtain the sleep stages (dashed). The posterior distribution over the sleep stages, resulting from an automatic sleep-staging system, has been summarized to a mean state estimate (solid). [Plot axes: sleep stage (WAKE, S1, S2, SWS, REM) versus hours since bedtime (0-8).]
Figure 2: Result of inverse filtering for various noise standard deviations σ. The vector of posterior probabilities is perturbed elementwise with Gaussian noise. Right: error in the recovered observation likelihood matrix, $\min_P \|\hat{B} - BP\|_F$, after permuting the columns to find the best match to the true matrix. Left: fraction of correctly reconstructed observations. As the signal-to-noise ratio increases, the inverse filtering algorithm successfully reconstructs the sequence of observations and estimates the observation likelihoods.
From Fig. 2, we can see that as the variance of the noise decreases, the left hand side of equation (17) converges to that of equation (5) and the true quantities are recovered. At the other extreme, as the signal-to-noise ratio becomes small, the estimated sequence of observations tends to that of a uniform distribution at 1/Y = 0.2. This is because the estimated clusters become heavily intertwined. The discontinuous nature of the solution of the clustering algorithm is apparent from the plateau-like behaviour in the middle of the scale: a few observations linger on the edge of being assigned to the correct clusters.
In conclusion, the results show that it is possible to estimate the observation sequence processed by the automatic sleep-staging system, as well as its sensor's specifications. This is an important step in performing fault detection for such a device: for example, using several nights of data, it is possible to perform change detection on the observation likelihoods to detect if the sleep monitoring device has failed.
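A minimal sketch of the change-detection idea suggested above: compare each night's reconstructed observation likelihood matrix against a reference and flag large deviations. The threshold, matrices, and function name are hypothetical.

```python
import numpy as np

def detect_sensor_change(B_nights, B_ref, tol=0.1):
    """Flag nights whose reconstructed observation likelihoods deviate
    from a reference matrix by more than tol in Frobenius norm."""
    return [bool(np.linalg.norm(B - B_ref) > tol) for B in B_nights]

B_ref = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
healthy = B_ref + 0.01
faulty = np.array([[0.5, 0.5],
                   [0.5, 0.5]])   # sensor no longer informative
print(detect_sensor_change([healthy, faulty], B_ref))  # [False, True]
```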
6 Conclusions
In this paper, we have considered several inverse filtering problems for HMMs. Given posteriors
from an HMM filter (or more generally, noisily observed posteriors), the aim was to reconstruct the
observation likelihoods and also the sample path of observations. It was shown that a computationally
expensive solution based on combinatorial optimization can be avoided by exploiting the algebraic
structure of the HMM filter. We provided solutions to the inverse filtering problems, as well as
theorems guaranteeing unique identifiability. The more general case of noise-corrupted posteriors
was also considered. A solution based on clustering was proposed and evaluated on real-world data
based on a system for automatic sleep-staging from EEG readings.
In the future, it would be interesting to consider other variations and generalizations of inverse
filtering. For example, the case where the system dynamics are unknown and need to be estimated, or
when only actions based on the filtered distribution can be observed.
Acknowledgments
This work was partially supported by the Swedish Research Council under contract 2016-06079,
the U.S. Army Research Office under grant 12346080 and the National Science Foundation under
grant 1714180. The authors would like to thank Alexandre Proutiere for helpful comments during the
preparation of this work.
References
[1] V. Krishnamurthy, Partially Observed Markov Decision Processes. Cambridge, UK: Cambridge
University Press, 2016.
[2] L. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, Feb. 1989.
[3] R. J. Elliott, J. B. Moore, and L. Aggoun, Hidden Markov Models: Estimation and Control. New York, NY: Springer, 1995.
[4] O. Cappé, E. Moulines, and T. Rydén, Inference in Hidden Markov Models. New York, NY: Springer, 2005.
[5] F. Gustafsson, Adaptive Filtering and Change Detection. New York: Wiley, 2000.
[6] J. Chen and R. J. Patton, Robust Model-Based Fault Diagnosis for Dynamic Systems. Boston, MA: Springer, 1999.
[7] A. Caplin and M. Dean, "Revealed preference, rational inattention, and costly information acquisition," The American Economic Review, vol. 105, no. 7, pp. 2183-2203, 2015.
[8] A. Flexer, G. Dorffner, P. Sykacek, and I. Rezek, "An automatic, continuous and probabilistic sleep stager based on a hidden Markov model," Applied Artificial Intelligence, vol. 16, pp. 199-207, Mar. 2002.
[9] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA: MIT Press, 2009.
[10] B. Anderson and J. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[11] R. E. Kalman, "When is a linear control system optimal," Journal of Basic Engineering, vol. 86, no. 1, pp. 51-60, 1964.
[12] H. R. Varian, Microeconomic Analysis. New York: Norton, 3rd ed., 1992.
[13] D. Hadfield-Menell, S. J. Russell, P. Abbeel, and A. Dragan, "Cooperative inverse reinforcement learning," in Advances in Neural Information Processing Systems, 2016.
[14] J. Choi and K.-E. Kim, "Nonparametric Bayesian inverse reinforcement learning for multiple reward functions," in Advances in Neural Information Processing Systems, 2012.
[15] E. Klein, M. Geist, B. Piot, and O. Pietquin, "Inverse reinforcement learning through structured classification," in Advances in Neural Information Processing Systems, 2012.
[16] S. Levine, Z. Popovic, and V. Koltun, "Nonlinear inverse reinforcement learning with Gaussian processes," in Advances in Neural Information Processing Systems, 2011.
[17] A. Ng, "Algorithms for inverse reinforcement learning," in Proceedings of the 17th International Conference on Machine Learning (ICML'00), pp. 663-670, 2000.
[18] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554-1563, 1966.
[19] C. Buchta, M. Kober, I. Feinerer, and K. Hornik, "Spherical k-means clustering," Journal of Statistical Software, vol. 50, no. 10, pp. 1-22, 2012.
[20] S.-T. Pan, C.-E. Kuo, J.-H. Zeng, and S.-F. Liang, "A transition-constrained discrete hidden Markov model for automatic sleep staging," BioMedical Engineering OnLine, vol. 11, no. 1, p. 52, 2012.
[21] Y. Chen, X. Zhu, and W. Chen, "Automatic sleep staging based on ECG signals using hidden Markov models," in Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 530-533, 2015.
[22] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet," Circulation, vol. 101, no. 23, pp. e215-e220, 2000.
[23] M. G. Terzano, L. Parrino, A. Sherieri, R. Chervin, S. Chokroverty, C. Guilleminault, M. Hirshkowitz, M. Mahowald, H. Moldofsky, A. Rosa, and others, "Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep," Sleep Medicine, vol. 2, no. 6, pp. 537-553, 2001.
Non-parametric Structured Output Networks
Andreas M. Lehrmann
Disney Research
Pittsburgh, PA 15213
[email protected]
Leonid Sigal
Disney Research
Pittsburgh, PA 15213
[email protected]
Abstract
Deep neural networks (DNNs) and probabilistic graphical models (PGMs) are
the two main tools for statistical modeling. While DNNs provide the ability to
model rich and complex relationships between input and output variables, PGMs
provide the ability to encode dependencies among the output variables themselves.
End-to-end training methods for models with structured graphical dependencies
on top of neural predictions have recently emerged as a principled way of combining these two paradigms. While these models have proven to be powerful in
discriminative settings with discrete outputs, extensions to structured continuous
spaces, as well as performing efficient inference in these spaces, are lacking. We
propose non-parametric structured output networks (NSON), a modular approach
that cleanly separates a non-parametric, structured posterior representation from
a discriminative inference scheme but allows joint end-to-end training of both
components. Our experiments evaluate the ability of NSONs to capture structured
posterior densities (modeling) and to compute complex statistics of those densities
(inference). We compare our model to output spaces of varying expressiveness and
popular variational and sampling-based inference algorithms.
1 Introduction
In recent years, deep neural networks have led to tremendous progress in domains such as image
classification [1, 2] and segmentation [3], object detection [4, 5] and natural language processing [6, 7].
These achievements can be attributed to their hierarchical feature representation, the development of
effective regularization techniques [8, 9] and the availability of large amounts of training data [10, 11].
While a lot of effort has been spent on identifying optimal network structures and trainings schemes
to enable these advances, the expressiveness of the output space has not evolved at the same rate.
Indeed, it is striking that most neural architectures model categorical posterior distributions that
do not incorporate any structural assumptions about the underlying task; they are discrete and
global (Figure 1a). However, many tasks are naturally formulated as structured problems or would
benefit from continuous representations due to their high cardinality. In those cases, it is desirable to
learn an expressive posterior density reflecting the dependencies in the underlying task.
As a simple example, consider a stripe of $n$ noisy pixels in a natural image. If we want to learn a neural network that encodes the posterior distribution $p_\theta(y \mid x)$ of the clean output $y$ given the noisy input $x$, we must ensure that $p_\theta$ is expressive enough to represent potentially complex noise distributions and structured enough to avoid modeling spurious dependencies between the variables.
Probabilistic graphical models [12], such as Bayesian networks or Markov random fields, have a
long history in machine learning and provide principled frameworks for such structured data. It is
therefore natural to use their factored representations as a means of enforcing structure in a deep
neural network. While initial results along this line of research have been promising [13, 14], they
focus exclusively on the discrete case and/or mean-field inference.
Instead, we propose a deep neural network that encodes a non-parametric posterior density that
factorizes over a graph (Figure 1b). We perform recurrent inference inspired by message-passing in
this structured output space and show how to learn all components end-to-end.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Overview: Non-parametric Structured Output Networks. (a) Traditional neural networks use a series of convolution and inner product modules to predict a discrete posterior without graphical structure (e.g., VGG [15]); the output space is discrete, global, and parametric. (b) Non-parametric structured output networks use a deep neural network to predict a non-parametric graphical model $p_{\theta(x)}(y)$ (NGM) that factorizes over a graph; the output space is continuous, structured, and non-parametric. A recurrent inference network (RIN) computes statistics $t[p_{\theta(x)}(y)]$ from this structured output density. At training time, we propagate stochastic gradients from both NGM and RIN back to the inputs. [grey: optional]
1.1 Related Work
Our framework builds upon elements from neural networks, structured models, non-parametric
statistics, and approximate inference. We will first present prior work on structured neural networks
and then discuss the relevant literature on approximate non-parametric inference.
1.1.1 Structured Neural Networks
Structured neural networks combine the expressive representations of deep neural networks with
the structured dependencies of probabilistic graphical models. Early attempts to combine both
frameworks used high-level features from neural networks (e.g., fc7) to obtain fixed unary potentials
for a graphical model [18]. More recently, statistical models and their associated inference tasks have
been reinterpreted as (layers in) neural networks, which has allowed true end-to-end training and
blurred the line between both paradigms: [13, 14] express the classic mean-field update equations
as a series of layers in a recurrent neural network (RNN). Structure inference machines [17] use an
RNN to simulate message-passing in a graphical model with soft-edges for activity recognition. A
full backward-pass through loopy-BP was proposed in [19]. The structural-RNN [16] models all
node and edge potentials in a spatio-temporal factor graph as RNNs that are shared among groups of
nodes/edges with similar semantics. Table 1 summarizes some important properties of these methods.
Notably, all output spaces except for the non-probabilistic work [16] are discrete.
1.1.2 Inference in Structured Neural Networks

In contrast to a discrete and global posterior, which allows inference of common statistics (e.g., its mode) in linear time, expressive output spaces, as in Figure 1b, require message-passing schemes [20] to propagate and aggregate information.

Table 1: Output Space Properties Across Models. [MF: mean-field; MP: message passing; D: direct; -: not applicable] The table compares VGG [15], MRF-RNN [14], the Structural RNN [16], Structure Inference Machines [17], Deep Structured Models [13], and NSON (ours) along six properties: continuous, non-parametric, structured, end-to-end training, probabilistic inference, and posterior sampling. NSON is the only model satisfying all six properties, with message-passing (MP) probabilistic inference.

Local potentials outside of the exponential family, such
as non-parametric distributions, lead to intractable message updates, so one needs to resort to
approximate inference methods, which include the following two popular groups:
Variational Inference. Variational methods, such as mean-field and its structured variants [12],
approximate an intractable target distribution with a tractable variational distribution by maximizing
the evidence lower bound (ELBO). Stochastic extensions allow the use of this technique even on
large datasets [21]. If the model is not in the conjugate-exponential family [22], as is the case for
non-parametric graphical models, black box methods must be used to approximate an intractable
expectation in the ELBO [23]. For fully-connected graphs with Gaussian pairwise potentials, the
dense-CRF model [24] proposes an efficient way to perform the variational updates using the
permutohedral lattice [25]. For general edge potentials, [26] proposes a density estimation technique
that allows the use of non-parametric edge potentials.
Sampling-based Inference. This group of methods employs (sets of) samples to approximate
intractable operations when computing message updates. Early works use iterative refinements of approximate clique potentials in junction trees [27]. Non-parametric belief propagation (NBP) [28, 29]
represents each message as a kernel density estimate and uses Gibbs sampling for propagation. Particle belief propagation [30] represents each message as a set of samples drawn from an approximation
to the receiving node's marginal, effectively circumventing the kernel smoothing required in NBP.
Diverse particle selection [31] keeps a diverse set of hypothesized solutions at each node that pass
through an iterative augmentation-update-selection scheme that preserves message values. Finally, a
mean shift density approximation has been used as an alternative to sampling in [32].
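To make the sampling-based family concrete, the following toy sketch (ours, not from any cited method) estimates statistics of a two-node chain with mixture potentials by ancestral sampling, the basic operation that particle-based schemes build on.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_y1(n):
    """Root potential: a two-component Gaussian mixture."""
    comp = rng.random(n) < 0.5
    return np.where(comp, rng.normal(-1.0, 0.5, n), rng.normal(1.0, 0.5, n))

# Ancestral sampling through a two-node chain Y1 -> Y2; the edge potential
# is a single Gaussian kernel centered at the sampled parent value.
y1 = sample_y1(50_000)
y2 = rng.normal(y1, 0.7)
print(y2.mean(), y2.var())   # sample estimates of E[Y2] and Var[Y2]
```

For this construction the true values are E[Y2] = 0 and Var[Y2] = 1.25 + 0.49 = 1.74, which the sample estimates approach as the number of particles grows.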
1.2 Contributions
Our NSON model is inspired by the structured neural architectures (Section 1.1.1). However, in
contrast to those approaches, we model structured dependencies on top of expressive non-parametric
densities. In doing so, we build an inference network that computes statistics of these non-parametric
output densities, thereby replacing the need for more conventional inference (Section 1.1.2).
In particular, we make the following contributions: (1) We propose non-parametric structured output
networks, a novel approach combining the predictive power of deep neural networks with the
structured representation and multimodal flexibility of non-parametric graphical models; (2) We show
how to train the resulting output density together with recurrent inference modules in an end-to-end
way; (3) We compare non-parametric structured output networks to a variety of alternative output
densities and demonstrate superior performance of the inference module in comparison to variational
and sampling-based approaches.
2 Non-parametric Structured Output Networks
Traditional neural networks (Figure 1a; [15]) encode a discrete posterior distribution by predicting an input-conditioned parameter vector $\tilde{\theta}(x)$ of a categorical distribution, i.e., $Y \mid X = x \sim p_{\tilde{\theta}(x)}$. Non-parametric structured output networks (Figure 1b) do the same, except that $\tilde{\theta}(x)$ parameterizes a continuous graphical model with non-parametric potentials. It consists of three components: a deep neural network (DNN), a non-parametric graphical model (NGM), and a recurrent inference network (RIN). While the DNN+NGM encode a structured posterior (the model), the RIN computes complex statistics in this output space (the inference).

At a high level, the DNN, conditioned on an input $x$, predicts the parameters $\tilde{\theta} = \{\tilde{\theta}_{ij}\}$ (e.g., kernel weights, centers and bandwidths) of local non-parametric distributions over a node and its parents according to the NGM's graph structure (Figure 1b). Using a function $\varphi_\kappa$, these local joint distributions are then transformed to conditional distributions parameterized by $\theta = \{\theta_{i|j}\}$ (e.g., through a closed-form conditioning operation) and assembled into a structured joint density $p_{\theta(x)}(y)$ with conditional (in)dependencies prescribed by the graphical model. Parameters of the DNN are optimized with respect to a maximum-likelihood loss $\mathcal{L}_M$. Simultaneously, a recurrent inference network (detailed in Figure 2), which takes $\tilde{\theta}$ as input, is trained to compute statistics of the structured distribution (e.g., marginals) using a separate inference loss $\mathcal{L}_I$. The following two paragraphs discuss these elements in more detail.
Model (DNN+NGM). The DNN is parameterized by a weight vector $\lambda_M$ and encodes a function from a generic input space $\mathcal{X}$ to a Cartesian parameter space $\tilde{\Theta}^n$,
$$x \;\overset{\lambda_M}{\longmapsto}\; \tilde{\theta}(x) = \big(\tilde{\theta}_{i,\mathrm{pa}(i)}(x)\big)_{i=1}^{n}, \qquad (1)$$
each of whose components models a joint kernel density $(Y_i, \mathrm{pa}(Y_i)) \sim p_{\tilde{\theta}_{i,\mathrm{pa}(i)}(x)}$ and thus, implicitly, the local conditional distribution $Y_i \mid \mathrm{pa}(Y_i) \sim p_{\theta_{i|\mathrm{pa}(i)}(x)}$ of a non-parametric graphical model
$$p_{\theta(x)}(y) = \prod_{i=1}^{n} p_{\theta_{i|\mathrm{pa}(i)}(x)}\big(y_i \mid \mathrm{pa}(y_i)\big) \qquad (2)$$
over a structured output space $\mathcal{Y}$ with directed, acyclic graph $G = (Y, E)$. Here, $\mathrm{pa}(\cdot)$ denotes the set of parent nodes w.r.t. $G$, which we fix in advance based on prior knowledge or structure learning [12]. The conditional density of a node $Y = Y_i$ with parents $Y' = \mathrm{pa}(Y_i)$ and parameters $\theta = \theta_{i|\mathrm{pa}(i)}(x)$ is thus given by¹
$$p_\theta(y \mid y') = \sum_{j=1}^{N} w^{(j)} \cdot |B^{(j)}|^{-1}\, \kappa\big(B^{(-j)}(y - \mu^{(j)})\big), \qquad (3)$$
where the differentiable kernel $\kappa(u) = \prod_i q(u_i)$ is defined in terms of a symmetric, zero-mean density $q$ with positive variance, and the conditional parameters $\theta = (w, \mu, B) \in \Theta$ correspond to the full set of kernel weights, kernel centers, and kernel bandwidth matrices, respectively.² The functional relationship between $\theta$ and its joint counterpart $\tilde{\theta} = \tilde{\theta}_{i,\mathrm{pa}(i)}(x)$ is mediated through a kernel-dependent conditioning operation $\varphi_\kappa(\tilde{\theta}) = \varphi_\kappa(\tilde{w}, \tilde{\mu}, \tilde{B}) = \theta$ and can be computed in closed form for a wide range of kernels, including Gaussian, cosine, logistic and other kernels with sigmoid CDF. In particular, for block decompositions $\tilde{B}^{(j)} = \operatorname{diag}(\tilde{B}_y^{(j)}, \tilde{B}_{y'}^{(j)})$ and $\tilde{\mu}^{(j)} = (\tilde{\mu}_y^{(j)}, \tilde{\mu}_{y'}^{(j)})$, we obtain
$$\varphi_\kappa(\tilde{\theta}) = \theta = \begin{cases} w^{(j)} = \tilde{w}^{(j)} \cdot |\tilde{B}_{y'}^{(j)}|^{-1}\, \kappa\big(\tilde{B}_{y'}^{(-j)}(y' - \tilde{\mu}_{y'}^{(j)})\big) \,/\, \bar{w}, \\ \mu^{(j)} = \tilde{\mu}_y^{(j)}, \\ B^{(j)} = \tilde{B}_y^{(j)}, \end{cases} \quad 1 \le j \le N, \qquad (4)$$
where $\bar{w}$ normalizes the conditional weights to sum to one. See Appendix A.1 for a detailed derivation. We refer to the structured posterior density in Eq. (2) with the non-parametric local potentials in Eq. (3) as a non-parametric structured output network.

Given an output training set $D_Y = \{y^{(i)} \in \mathcal{Y}\}_{i=1}^{N'}$, traditional kernel density estimation [33] can be viewed as an extreme special case of this architecture in which the discriminative, trainable DNN is replaced with a generative, closed-form estimator and $n := 1$ (no structure), $N := N'$ (#kernels = #training points), $w^{(i)} := (N')^{-1}$ (uniform weights), $B^{(i)} := B^{(0)}$ (shared covariance) and $\mu^{(i)} := y^{(i)}$ (fixed centers). When learning $\lambda_M$ from data, we can easily enforce parts or all of those restrictions in our model (see Section 5), but Section 3 will provide all necessary derivations for the more general case shown above.
Inference (RIN). In contrast to traditional classification networks with discrete label posterior, non-parametric structured output networks encode a complex density with rich statistics. We employ a recurrent inference network with parameters $\lambda_I$ to compute such statistics $t$ from the predicted parameters $\tilde{\theta}(x) \in \tilde{\Theta}^n$,
$$\tilde{\theta}(x) \;\overset{\lambda_I}{\longmapsto}\; t[p_{\theta(x)}]. \qquad (5)$$
Similar to conditional graphical models, the underlying assumption is that the input-conditioned density $p_{\theta(x)}$ contains all information about the semantic entities of interest and that we can infer whichever statistic we are interested in from it. A popular example of a statistic is a summary statistic,
$$t[p_{\theta(x)}](y_i) = \operatorname{op}_{y \setminus y_i}\, p_{\theta(x)}(y)\, \mathrm{d}(y \setminus y_i), \qquad (6)$$
which is known as sum-product BP ($\operatorname{op} = \int$; computing marginals) and max-product BP ($\operatorname{op} = \max$; computing max-marginals). Note, however, that we can attach recurrent inference networks corresponding to arbitrary tasks to this meta representation. Section 4 discusses the necessary details.

¹We write $B^{(-j)} := B^{(j)^{-1}}$ and $B^{-\top} := B^{-1^{\top}}$ to avoid double superscripts.
²Note that $\theta$ represents the parameters of a specific node; different nodes may have different parameters.
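As a brute-force reference for the sum-product statistic in Eq. (6) (op = integral), the following toy example computes a node marginal on a two-node chain by numerical integration on a grid; in NSON, this is the kind of computation the RIN learns to approximate. The potentials are illustrative, not from the paper.

```python
import numpy as np

# Sum-product statistic (Eq. (6), op = integral) on a toy chain Y1 -> Y2,
# computed by brute-force numerical integration over a grid.
grid = np.linspace(-6.0, 6.0, 601)
dy = grid[1] - grid[0]

def normal(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p1 = 0.5 * normal(grid, -1.0, 0.5) + 0.5 * normal(grid, 1.0, 0.5)  # p(y1)
p21 = normal(grid[:, None], grid[None, :], 0.7)   # p(y2 | y1), rows = y2
marg_y2 = (p21 * p1[None, :]).sum(axis=1) * dy    # t[p](y2) on the grid
print(marg_y2.sum() * dy)   # total mass of the marginal, approximately 1
```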
3 Learning Structured Densities using Non-Parametric Back-Propagation

The previous section introduced the model and inference components of a non-parametric structured output network. We will now describe how to learn the model (DNN+NGM) from a supervised training set $(x^{(i)}, y^{(i)}) \sim p_D$.
3.1
Likelihood Loss
? (x; M )) to explicitly refer to the weights M of the deep neural network
We write ? (x; M ) = ?? (e
predicting the non-parametric graphical model (Eq. (1)). Since the parameters of p? (x) are deterministic predictions from the input x, the only free and learnable parameters are the components of M .
We train the DNN via empirical risk minimization with a negative log-likelihood loss LM ,
?
M
= argmin E(x,y)?bpD [LM (?? (x;
M ), y)]
M
= argmax E(x,y)?bpD [log p? (x;
M)
(7)
(y)],
M
where pbD refers to the empirical distribution and the expectation in Eq. (7) is taken over the factorization in Eq. (2) and the local distributions in Eq. (3). Note the similarities and differences
between a non-parametric structured output network and a non-parametric graphical model with
unary potentials from a neural network: Both model classes describe a structured posterior. However,
while the unaries in the latter perform a reweighting of the potentials, a non-parametric structured
output network predicts those potentials directly and allows joint optimization of its DNN and NGM
components by back-propagating the structured loss first through the nodes of the graphical model
and then through the layers of the neural network all the way back to the input.
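The training objective in Eq. (7) is ordinary empirical risk minimization with a negative log-likelihood loss. As a minimal illustration (not the paper's implementation: the toy "DNN" here is a single linear layer predicting a categorical posterior, and all names are ours), gradient descent on the NLL looks like this:

```python
import numpy as np

# Minimal sketch of Eq. (7): a toy linear "DNN" predicts the logits of a
# categorical posterior, trained by gradient descent on the NLL.
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy data: x in R^3, label y is the argmax coordinate of x.
X = rng.normal(size=(256, 3))
y = X.argmax(axis=1)

W = np.zeros((3, 3))                     # learnable weights lambda_M
for _ in range(200):                     # plain full-batch gradient descent
    P = softmax(X @ W)                   # predicted parameters theta(x; lambda_M)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0       # d NLL / d logits
    W -= 0.1 * (X.T @ G) / len(y)

nll = -np.mean(np.log(P[np.arange(len(y)), y]))
print(f"final NLL: {nll:.3f}")           # well below log(3) ~ 1.099
```

The argmin-of-NLL and argmax-of-log-likelihood forms in Eq. (7) are of course the same update; the loop above implements the argmin form.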
3.2
Topological Non-parametric Gradients
We optimize Eq. (7) via stochastic gradient descent of the loss L_M w.r.t. the deep neural network
weights \lambda_M using Adam [34]. Importantly, the gradients \nabla_{\lambda_M} L_M(\tilde\theta(x; \lambda_M), y) decompose into a
factor from the deep neural network and a factor from the non-parametric graphical model,

    \nabla_{\lambda_M} L_M(\tilde\theta(x; \lambda_M), y) = \frac{\partial \log p_{\theta(x; \lambda_M)}(y)}{\partial \tilde\theta(x; \lambda_M)} \cdot \frac{\partial \tilde\theta(x; \lambda_M)}{\partial \lambda_M},    (8)
where the partial derivatives of the second factor can be obtained via standard back-propagation and
the first factor decomposes according to the graphical model's graph structure G,

    \frac{\partial \log p_{\theta(x; \lambda_M)}(y)}{\partial \tilde\theta(x; \lambda_M)} = \sum_{i=1}^{n} \frac{\partial \log p_{\theta_{i|pa(i)}(x; \lambda_M)}(y_i \mid pa(y_i))}{\partial \tilde\theta(x; \lambda_M)}.    (9)
The gradient of a local model w.r.t. the joint parameters \tilde\theta(x; \lambda_M) is given by two factors accounting
for the gradient w.r.t. the conditional parameters and the Jacobian of the conditioning operation,

    \frac{\partial \log p_{\theta_{i|pa(i)}(x; \lambda_M)}(y_i \mid pa(y_i))}{\partial \tilde\theta(x; \lambda_M)} = \frac{\partial \log p_{\theta_{i|pa(i)}(x; \lambda_M)}(y_i \mid pa(y_i))}{\partial \theta(x; \lambda_M)} \cdot \frac{\partial \theta(x; \lambda_M)}{\partial \tilde\theta(x; \lambda_M)}.    (10)
Note that the Jacobian takes a block-diagonal form, because \theta = \theta_{i|pa(i)}(x; \lambda_M) is independent
of \tilde\theta = \tilde\theta_{j,pa(j)}(x; \lambda_M) for i \neq j. Each block constitutes the backward-pass through a node Y_i's
conditioning operation,

    \frac{\partial \theta}{\partial \tilde\theta} = \frac{\partial (w, \mu, B)}{\partial (\tilde w, \tilde\mu, \tilde B)} =
    \begin{bmatrix}
      \dfrac{\partial w}{\partial \tilde w} & \dfrac{\partial w}{\partial \tilde\mu} & \dfrac{\partial w}{\partial \tilde B} \\
      0 & \dfrac{\partial \mu}{\partial \tilde\mu} & 0 \\
      0 & 0 & \dfrac{\partial B}{\partial \tilde B}
    \end{bmatrix},    (11)
where the individual entries are given by the derivatives of Eq. (4), e.g.,

    \frac{\partial w}{\partial \tilde w} = \big(-w w^\top + \mathrm{diag}(w)\big) \cdot \mathrm{diag}(\tilde w)^{-1}.    (12)
Similar equations exist for the derivatives of the weights w.r.t. the kernel locations and kernel
bandwidth matrices; the remaining cases are simple projections. In practice, we may be able to group
the potentials p_{\theta_{i|pa(i)}} according to their semantic meaning, in which case we can train one potential
per group instead of one potential per node by sharing the corresponding parameters in Eq. (9).
All topological operations can be implemented as separate layers in a deep neural network and the
corresponding gradients can be obtained using automatic differentiation.
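To make the role of such a normalization Jacobian concrete, here is a hedged sketch: assuming the kernel weights are produced from unconstrained scores by a softmax (one of several possible normalizations, and not necessarily the exact choice behind Eq. (12)), the corresponding Jacobian block is diag(w) - w w^T, which we verify against central finite differences.

```python
import numpy as np

# Softmax-normalization Jacobian check (illustrative normalization choice).
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([0.2, -1.0, 0.7, 0.1])
w = softmax(z)
J = np.diag(w) - np.outer(w, w)          # analytic Jacobian dw/dz

eps = 1e-6
J_num = np.zeros((4, 4))
for j in range(4):                        # central finite differences
    dz = np.zeros(4)
    dz[j] = eps
    J_num[:, j] = (softmax(z + dz) - softmax(z - dz)) / (2 * eps)

print(np.abs(J - J_num).max())            # tiny; the analytic block is correct
```

This is exactly the kind of block that automatic differentiation produces for free when the topological operations are implemented as layers.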
3.3 Distributional Non-parametric Gradients
We have shown how the gradient of the loss factorizes over the graph of the output space. Next, we
will provide the gradients of those local factors \log p_\theta(y \mid y') (Eq. (3)) w.r.t. the local parameters
\theta = \theta_{i|pa(i)}. To reduce notational clutter, we introduce the shorthand \hat y^{(k)} := B^{(-k)}(y - \mu^{(k)}) to
refer to the normalized input and provide only final results; detailed derivations for all gradients and
worked out examples for specific kernels can be found in Appendix A.2.
Kernel Weights.

    \nabla_w \log p_\theta(y \mid y') = \frac{\varphi}{w^\top \varphi}, \qquad \varphi := \Big( |B^{(-k)}|\, \phi(\hat y^{(k)}) \Big)_{k=1}^{N}.    (13)
Note that w is required to lie on the standard (N-1)-simplex \Delta^{(N-1)}. Different normalizations are
possible, including a softmax or a projection onto the simplex, i.e., \pi_{\Delta^{(N-1)}}(w^{(i)}) = \max(0, w^{(i)} + u),
where u is the unique translation such that the positive points sum to 1 [35].
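The simplex projection mentioned above can be computed exactly in O(N log N). A sketch following the sort-and-shift algorithm of [35] (variable names are ours):

```python
import numpy as np

# Euclidean projection onto the probability simplex: w = max(0, v + u) with u
# the unique shift making the positive entries sum to 1.
def project_simplex(v):
    u = np.sort(v)[::-1]                                  # sort descending
    css = np.cumsum(u)
    # largest j with u_j + (1 - sum_{i<=j} u_i)/j > 0
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    shift = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + shift, 0.0)

w = project_simplex(np.array([0.8, 1.2, -0.5]))
print(w, w.sum())   # [0.3, 0.7, 0.0], summing to 1
```

Unlike a softmax, this projection can produce exact zeros, i.e. it can switch kernels off entirely.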
Kernel Centers.

    \nabla_\mu \log p_\theta(y \mid y') = \frac{\omega}{w^\top \varphi}, \qquad \omega := \left( -\frac{B^{(-k)\top}}{|B^{(k)}|} \, \frac{\partial \phi(\hat y^{(k)})}{\partial \hat y^{(k)}} \right)_{k=1}^{N}.    (14)

The kernel centers do not underlie any spatial restrictions, but proper initialization is important.
Typically, we use the centers of a k-means clustering with k := N to initialize the kernel centers.
Kernel Bandwidth Matrices.

    \nabla_B \log p_\theta(y \mid y') = \frac{\beta}{w^\top \varphi}, \qquad \beta := \left( -\frac{B^{(-k)\top}}{|B^{(k)}|} \left( \phi(\hat y^{(k)}) + \frac{\partial \phi(\hat y^{(k)})}{\partial \hat y^{(k)}} \, \hat y^{(k)\top} \right) \right)_{k=1}^{N}.    (15)
While computation of the gradient w.r.t. B is a universal approach, specific kernels may allow
alternative gradients: In a Gaussian kernel, for instance, the Gramian of the bandwidth matrix acts as a
covariance matrix. We can thus optimize B^{(k)} B^{(k)\top} in the interior of the cone of positive-semidefinite
matrices by computing the gradients w.r.t. the Cholesky factor of the inverse covariance matrix.
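The weight gradient in Eq. (13) is easy to sanity-check numerically: for any mixture p(y) = sum_k w_k phi_k(y), the gradient of log p w.r.t. w is phi / (w^T phi). A toy 1-D check with Gaussian components (component locations and the evaluation point are illustrative):

```python
import numpy as np

# Numerical check of Eq. (13) for a 1-D mixture of unit-variance Gaussians.
mu = np.array([-1.0, 0.0, 2.0])

def phi(y):
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2 * np.pi)

w = np.array([0.2, 0.5, 0.3])
y = 0.4
grad = phi(y) / (w @ phi(y))             # analytic gradient of log p w.r.t. w

eps = 1e-6                               # central finite differences
num = np.array([(np.log((w + eps * e) @ phi(y)) - np.log((w - eps * e) @ phi(y)))
                / (2 * eps) for e in np.eye(3)])
print(np.abs(grad - num).max())          # tiny discrepancy
```

The same finite-difference pattern applies to the center and bandwidth gradients in Eqs. (14) and (15).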
4 Inferring Complex Statistics using Neural Belief Propagation
The previous sections introduced non-parametric structured output networks and showed how their
components, DNN and NGM, can be learned from data. Since the resulting posterior density p_{\theta(x)}(y)
(Eq. (2)) factorizes over a graph, we can, in theory, use local messages to propagate beliefs about statistics t[p_{\theta(x)}(y)] along its edges (BP; [20]). However, special care must be taken to handle intractable
operations caused by non-parametric local potentials and to allow an end-to-end integration.
For ease of exposition, we assume that we can represent the local conditional distributions as a set of
pairwise potentials \{\psi(y_i, y_j)\}, effectively converting our directed model to a normalized MRF. This
is not limiting, as we can always convert a factor graph representation of Eq. (2) into an equivalent
pairwise MRF [36]. In this setting, a BP message \mu_{i \to j}(y_j) from Y_i to Y_j takes the form

    \mu_{i \to j}(y_j) = \mathrm{op}_{y_i}\, \psi(y_i, y_j) \cdot \mu_{\bullet \to i}(y_i),    (16)
where the operator \mathrm{op}_y computes a summary statistic, such as integration or maximization, and
\mu_{\bullet \to i}(y_i) is the product of all incoming messages at Y_i. In case of a graphical model with non-parametric local distributions (Eq. (3)), this computation is not feasible for two reasons: (1) the pre-messages \mu_{\bullet \to i}(y_i) are products of sums, which means that the number of kernels grows exponentially
in the number of incoming messages; (2) the functional \mathrm{op}_y does not usually have an analytic form.
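For contrast with this intractable non-parametric case, the exact message of Eq. (16) for discrete variables with K states reduces to a matrix-vector product. A hedged sketch (potentials are random for illustration):

```python
import numpy as np

# Discrete instance of Eq. (16): mu_{i->j}(y_j) = op_{y_i} psi(y_i, y_j) * mu_in(y_i).
K = 3
rng = np.random.default_rng(1)
psi = rng.uniform(0.1, 1.0, size=(K, K))   # pairwise potential psi(y_i, y_j)
mu_in = rng.uniform(0.1, 1.0, size=K)      # product of incoming messages at Y_i

mu_out = psi.T @ mu_in                     # sum-product message (op = sum)
mu_out /= mu_out.sum()                     # normalize for numerical stability
print(mu_out)

mu_max = (psi * mu_in[:, None]).max(axis=0)  # max-product variant (op = max)
```

Neither operation has such a closed form once the factors are kernel mixtures, which is what motivates the neural approximation below.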
[Figure 2: diagram omitted. (a) Recurrent Inference Network. (b) Partially Unrolled Inference Network.]

Figure 2: Inferring Complex Statistics. Expressive output spaces require explicit inference procedures to
obtain posterior statistics. We use an inference network inspired by message-passing schemes in non-parametric
graphical models. (a) An RNN iteratively computes outgoing messages from incoming messages and the local
potential. (b) Unrolled inference network illustrating the computation of \hat\mu_{2 \to 4} in the graph shown in Figure 1b.
Inspired by recent results in imitation learning [37] and inference machines for classification [17, 38],
we take an alternate route and use an RNN to model the exchange of information between non-parametric nodes. In particular, we introduce an RNN node \hat\mu_{i \to j} for each message and connect them
in time according to Eq. (16), i.e., each node has incoming connections from its local potential \tilde\psi_{ij},
predicted by the DNN, and the nodes \{\hat\mu_{k \to i} : k \in ne_G(i) \setminus j\}, which correspond to the incoming
messages. The message computation itself is approximated through an FC+ReLU layer with weights
\lambda_I^{i \to j}. An approximate message \hat\mu_{i \to j} from Y_i to Y_j can thus be written as

    \hat\mu_{i \to j} = \mathrm{ReLU}\big(\mathrm{FC}_{\lambda_I^{i \to j}}(\mathrm{Stacking}(\tilde\psi_{ij}, \{\hat\mu_{k \to i} : k \in ne_G(i) \setminus j\}))\big),    (17)
where ne_G(\cdot) returns the neighbors of a node in G. The final beliefs \hat b_i = \hat\mu_{\bullet \to i} \cdot \hat\mu_{i \to j} can be
implemented analogously. Similar to (loopy) belief updates in traditional message-passing, we run
the RNN for a fixed number of iterations, at each step passing all neural messages. Furthermore, using
the techniques discussed in Section 3.3, we can ensure that the messages are valid non-parametric
distributions. All layers in this recurrent inference network are differentiable, so that we can propagate
a decomposable inference loss L_I = \sum_{i=1}^{n} L_I^{(i)} end-to-end back to the inputs. In practice, we find that
generic loss functions work well (see Section 5) and that canonic loss functions can often be obtained
directly from the statistic. The DNN weights \lambda_M are thus updated so as to both predict the right
posterior density and, together with the RIN weights \lambda_I, perform correct inference in it (Figure 2).
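A minimal numpy sketch of the neural message update in Eq. (17); the dimensionalities, random initialization, and single-edge setup are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# One neural message update for an edge 2->4 whose source node 2 receives
# messages from neighbors 1 and 3 (as in the unrolled example of Figure 2b).
rng = np.random.default_rng(0)
D = 8                                    # message / feature dimensionality

def fc_relu(W, b, x):                    # the FC+ReLU layer of Eq. (17)
    return np.maximum(0.0, W @ x + b)

psi_24 = rng.normal(size=D)              # potential features from the DNN
mu_12 = rng.normal(size=D)               # incoming message 1->2
mu_32 = rng.normal(size=D)               # incoming message 3->2

x = np.concatenate([psi_24, mu_12, mu_32])     # Stacking(...)
W = rng.normal(scale=0.1, size=(D, 3 * D))     # edge-specific weights lambda_I^{2->4}
b = np.zeros(D)
mu_24 = fc_relu(W, b, x)                       # approximate message 2->4
print(mu_24.shape, (mu_24 >= 0).all())
```

In the full network, one such update runs per directed edge per iteration, and the whole unrolled computation is differentiable end-to-end.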
5 Experiments
We validate non-parametric structured output networks at both the model (DNN+NGM) and the
inference level (RIN). Model validation consists of a comparison to baselines along two binary
axes, structuredness and non-parametricity. Inference validation compares our RIN unit to the
two predominant groups of approaches for inference in structured non-parametric densities, i.e.,
sampling-based and variational inference (Section 1.1.2).
5.1 Dataset

We test our approach on simple natural pixel statistics from Microsoft COCO [11] by sampling stripes
y = (y_i)_{i=1}^{n} \in [0, 255]^n of n = 10 pixels. Each pixel y_i is corrupted by a linear noise model, leading
to the observable output x_i = \epsilon\, y_i + \nu, with \nu \sim N(255\,\delta_{\epsilon,-1}, \sigma^2) and \epsilon \sim \mathrm{Ber}(\gamma), where the
target space of the Bernoulli trial is \{-1, +1\}. For our experiments, we set \sigma^2 = 100 and \gamma = 0.5.
Using this noise process, we generate training and test sets of sizes 100,000 and 1,000, respectively.
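The corruption process can be sketched as follows. The exact mean of the additive noise is partly garbled in the extraction, so we assume nu ~ N(255 * [eps == -1], sigma^2), i.e. sign-flipped pixels are also shifted back into range; treat this as a hypothesis, not the paper's exact setting:

```python
import numpy as np

# Sketch of the linear noise model for one pixel stripe (noise mean assumed).
rng = np.random.default_rng(0)
n, sigma2, gamma = 10, 100.0, 0.5

y = rng.uniform(0, 255, size=n)                    # clean pixel stripe
eps = np.where(rng.random(n) < gamma, 1.0, -1.0)   # eps ~ Ber(gamma) on {-1, +1}
nu = rng.normal(255.0 * (eps == -1), np.sqrt(sigma2))
x = eps * y + nu                                   # observed output
print(x.round(1))
```

Repeating this draw 100,000 and 1,000 times yields training and test sets of the stated sizes.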
5.2 Model Validation

The distributional gradients (Eq. (9)) comprise three types of parameters: Kernel locations, kernel
weights, and kernel bandwidth matrices. Default values for the latter two exist in the form of uniform
weights and plug-in bandwidth estimates [33], respectively, so we can turn optimization of those
(a) Model Validation

    Model                    Non-param.  Structured |  μ        μ+W       μ+B       μ+W+B
    ---------------------------------------------------------------------------------------
    Gaussian (no NN)             --          --     |  -1.13 (ML estimation)
    Kernel Density (no NN)        X          --     |  +6.66 (plug-in bandwidth estimation)
    Gaussian                     --          --     |  -0.90    +2.54     -0.88     +2.90
    GGM [39]                     --           X     |  -0.85    +1.55     -0.93     +1.53
    Mixture Density [40]          X          --     |  +9.22    +6.87     +11.18    +11.51
    NGM-100 (ours)                X           X     |  +15.26   +15.30    +16.00    +16.46

    (Rows below the first two include a preceding neural network.)

(b) Inference Validation

    Inference        Particles   Performance (marg. log-lik.)   Runtime (sec)
    --------------------------------------------------------------------------
    BB-VI [23]       400         +2.30                           660.65
                     800         +3.03                           1198.08
    P-BP [30]        50          +2.91                           0.49
                     100         +6.13                           2.11
                     200         +7.01                           6.43
                     400         +8.85                           21.13
    RIN-100 (ours)   --          +16.62                          0.04
Table 2: Quantitative Evaluation. (a) We report the expected log-likelihood of the test set under the predicted
posterior p_{\theta(x)}(y), showing the need for a structured and non-parametric approach to model rich posteriors.
(b) Inference using our RIN architecture is much faster than sampling-based or variational inference while
still leading to accurate marginals. [(N/G)GM: Non-parametric/Gaussian Graphical Model; RIN-x: Recurrent
Inference Network with x kernels; P-BP: Particle Belief Propagation; BB-VI: Black Box Variational Inference]
parameter groups on/off as desired.^3 In addition to those variations, non-parametric structured output
networks with a Gaussian kernel \phi = N(\cdot \mid \vec 0, I) comprise a number of popular baselines as special
cases, including neural networks predicting a Gaussian posterior (n = 1, N = 1), mixture density
networks (n = 1, N > 1; [40]), and Gaussian graphical models (n > 1, N = 1; [39]). For the sake
of completeness, we also report the performance of two basic posteriors without a preceding neural
network, namely a pure Gaussian and traditional kernel density estimation (KDE). We compare our
approach to those baselines in terms of the expected log-likelihood on the test set, which is a relative
measure for the KL-divergence to the true posterior.
Setup and Results. For the two basic models, we learn a joint density p(y, x) by maximum likelihood (Gaussian) and plug-in bandwidth estimation (KDE) and condition on the inputs x to infer
the labels y. We train the other 4 models for 40 epochs using a Gaussian kernel and a diagonal
bandwidth matrix for the non-parametric models. The DNN consists of 2 fully-connected layers with
256 units and the kernel weights are constrained to lie on a simplex with a softmax layer. The NGM
uses a chain-structured graph that connects each pixel to its immediate neighbors. Table 2a shows our
results. Ablation study: unsurprisingly, a purely Gaussian posterior cannot represent the true posterior
appropriately. A multimodal kernel density works better than a neural network with parametric posterior but cannot compete with the two non-parametric models attached to the neural network. Among
the methods with a neural network, optimization of kernel locations only (first column) generally
performs worst. However, the μ+W setting (second column) sometimes gets trapped in local minima, especially in the case of global mixture densities. If we decide to estimate a second parameter group,
weights (+W) should therefore be preferred over bandwidths (+B). Best results are obtained when
estimation is turned on for all three parameter groups. Baselines: the two non-parametric methods
consistently perform better than the parametric approaches, confirming our claim that non-parametric
densities are a powerful alternative to a parametric posterior. Furthermore, a comparison of the
last two rows shows a substantial improvement due to our factored representation, demonstrating
the importance of incorporating structure into high-dimensional, continuous estimation problems.
Learned Graph Structures. While the output variables in our experiments with one-dimensional
pixel stripes have a canonical dependence structure, the optimal connectivity of the NGM in tasks with
complex or no spatial semantics might be less obvious. As an example, we consider the case of two-dimensional image patches of size 10 × 10, which we extract and corrupt following the same protocol
and noise process as above. Instead of specifying the graph by hand, we use a mutual information criterion [41] to learn the optimal arborescence from the training labels. With estimation of all parameter
groups turned on (μ+W+B), we obtain results that are fully in line with those above: the expected
test log-likelihood of NSONs (+153.03) is again superior to a global mixture density (+76.34),
which in turn outperforms the two parametric approaches (GGM: +18.60; Gaussian: -19.03). A full
ablation study as well as a visualization of the inferred graph structure are shown in Appendix A.3.
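The structure-learning step can be sketched as scoring variable pairs by (an approximation of) mutual information and keeping a maximum spanning tree; the Gaussian MI proxy and Prim's algorithm below are illustrative choices, not necessarily those of [41]:

```python
import numpy as np

# Toy chain-structured labels: each variable adds noise to its predecessor.
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=(5000, 5)), axis=1)

C = np.corrcoef(Y.T)
MI = -0.5 * np.log(1 - np.clip(C ** 2, 0, 0.999))   # Gaussian MI proxy
np.fill_diagonal(MI, -np.inf)                       # forbid self-edges

in_tree, edges = {0}, []                            # Prim's algorithm on MI
while len(in_tree) < Y.shape[1]:
    i, j = max(((i, j) for i in in_tree for j in range(Y.shape[1])
                if j not in in_tree), key=lambda e: MI[e])
    in_tree.add(j)
    edges.append((i, j))
print(edges)    # recovers the chain 0-1-2-3-4 for this toy process
```

For tree-shaped structures and symmetric scores, the maximum spanning tree and the optimal arborescence coincide up to edge orientation.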
^3 Since plug-in estimators depend on the kernel locations, the gradient w.r.t. the kernel locations needs to take
these dependencies into account by backpropagating through the estimator and computing the total derivative.
5.3 Inference Validation
Section 4 motivated the use of a recurrent inference network (RIN) to infer rich statistics from
structured, non-parametric densities. We compare this choice to the other two groups of approaches,
i.e., variational and sampling-based inference (Section 1.1.2), in a marginal inference task. To this
end, we pick one popular member from each group as baselines for our RIN architecture.
Particle Belief Propagation (P-BP; [30]). Sum-product particle belief propagation approximates
a BP message (Eq. (16); op := \int) with a set of particles \{y_j^{(s)}\}_{s=1}^{S} per node Y_j by computing

    \hat\mu_{i \to j}(y_j^{(k)}) = \sum_{s=1}^{S} \frac{\psi(y_i^{(s)}, y_j^{(k)}) \cdot \hat\mu_{\bullet \to i}(y_i^{(s)})}{S\, \pi(y_i^{(s)})},    (18)

where the particles are sampled from a proposal distribution \pi that approximates the true marginal by
running MCMC on the beliefs \hat\mu_{\bullet \to i}(y_i) \cdot \hat\mu_{i \to j}(y_i). Similar versions exist for other operators [42].
Black Box Variational Inference (BB-VI; [23]). Black box variational inference maximizes the
ELBO L_{VI}[q_\lambda] with respect to a variational distribution q_\lambda by approximating its gradient through a
set of samples \{y^{(s)}\}_{s=1}^{S} \sim q_\lambda and performing stochastic gradient ascent,

    \nabla_\lambda L_{VI}[q_\lambda] = \nabla_\lambda\, E_{q_\lambda(y)}\!\left[\log \frac{p_\theta(y)}{q_\lambda(y)}\right] \approx S^{-1} \sum_{s=1}^{S} \nabla_\lambda \log q_\lambda(y^{(s)}) \, \log \frac{p_\theta(y^{(s)})}{q_\lambda(y^{(s)})}.    (19)

A statistic t (Eq. (5)) can then be estimated from the tractable variational distribution q_\lambda(y) instead
of the complex target distribution p_\theta(y). We use an isotropic Gaussian kernel \phi = N(\cdot \mid \vec 0, I)
together with the traditional factorization q_\lambda(y) = \prod_{i=1}^{n} q_{\lambda_i}(y_i), in which case variational sampling
is straightforward and the (now unconditional) gradients are given directly by Section 3.3.
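Eq. (19) is the standard score-function (REINFORCE-style) gradient estimator. A 1-D sanity check where the exact ELBO gradient is known in closed form (the toy target, variational family, and sample size are ours):

```python
import numpy as np

# Score-function ELBO gradient for target p(y) ~ N(mu*, 1) and q = N(lam, 1).
# The exact gradient of the ELBO w.r.t. lam is mu* - lam.
rng = np.random.default_rng(0)
mu_star, lam, S = 2.0, 0.0, 5000

def log_p(y):
    return -0.5 * (y - mu_star) ** 2     # up to an additive constant

def log_q(y):
    return -0.5 * (y - lam) ** 2         # up to an additive constant

def dlogq_dlam(y):                       # score function of q
    return y - lam

y = rng.normal(lam, 1.0, size=S)         # samples y^(s) ~ q
grad = np.mean(dlogq_dlam(y) * (log_p(y) - log_q(y)))
print(grad)                              # close to mu* - lam = 2
```

The high variance of this estimator (visible if S is reduced) is one reason BB-VI needs many samples and optimization steps in Table 2b.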
5.3.1 Setup and Results.
We train our RIN architecture with a negative log-likelihood loss attached to each belief node,
L_I^{(i)} = -\log p_{\theta_i}(y_i), and compare its performance to the results obtained from P-BP and BB-VI by
calculating the sum of marginal log-likelihoods. For the baselines, we consider different numbers
of particles, which affects both performance and speed. Additionally, for BB-VI we track the
performance across 1024 optimization steps and report the best results. Table 2b summarizes our
findings. Among the baselines, P-BP performs better than BB-VI once a required particle threshold is
exceeded. We believe this is a manifestation of the special requirements associated with inference in
non-parametric densities: while BB-VI needs to fit a high number of parameters, which poses the risk
of getting trapped in local minima, P-BP relies solely on the evaluation of potentials. However, both
methods are outperformed by a significant margin by our RIN, which we attribute to its end-to-end
training in accordance with DNN+NGM and its ability to propagate and update full distributions
instead of their mere value at a discrete set of points. In addition to pure performance, a key advantage
of RIN inference over more traditional inference methods is its speed: our RIN approach is over 50×
faster than P-BP with 100 particles and orders of magnitude faster than BB-VI. This is significant,
even when taking dependencies on hardware and implementation into account, and allows the use of
expressive non-parametric posteriors in time-critical applications.
6 Conclusion
We proposed non-parametric structured output networks, a highly expressive framework consisting of
a deep neural network predicting a non-parametric graphical model and a recurrent inference network
computing statistics in this structured output space. We showed how all three components can be
learned end-to-end by backpropagating non-parametric gradients through directed graphs and neural
messages. Our experiments showed that non-parametric structured output networks are necessary
for both effective learning of multimodal posteriors and efficient inference of complex statistics in
them. We believe that NSONs are suitable for a variety of other structured tasks and can be used
to obtain accurate approximations to many intractable statistics of non-parametric densities beyond
(max-)marginals.
References
[1] Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet Classification with Deep Convolutional
Neural Networks. NIPS (2012)
[2] He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. CVPR
(2016)
[3] Shelhamer, E., Long, J., Darrell, T.: Fully Convolutional Networks for Semantic Segmentation.
PAMI (2016)
[4] Girshick, R.: Fast R-CNN. ICCV (2015)
[5] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards Real-Time Object Detection
with Region Proposal Networks. arXiv:1506.01497 [cs.CV] (2015)
[6] Collobert, R., Weston, J.: A Unified Architecture for Natural Language Processing: Deep
Neural Networks with Multitask Learning. ICML (2008)
[7] Bahdanau, D., Cho, K., Bengio, Y.: Neural Machine Translation by Jointly Learning to Align
and Translate. ICLR (2015)
[8] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A Simple
Way to Prevent Neural Networks from Overfitting. JMLR (2014)
[9] Ioffe, S., Szegedy, C.: Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift. ICML (2015)
[10] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale
Hierarchical Image Database. CVPR (2009)
[11] Lin, T.Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D.,
Zitnick, C.L., Dollar, P.: Microsoft COCO: Common Objects in Context. In arXiv:1405.0312
[cs.CV]. (2014)
[12] Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT
Press (2009)
[13] Schwing, A., Urtasun, R.: Fully Connected Deep Structured Networks. arXiv:1503.02351
[cs.CV] (2015)
[14] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.:
Conditional Random Fields as Recurrent Neural Networks. ICCV (2015)
[15] Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image
Recog. ICLR (2015)
[16] Jain, A., Zamir, A.R., Savarese, S., Saxena, A.: Structural-RNN: Deep Learning on SpatioTemporal Graphs. CVPR (2016)
[17] Deng, Z., Vahdat, A., Hu, H., Mori, G.: Structure Inference Machines: Recurrent Neural
Networks for Analyzing Relations in Group Activity Recognition. CVPR (2015)
[18] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.: Semantic Image Segmentation
with Deep Convolutional Nets and Fully Connected CRFs. ICLR (2015)
[19] Chen, L.C., Schwing, A., Yuille, A., Urtasun, R.: Learning Deep Structured Models. ICML
(2015)
[20] Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann (1988)
[21] Hoffman, M.D., Blei, D.M., Wang, C., Paisley, J.: Stochastic Variational Inference. JMLR
(2013)
[22] Ghahramani, Z., Beal, M.: Propagation Algorithms for Variational Bayesian Learning. NIPS
(2001)
[23] Ranganath, R., Gerrish, S., Blei, D.M.: Black Box Variational Inference. JMLR W&CP (2014)
[24] Kraehenbuehl, P., Koltun, V.: Efficient Inference in Fully Connected CRFs with Gaussian Edge
Potentials. NIPS (2012)
[25] Adams, A., Baek, J., Davis, M.A.: Fast High-Dimensional Filtering Using the Permutohedral
Lattice. Computer Graphics Forum (2010)
[26] Campbell, N., Subr, K., Kautz, J.: Fully-Connected CRFs with Non-Parametric Pairwise
Potentials. CVPR (2013)
[27] Koller, D., Lerner, U., Angelov, D.: A General Algorithm for Approximate Inference and its
Application to Hybrid Bayes Nets. UAI (1999)
[28] Isard, M.: Pampas: Real-Valued Graphical Models for Computer Vision. CVPR (2003)
[29] Sudderth, E., Ihler, A., Freeman, W., Willsky, A.: Non-parametric Belief Propagation. CVPR
(2003)
[30] Ihler, A., McAllester, D.: Particle Belief Propagation. AISTATS (2009)
[31] Pacheco, J., Zuffi, S., Black, M.J., Sudderth, E.: Preserving Modes and Messages via Diverse
Particle Selection. ICML (2014)
[32] Park, M., Liu, Y., Collins, R.T.: Efficient Mean Shift Belief Propagation for Vision Tracking.
CVPR (2008)
[33] Scott, D.: Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley (1992)
[34] Kingma, D., Ba, J.: Adam: A Method for Stochastic Optimization. ICLR (2015)
[35] Wang, W., Carreira-Perpiñán, M.Á.: Projection onto the Probability Simplex: An Efficient
Algorithm with a Simple Proof, and an Application. arXiv:1309.1541 [cs.LG] (2013)
[36] Yedidia, J.S., Freeman, W.T., Weiss, Y.: Understanding Belief Propagation and its Generalizations. Technical report, Mitsubishi Electric Research Laboratories (2001)
[37] Sun, W., Venkatramana, A., Gordon, G.J., Boots, B., Bagnell, J.A.: Deeply AggreVaTeD:
Differentiable Imitation Learning for Sequential Prediction. arXiv:1703.01030 [cs.LG] (2017)
[38] Ross, S., Munoz, D., Hebert, M., Bagnell, J.A.: Learning Message-Passing Inference Machines
for Structured Prediction. CVPR (2011)
[39] Weiss, Y., Freeman, W.T.: Correctness of Belief Propagation in Gaussian Graphical Models of
Arbitrary Topology. Neural Computation (2001)
[40] Bishop, C.M.: Mixture Density Networks. Technical report, Aston University (1994)
[41] Lehrmann, A., Gehler, P., Nowozin, S.: A Non-Parametric Bayesian Network Prior of Human
Pose. ICCV (2013)
[42] Kothapa, R., Pacheco, J., Sudderth, E.B.: Max-Product Particle Belief Propagation. Technical
report, Brown University (2011)
Time Warping Invariant Neural Networks
Guo-Zheng Sun, Hsing-Hen Chen and Yee-Chun Lee
Institute for Advanced Computer Studies
and
Laboratory for Plasma Research,
University of Maryland
College Park, MD 20742
Abstract
We proposed a model of Time Warping Invariant Neural Networks (TWINN)
to handle the time warped continuous signals. Although TWINN is a simple modification of well known recurrent neural network, analysis has shown that TWINN completely removes time warping and is able to handle difficult classification problem. It
is also shown that TWINN has certain advantages over the current available sequential
processing schemes: Dynamic Programming(DP)[I], Hidden Markov Model(HMM)[2], Time Delayed Neural Networks(TDNN) [3] and Neural Network Finite
Automata(NNFA)[4].
We also analyzed the time continuity employed in TWINN and pointed out that
this kind of structure can memorize a longer input history compared with the Neural Network Finite Automaton (NNFA). This may help to understand the well-accepted fact
that for learning grammatical inference with an NNFA one has to start with very short
strings in training set.
The numerical example we used is a trajectory classification problem. This
problem, featuring variable sampling rates, internal states, continuous dynamics, heavily time-warped data and deformed phase space trajectories, is
shown to be difficult for other schemes. With TWINN this problem has been learned in
100 iterations. For benchmark we also trained the exact same problem with TDNN and
completely failed as expected.
I. INTRODUCTION
In dealing with the temporal pattern classification or recognition, time warping of input signals is one of the difficult problems we often encounter. Although there are a number of
schemes available to handle time warping, e.g. Dynamic Programming (DP) and Hidden Markov Model(HMM), these schemes also have their own shortcomings in certain aspects. More depressing is that, as far as we know, there are no efficient neural network schemes to handle time
warping. In this paper we proposed a model of Time Warping Invariant Neural Networks
(TWINN) as a solution. Although TWINN is only a simple modification to the well known neural net structure, analysis shows that TWINN has the built-in ability to remove time warping
completely.
The basic idea of TWINN is straightforward. If one plots the state trajectories of a continuous
dynamical system in its phase space, these trajectory curves are independent of time warping
because time warping can only change the time duration when traveling along these trajectories
and does not affect their shapes and structures. Therefore, if we normalize the time dependence
of the state variables with respect to any phase space variable, say the length of trajectory, the
neural network dynamics becomes time warping invariant.
To illustrate the power of the TWINN we tested it with a numerical example of trajectory
classification. This problem, chosen as a typical problem that the TWINN could handle, has the
following properties: (1). The input signals obey a continuous time dynamics and are sampled
with various sampling rates. (2). The dynamics of the de-warped signals has internal states. (3).
The temporal patterns consist of severely time warped signals.
To our knowledge there have not been any neural network schemes which can deal with this
case effectively. We tested it with TDNN and it failed to learn.
In the next section we will introduce the TWINN and prove its time warping invariance. In
Section III we analyze its features and identify the advantages over other schemes. The numerical example of the trajectory classification with TWINN is presented in Section IV.
II. TIME WARPING INVARIANT NEURAL NETWORKS (TWINN)
To process temporal signals, we consider a fully recurrent network, which consists of two
groups of neurons: the state neurons (or recurrent units) represented by vector S(t) and the input
neurons that are clamped to the external input signals {I(t), t = 0, 1, 2, ..., T−1}. The Time
Warping Invariant Neural Networks (TWINN) is simply defined as:
S(t+1) = S(t) + l(t) F(S(t), W, I(t))    (1)
where W is the weight matrix and l(t) is the distance between two consecutive input vectors, defined by the norm
l(t) = ||I(t+1) − I(t)||    (2)
and the mapping function F is a nonlinear function usually referred to as the neural activity function.
For example, for a first-order network it could take the form
F_i(S(t), W, I(t)) = Tanh( Σ_j W_ij (S(t) ⊕ I(t))_j )    (3)
where Tanh(x) is the hyperbolic tangent function and the symbol ⊕ stands for vector concatenation.
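To make Eqs. (1)-(3) concrete, here is a minimal numerical sketch of one TWINN update step. The state size, input size and weight values are arbitrary illustrations, not the paper's actual settings:

```python
import numpy as np

def twinn_step(S, I_t, I_next, W):
    """One TWINN update, Eqs. (1)-(3):
    S(t+1) = S(t) + l(t) * F(S(t), W, I(t)),
    with l(t) = ||I(t+1) - I(t)||           (Eq. 2)
    and  F_i  = tanh(sum_j W_ij (S + I)_j)  (Eq. 3, concatenated inputs)."""
    l = np.linalg.norm(I_next - I_t)   # Eq. (2): step length in input space
    z = np.concatenate([S, I_t])       # vector concatenation S ⊕ I
    return S + l * np.tanh(W @ z)      # Eqs. (1) and (3)
```

Note that when two consecutive inputs coincide, l(t) = 0 and the state does not move, so repeating a sample (one simple form of time warping) leaves the state trajectory unchanged.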
For the purpose of classification (or recognition), we assign a target final state S_k (k = 1, 2, ..., K) to each category of patterns. After we feed the whole sequence {I(0), I(1), I(2), ..., I(T−1)} into the TWINN, the state vector evolves to the final state S(T). We then need to compare S(T) with the target final state S_k for each category k (k = 1, 2, ..., K) and calculate the error:
E_k = (S(T) − S_k)²    (4)
The one with minimal error will be classified as such. The ideal error is zero.
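A sketch of this winner-take-all readout (Eq. 4); the target states in the test are illustrative, not the paper's:

```python
import numpy as np

def classify(S_final, targets):
    """Eq. (4): pick the category k minimizing E_k = ||S(T) - S_k||^2."""
    errors = [float(np.sum((S_final - S_k) ** 2)) for S_k in targets]
    return int(np.argmin(errors))
```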
For the purpose of training, we are given a set of training examples for each category. We
then minimize the error functions given by Eq. (4) using either back-propagation [7] or the forward propagation algorithm [8]. The training process can be terminated when the total error reaches its minimum.
The formula of TWINN as shown in Eq. (1) does not look new. The subtle difference from widely used models is the introduction of the normalization factor l(t) in Eq. (1). The main advantage of doing this lies in its built-in time warping ability. This can be seen directly from its continuous version.
As Eq. (1) is the discrete implementation of a continuous dynamics, we can easily convert it into a continuous version by replacing "t+1" by "t+Δt" and letting Δt → 0. By doing so, we get
lim_{Δt→0} [S(t+Δt) − S(t)] / ||I(t+Δt) − I(t)|| = dS/dL    (5)
where L is the input trajectory length, which can be expressed as the integral
L(t) = ∫_0^t ||dI/dt'|| dt'    (6)
or as a summation (as in the discrete version)
L(t) = Σ_{τ=0}^{t−1} ||I(τ+1) − I(τ)||    (7)
For deterministic dynamics, the distance L(t) is a single-valued function. Therefore, we can
make a unique mapping from t to L, Π: t → L, and any function of t can be transformed into a function of L via this mapping. For instance, the input trajectory I(t) and the state trajectory S(t) can be transformed into I(L) and S(L). By doing so, the discrete dynamics of Eq. (1)
becomes, in the continuous limit,
dS/dL = F(S(L), W, I(L))    (8)
It is obvious that there is no explicit time dependence in Eq. (8) and therefore the dynamics represented by Eq. (8) is time warping independent.
To be more specific, if we draw the trajectory curves of l(t) and S(t) in their phase spaces respectively, these two curves would not be deformed if we only change the time duration when
traveling along the curves. Therefore, if we generate several input sequences {J(t)} using different time warping functions and feed them into TWINN, represented by Eq. (8) or Eq. (1), the
induced state dynamics of S(L) would be the same. Meanwhile, the final state is the solo criterion for classification. Therefore, any time warped signals would be classified by the TWINN
as the same. This is the so called "time warping invariant".
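The invariance argument can be checked numerically: the discrete arc length of Eq. (7) is, up to discretization error, the same for any monotone re-parameterization of the same curve. A small sketch; the circle and the particular warp function are arbitrary examples:

```python
import numpy as np

def path_length(I):
    """Eq. (7): L = sum over t of ||I(t+1) - I(t)|| along the sampled trajectory."""
    return float(np.sum(np.linalg.norm(np.diff(I, axis=0), axis=1)))

def circle(t):
    return np.stack([np.cos(t), np.sin(t)], axis=1)

t_uniform = np.linspace(0.0, 2.0 * np.pi, 2000)
t_warped = 2.0 * np.pi * np.linspace(0.0, 1.0, 2000) ** 2  # a monotone time warp

L_uniform = path_length(circle(t_uniform))
L_warped = path_length(circle(t_warped))
```

Both lengths come out close to the circumference 2π, so any dynamics driven by L rather than t sees (almost) the same input sequence.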
III. ANALYSIS OF TWINN VS. OTHER SCHEMES
We emphasize two points in this section. First, we would analyze the advantages of the
TWINN over the other neural network structures, like TDNN, and other mature and well known
algorithms for time warping, such as HMM and Dynamics Programming. Second, we would analyze the memory capacity of input history for both the continuous dynamical networks as illustrated in Eq. (1) and its discrete companion, Neural Network Finite Automata used in
grammatical inference by Liu [4], Sun [5] and Giles [6]. And we will show by mathematical estimation that the continuity employed in TWINN increases the power of memorizing input history compared with the NNFA.
The Time Delayed Neural Networks (TDNN)[3] has been a useful neural network structure
in processing temporal signals and achieves successes in several applications, e.g. speech recognition. The traditional neural network structures are either feedforward or recurrent. The
TDNN is something in between. The power of TDNN is in its dynamic combination of the spatial processing (as in a feedforward net) and sequential processing (as in a recurrent net with
short time memory). Therefore, the TDNN could detect the local features within each windowed
frame and store their voting scores into the short time memory neurons, and then make a final
decision at the end of input sequence. This technique is suitable for processing the temporal patterns where the classification is decided by the integration of local features. But, it could not
handle the long time correlation across time frames like a state machine. It also does not tolerate
time warping effectively. Each of the time warped patterns will be treated as a new feature. Therefore, TDNN would not be able to handle the numerical example given in this paper, which has
both the severe time warping and the internal states (long time correlation). The benchmark test
has been performed and it proved our prediction. Actually, it can be seen later that in our examples, no matter which category they belong to, all windowed frames would contain similar local
features, the simple integration of local features do not contribute directly to the final classification, rather the whole sinal history will decide the classification.
As for Dynamic Programming, it is to date the most efficient way to cope with the time warping problem. The most impressive feature of dynamic programming is that it accomplishes a global search among all N^N possible paths using only ~O(N²) operations, where N is the length of the input time series and, of course, one operation here represents all calculations involved in evaluating the "score" of one path. But, on the other hand, this is not ideal. If we can do the time warping using a recurrent network, the number of operations will be reduced to ~O(N). This
is a dramatic saving. Another undesirable feature of the current dynamic warping scheme is that the recognition or classification result depends heavily on the pre-selected templates, and therefore one may need a large number of templates for a better classification rate. By adding one or two templates we actually double or triple the number of operations. Therefore, the search for a neural network time warping scheme is a pressing task.
Another available technique for time warping is Hidden Markov Model (HMM), which has
been successfully applied in speech recognition. The way for HMM to deal with time warping
is in terms of statistical behavior of its hidden state transition. Starting from one state qj, HMM
allows a certain probability ~j to forward to another state qj. Therefore, for any given HMM one
could generate various state sequences, say, qlq2q2q3q4q4qS' QlQ2Q2Q2q3Q3q4q4qS' etc., each
with a certain occurrence probability. But, these state sequences are "hidden", the observed part
is a set of speech data or symbol represented by {Sk} for example. HMM also includes a set of
observation probability B=={bjk }, so that when it is in a certain state, say Qj' HMM allows each
symbol from the set {sk} to occur with the probability bjk . Therefore, for any state sequence one
can generate various series of symbols. As an example, let us consider one simple way to generate symbols: in state Qj we generate symbol Sj (with probability bjj ). By doing so, the two state
sequences mentioned above would correspond to two possible symbol sequences:
s1s2s2s3s4s4s5 and s1s2s2s2s3s3s4s4s5. Examining the two strings closely, we find that the second
one may be considered as the time warped version of the first one, or vice versa. If we present
these two strings to the HMM for testing, it will accept them with similar probabilities. This is
the way that HMM tolerates time warping. And, these state transition probabilities of HMM are
learned from the statistics of training set by using re-estimation formula. In this sense, HMM
does not deal with time warping directly, instead, it learns statistical distribution of training set
which contains time warped patterns. Consequently, if one presents a test pattern with time
warped signals which is far away from the statistical distribution of training set, it is very unlikely for a HMM to recognize this pattern.
On the contrary, the model of TWINN we proposed here has intrinsic built-in time warping
nature. Although the TWINN itself has internal states, these internal states are not used for tolerating time warping. Instead, they are used to learn more complex behavior of the "de-warped"
trajectories. In this sense, TWINN could be more powerful than HMM.
Another feature of TWINN that needs to be mentioned is its explicit expression of the continuous mapping from S(t) to S(t+1), as shown in Eq. (1). In our early work [4,5,6], to train an NNFA (Neural
Network Finite Automaton), we used a discrete mapping
S(t+1) = F(S(t), W, I(t))    (9)
where F is a nonlinear function, say the sigmoid function g(x) = 1/(1+e^{−x}). This model has been
successfully applied to grammatical inference. The reason we call Eq. (1) a continuous mapping but Eq. (9) a discrete one, even though both are implemented in discrete time steps, is that there is an explicit infinitesimal factor l(t) in Eq. (1). Due to this factor the continuous state dynamics is guaranteed, by which we mean that the state variation S(t+1) − S(t) approaches zero if the input variation I(t+1) − I(t) does so. But, in general, the state
variation S(t+1) − S(t) generated by Eq. (9) is of order one, regardless of what the input variations are. If one starts from random initial weights, Eq. (9) produces discrete jumps between different, randomly distributed states, which is far from any continuous dynamics.
We did a numerical test using the NNFA of Eq. (9) to learn the classification problem of continuous trajectories as shown in Section IV. For simplicity we did not include time warping, but the NNFA still failed to learn. The reason is that when we tried to train an NNFA to learn the continuous dynamics, we were actually forcing the weights to generate an almost identical mapping F from S(t) to S(t+1). This is a very strong constraint on the weight parameters, such that it drives the diagonal terms to positive infinity and the off-diagonal terms to negative infinity (a sigmoid function is used). When this happens, learning gets stuck due to the saturation effect.
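The saturation effect can be seen directly from the sigmoid derivative: g'(x) = g(x)(1 − g(x)) vanishes as |x| grows, so once the weights are pushed toward large magnitudes to realize a near-identity map, the gradient signal dies. A toy illustration (the sample points are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gradient of the sigmoid at increasingly saturated pre-activations.
xs = np.array([0.0, 2.0, 5.0, 10.0])
grads = sigmoid(xs) * (1.0 - sigmoid(xs))  # g'(x) = g(x)(1 - g(x))
```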
The failure of the NNFA may also come from its short history memory capacity compared to the continuous mapping of Eq. (1). It has been shown by many numerical experiments on grammatical inference [4,5,6] that to train an NNFA as in Eq. (9) effectively, one has to start with short training patterns (usually, sentence length ≤ 4). Otherwise, learning will fail or be very slow. This is exactly what happened when learning the trajectory classification using the NNFA, where the lengths of our training patterns are in general considerably long (normally ~60). But TWINN learned it easily. To understand the NNFA's failure and TWINN's success, in the following we will analyze how the history information enters the learning process.
Consider the example of learning grammatical inference. Before training, since we have no a priori knowledge about the target values of the weights, we normally start with random initial values. On the other hand, during training the credit assignment (or the weight correction ΔW) can only be done at the end of each input sequence. Consequently, each ΔW should explicitly contain information about all symbols in that string, otherwise the learning is meaningless. But in a numerical implementation every variable, including both ΔW and W, has a finite precision, and any information beyond the precision range will be lost. Therefore, to compare which model has the longer history memory we need to examine how the history information relates to the finite precisions of ΔW and W.
Let us illustrate this point with a simple second-order connected fully recurrent network and
write both Eq. (1) and Eq. (9) in a unified form
S(t+1) = G_{t+1}    (10)
such that Eq. (1) is represented by
G_{t+1} = S(t) + l(t) g(K(t))    (11)
and Eq. (9) is just
G_{t+1} = g(K(t))    (12)
where K(t) is the weighted sum of the concatenation of the vectors S(t) and I(t):
K_i(t) = Σ_j W_ij (S(t) ⊕ I(t))_j    (13)
For a grammatical inference problem the error is calculated from the final state S(T) as
E = (S(T) − S_target)²    (14)
Learning is to minimize this error function. According to the standard error back-propagation
scheme, the recurrent net can be viewed as a multi-layered net with identical weights between
neurons at adjacent time steps: w(t) = W, where w(t) denotes the "t-th layer" weights connecting input S(t−1) to output S(t). The total weight correction is the summation of the weight corrections at each layer. Using the gradient descent scheme one immediately has
ΔW = Σ_{t=1}^{T} δw(t) = −η Σ_{t=1}^{T} ∂E/∂w(t) = −η Σ_{t=1}^{T} (∂E/∂S(t)) · (∂G_t/∂w(t))    (15)
If we define new symbols, a vector u(t), a second-order tensor A(t) and a third-order tensor B(t), as
u_i(t) ≡ ∂E/∂S_i(t),   A_ij(t) ≡ ∂G_{t+1,i}/∂S_j(t),   B_ijk(t) ≡ ∂G_{t,i}/∂W_jk    (16)
the weight correction can be simply written as
ΔW = −η Σ_{t=1}^{T} u(t)·B(t)    (17)
and the "error rate" u(t) can be back-propagated using the Derivative Chain Rule
u(t) = u(t+1) · A(t),   t = 1, 2, ..., T−1    (18)
so that it is easy to obtain
u(t) = u(T) · A(T−1) · A(T−2) · ... · A(t) = u(T) · Π_{t'=t}^{T−1} A(t'),   t = 1, 2, ..., T−1    (19)
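The backward recursion of Eqs. (18)-(19) is just a right-to-left product of Jacobians. A minimal sketch, with toy placeholder matrices standing in for the A(t):

```python
import numpy as np

def backprop_error_rates(u_T, A_list):
    """Eqs. (18)-(19): u(t) = u(t+1) @ A(t), starting from u(T).
    A_list holds A(1), ..., A(T-1); returns [u(1), ..., u(T)]."""
    u = [np.asarray(u_T, dtype=float)]
    for A in reversed(A_list):          # apply A(T-1), A(T-2), ..., A(1)
        u.append(u[-1] @ A)
    return list(reversed(u))
```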
First, let us examine the model of the NNFA in Eq. (9). Using Eqs. (12), (13) and (16), A_ij(t) and B_ijk(t) can be written as
A_ij(t) = g'(K_i(t)) W_ij,   B_ijk(t) = δ_ij g'(K_i(t−1)) (S(t−1) ⊕ I(t−1))_k    (20)
where g'(x) ≡ dg/dx = g(1−g) is the derivative of the sigmoid function and δ_ij is the Kronecker delta. If we substitute B_ijk(t) into Eq. (17), ΔW becomes a weighted sum of all input symbols {I(0), I(1), I(2), ..., I(T−1)}, each with a different weighting factor u(t). Therefore, to guarantee that ΔW contains the information of all input symbols {I(0), I(1), I(2), ..., I(T−1)}, the ratio |u(t)|_max / |u(t)|_min should be within the precision range of ΔW. This is the main point.
The exact mathematical analysis has not been done, but a rough estimate gives good understanding. From Eq. (19), u(t) is a matrix product of the A(t), and u(1), the coefficient of I(0), contains the highest-order product of the A(t). The key point is that the coefficient ratio between adjacent symbols, |u(t)|/|u(t+1)|, is of the order of |A_ij(t)|, which is a small value; therefore the earlier symbol information can be lost from ΔW due to its finite precision. It can be shown that g'(x) = g(x)(1−g(x)) < 0.25 for any real value of x. Then we roughly have |A_ij(t)| = |g' W_ij| = |g(1−g) W_ij| < 0.25, if we assume the weights W_ij to be of order 1. Thus the ratio R = |u(t)|_max / |u(t)|_min is estimated as
R ~ |u(1)|/|u(T)| ~ Π_{t'=1}^{T−1} |A(t')| < 2^{−2(T−1)}    (21)
From Eq. (21) we see that if the input pattern length is T = 10 we need at least 2(T−1) = 18 bits of computer memory to store the weight variables (including u, W and ΔW). If T = 60, as in the trajectory classification problem, it requires at least 128-bit weight variables. This is why the NNFA of
Eq. (9) could not work.
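The precision requirement in Eq. (21) can be tabulated directly: with every |A(t)| factor bounded by 0.25 = 2^-2, the coefficient of the earliest symbol shrinks by two bits per time step. A back-of-the-envelope sketch, not a tight bound:

```python
def earliest_symbol_weight(T, a=0.25):
    """Eq. (21): rough bound |u(1)|/|u(T)| < a**(T-1), with a = 0.25
    the maximum of the sigmoid derivative (times order-1 weights)."""
    return a ** (T - 1)

def bits_needed(T):
    """About 2*(T-1) bits of precision are needed to retain I(0)."""
    return 2 * (T - 1)
```

For T = 60 this gives 2(T−1) = 118 bits; reading the text's "128-bit" requirement as rounding up to an available word size is our assumption.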
Similarly, for the dynamics of Eq. (1), we use Eqs. (11), (13) and (16), and obtain
A_ij(t) = δ_ij + l(t) g'(K_i(t)) W_ij,   B_ijk(t) = l(t−1) δ_ij g'(K_i(t−1)) (S(t−1) ⊕ I(t−1))_k    (22)
From Eq. (22) we see that no matter how small the factor l(t) is, |A_ij(t)| remains of order one; therefore the ratio R = |u(t)|_max / |u(t)|_min, which is estimated as a product of the |A_ij(t)|, is also of order one, in contrast to the result for the discrete case in Eq. (21). Therefore, the contributions from all of {I(0), I(1), I(2), ..., I(T−1)} to the weight correction ΔW are of the same order. This prevents information loss during learning.
IV NUMERICAL SIMULATION
We demonstrate the power of TWINN with a trajectory classification problem. The three 2-D trajectory equations are artificially given by
Class 1: x(t) = sin(t+β) |sin(t)|,   y(t) = cos(t+β) |sin(t)|
Class 2: x(t) = sin(0.5t+β) sin(1.5t),   y(t) = cos(0.5t+β) sin(1.5t)
Class 3: x(t) = sin(t+β) sin(2t),   y(t) = cos(t+β) sin(2t)    (23)
where β is a uniformly distributed random parameter. When β is changed, these trajectories are
distorted accordingly. Some examples (three for each class) are shown in Fig. 1.
[Fig. 1: PHASE SPACE TRAJECTORIES. Three different shapes of 2-D trajectory (Class 1, Class 2, Class 3), each shown in one column with three examples.]
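The three families in Eq. (23) are easy to generate. A sketch; β and the sampling grid are arbitrary:

```python
import numpy as np

def trajectory(cls, beta, t):
    """Eq. (23): the three 2-D trajectory families, indexed cls = 0, 1, 2."""
    if cls == 0:
        x = np.sin(t + beta) * np.abs(np.sin(t))
        y = np.cos(t + beta) * np.abs(np.sin(t))
    elif cls == 1:
        x = np.sin(0.5 * t + beta) * np.sin(1.5 * t)
        y = np.cos(0.5 * t + beta) * np.sin(1.5 * t)
    else:
        x = np.sin(t + beta) * np.sin(2.0 * t)
        y = np.cos(t + beta) * np.sin(2.0 * t)
    return x, y
```

For class 0, x² + y² = sin²(t) regardless of β, so changing β deforms the curve without changing its radial profile.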
Recurrent neural networks are trained to recognize the different shapes of trajectory.
The trajectory data are the time series of two dimensional coordinate pairs {x(t), y(t)} sampled
along three different types of curves in the phase space. The neural net dynamics of TWINN is
S(t+1) = S(t) + l(t) Tanh( W (S(t) ⊕ I(t)) )    (24)
where we used 6 input neurons I = {1, x(t), y(t), x²(t), y²(t), x(t)y(t)} (normalized to norm = 1.0)
and 4 (N = 4) state neurons S = {S1, S2, S3, S4}. The neural network structure is shown in Fig. 2.
Fig.2 Time Warping Invariant Neural Network
for Trajectory Classification
Fig.3 Time Delayed Neural Network
for Trajectory Classification
For training, we assign the desired final output for the three trajectory classes to be (1,0,0),
(0,1,0) and (0,0,1) respectively. For recognition, each trajectory data sequence needs to be fed
to the input neurons and the state neurons evolve according to the dynamics in Eq. (24). At the
end of input series we check the last three state neurons and classify the input trajectory according to the "winner-take-all" rule.
In each iteration of training we randomly picked up 150 deformed trajectories, 50 for each of
the three categories, by choosing different values of β within 0 ≤ β ≤ 2π. To simulate time warping we randomly sampled the data by choosing a random time step Δt = 2πr/T along each trajectory, where r is a random number between 0 and 2 and the sampling rate T = 60 for training
patterns, and T=20 to 200 for testing patterns. Therefore, each training pattern is a time warped
trajectory data with averaged length = 60. Using the RTRL algorithm [8] to minimize the error function, after 100 iterations of training it converged to a Mean Square Error of ≈ 0.03.
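The random time-warped sampling described above can be sketched as follows, assuming the step rule is Δt = 2πr/T with r uniform in [0, 2); the seed is arbitrary:

```python
import numpy as np

def warped_time_grid(T=60, seed=0):
    """Random sampling times: t_k = cumulative sum of steps
    dt = 2*pi*r/T with r ~ Uniform[0, 2), so the average grid spans ~2*pi."""
    rng = np.random.default_rng(seed)
    dt = 2.0 * np.pi * rng.uniform(0.0, 2.0, size=T) / T
    return np.cumsum(dt)
```

Feeding trajectory samples taken at such a grid into the network gives a differently warped training pattern on every draw.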
We tested the trained network with hundreds of randomly picked input sequences with different sampling rates (from 20/2π to 200/2π) and different warping functions (non-uniform step lengths). All input trajectories are classified correctly. If the sampling rates are too large (>200) or too small (<20), some classification errors will occur.
We test the same example with TDNN. See Fig.3 for its parameters. The top layer contains
three output neurons for the three classes of trajectories. The classification rules, error function
and training patterns are the same as those of TWINN. After three days of training on a DEC3100 workstation the training error (MSE) approaches 0.5 and in testing the error rate is 70%.
V. CONCLUSION
We have proposed a model of Time Warping Invariant Neural Network to handle temporal
pattern classification where the severely time warped and deformed data may occur. This model
is shown to have built-in time warping ability. We have analyzed the properties of TWINN and
shown that for trajectory classification it has several advantages over other schemes: HMM, DP,
TDNN and NNFA.
We also numerically implemented the TWINN and trained a trajectory classification easily.
This problem is shown by analysis to be difficult to other schemes. It has been trained with
TDNN but failed.
References
[1] H. Sakoe and S. Chiba, "Dynamic Programming Algorithm Optimization for Spoken
Word Recognition", IEEE Transactions on Acoustics Speech and Signal Processing, Vol.
ASSP-26, pp.43-49, Feb. 1978.
[2] L.R. Rabiner and B.H. Juang, "An Introduction to Hidden Markov Models", IEEE ASSP Magazine, Vol. 3, No. 1, pp. 4-16, 1986.
[3] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang, "Phoneme Recognition Using Time-Delay Neural Networks", IEEE Transactions on Acoustics, Speech and Signal Processing, March 1989.
[4] Y.D. Liu, G.Z. Sun, H.H. Chen, C.L. Giles and Y.C. Lee, "Grammatical Inference and Neural Network State Machine", Proceedings of the International Joint Conference on Neural Networks, pp. I-285, Washington D.C. (1990).
[5] G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee and D. Chen, "Connectionist Pushdown Automata that Learn Context-Free Grammars", Proceedings of the International Joint Conference on Neural Networks, pp. I-577, Washington D.C. (1990).
[6] Giles, C.L., Sun, G.Z., Chen, H.H., Lee, Y.C., and Chen, D. (1990). "Higher Order Recurrent Networks & Grammatical Inference". Advances in Neural Information Processing Systems 2, D.S. Touretzky (editor), 380-386, Morgan Kaufmann, San Mateo, CA.
[7] D. Rumelhart, G. Hinton, and R. Williams, "Learning internal representations by error propagation", in PDP, Vol. 1, MIT Press, 1986. P. Werbos, "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", Ph.D. thesis, Harvard University, 1974.
[8] R. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks", Neural Computation 1 (1989), pp. 270-280.
Learning Active Learning from Data
Ksenia Konyushkova?
CVLab, EPFL
Lausanne, Switzerland
[email protected]
Raphael Sznitman
ARTORG Center, University of Bern
Bern, Switzerland
[email protected]
Pascal Fua
CVLab, EPFL
Lausanne, Switzerland
[email protected]
Abstract
In this paper, we suggest a novel data-driven approach to active learning (AL).
The key idea is to train a regressor that predicts the expected error reduction for a
candidate sample in a particular learning state. By formulating the query selection
procedure as a regression problem we are not restricted to working with existing
AL heuristics; instead, we learn strategies based on experience from previous AL
outcomes. We show that a strategy can be learnt either from simple synthetic 2D
datasets or from a subset of domain-specific data. Our method yields strategies that
work well on real data from a wide range of domains.
1 Introduction
Many modern machine learning techniques require large amounts of training data to reach their full
potential. However, annotated data is hard and expensive to obtain, notably in specialized domains
where only experts whose time is scarce and precious can provide reliable labels. Active learning
(AL) aims to ease the data collection process by automatically deciding which instances an annotator
should label to train an algorithm as quickly and effectively as possible.
Over the years many AL strategies have been developed for various classification tasks, without
any one of them clearly outperforming others in all cases. Consequently, a number of meta-AL
approaches have been proposed to automatically select the best strategy. Recent examples include
bandit algorithms [2, 11, 3] and reinforcement learning approaches [5]. A common limitation of these
methods is that they cannot go beyond combining pre-existing hand-designed heuristics. Besides,
they require reliable assessment of the classification performance which is problematic because
the annotated data is scarce. In this paper, we overcome these limitations thanks to two features
of our approach. First, we look at a whole continuum of AL strategies instead of combinations
of pre-specified heuristics. Second, we bypass the need to evaluate the classification quality from
application-specific data because we rely on experience from previous tasks and can seamlessly
transfer strategies to new domains.
More specifically, we formulate Learning Active Learning (LAL) as a regression problem. Given
a trained classifier and its output for a specific sample without a label, we predict the reduction in
generalization error that can be expected by adding the label to that datapoint. In practice, we show
that we can train this regression function on synthetic data by using simple features, such as the
variance of the classifier output or the predicted probability distribution over possible labels for a
* http://ksenia.konyushkova.com
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
specific datapoint. The features for the regression are not domain-specific and this enables to apply
the regressor trained on synthetic data directly to other classification problems. Furthermore, if a
sufficiently large annotated set can be provided initially, the regressor can be trained on it instead
of on synthetic data. The resulting AL strategy is then tailored to the particular problem at hand.
We show that LAL works well on real data from several different domains such as biomedical
imaging, economics, molecular biology and high energy physics. This query selection strategy
outperforms competing methods without requiring hand-crafted heuristics and at a comparatively low
computational cost.
2 Related work
The extensive development of AL in the last decade has resulted in various strategies. They include
uncertainty sampling [32, 15, 27, 34], query-by-committee [7, 13], expected model change [27,
30, 33], expected error or variance minimization [14, 9] and information gain [10]. Among these,
uncertainty sampling is both simple and computationally efficient. This makes it one of the most
popular strategies in real applications. In short, it suggests labeling samples that are the most uncertain,
i.e., closest to the classifier's decision boundary. The above methods work very well in cases such
as the ones depicted in the top row of Fig. 2, but often fail in the more difficult ones depicted in the
bottom row [2].
Among AL methods, some cater to specific classifiers, such as those relying on Gaussian processes [16], or to specific applications, such as natural language processing [32, 25], sequence
labeling tasks [28], visual recognition [21, 18], semantic segmentation [33], foreground-background
segmentation [17], and preference learning [29, 22]. Moreover, various query strategies aim to
maximize different performance metrics, as evidenced in the case of multi-class classification [27].
However, there is no one algorithm that consistently outperforms all others in all applications [28].
Meta-learning algorithms have been gaining in popularity in recent years [31, 26], but few of them
tackle the problem of learning AL strategies. Baram et al. [2] combine several known heuristics
with the help of a bandit algorithm. This is made possible by the maximum entropy criterion, which
estimates the classification performance without labels. Hsu et al. [11] improve it by moving the focus
from datasamples as arms to heuristics as arms in the bandit and use a new unbiased estimator of
the test error. Chu and Lin [3] go further and transfer the bandit-learnt combination of AL heuristics
between different tasks. Another approach is introduced by Ebert et al. [5]. It involves balancing
exploration and exploitation in the choice of samples with a Markov decision process.
The two main limitations of these approaches are as follows. First, they are restricted to combining
already existing techniques and second, their success depends on the ability to estimate the classification performance from scarce annotated data. The data-driven nature of LAL helps to overcome
these limitations. Sec. 5 shows that it outperforms several baselines including those of Hsu et al. [11]
and Kapoor et al. [16].
3 Towards data-driven active learning
In this section we briefly introduce the active learning framework along with uncertainty sampling
(US), the most frequently-used AL heuristic. Then, we motivate why a data-driven approach can
improve AL strategies and how it can deal with the situations where US fails. We select US as a
representative method because it is popular and widely applicable; however, the behavior that we
describe is typical for a wide range of AL strategies.
3.1 Active learning (AL)
Given a machine learning model and a pool of unlabeled data, the goal of AL is to select which data
should be annotated in order to learn the model as quickly as possible. In practice, this means that
instead of asking experts to annotate all the data, we select iteratively and adaptively which datapoints
should be annotated next. In this paper we are interested in classifying datapoints from a target
dataset $Z = \{(x_1, y_1), \dots, (x_N, y_N)\}$, where $x_i$ is a $D$-dimensional feature vector and $y_i \in \{0, 1\}$ is its binary label. We choose a probabilistic classifier $f$ that can be trained on some $L_t \subset Z$ to map features to labels, $f_t(x_i) = \hat{y}_i$, through the predicted probability $p_t(y_i = y \mid x_i)$. The standard AL
procedure unfolds as follows.
1. The algorithm starts with a small labeled training dataset $L_t \subset Z$ and a large pool of unannotated data $U_t = Z \setminus L_t$, with $t = 0$.
2. A classifier ft is trained using Lt .
3. A query selection procedure picks an instance $x^* \in U_t$ to be annotated at the next iteration.
4. $x^*$ is given a label $y^*$ by an oracle. The labeled and unlabeled sets are updated.
5. $t$ is incremented, and steps 2–5 iterate until the desired accuracy is achieved or the number
of iterations has reached a predefined limit.
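The five steps above can be sketched as a short Python loop. The logistic-regression classifier and the trivial "first in pool" selector below are illustrative stand-ins, not the experimental setup of the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X, y, select, n_queries=10):
    """Pool-based AL: steps 1-5 above. `select(model, X, unlabeled)` returns
    the index of the next sample to be labeled by the oracle."""
    # step 1: small initial labeled set, one sample per class
    labeled = [int(np.flatnonzero(y == c)[0]) for c in (0, 1)]
    unlabeled = [i for i in range(len(y)) if i not in labeled]
    model = LogisticRegression()
    for _ in range(n_queries):
        model.fit(X[labeled], y[labeled])   # step 2: train f_t on L_t
        i = select(model, X, unlabeled)     # step 3: query selection
        labeled.append(i)                   # step 4: oracle provides the label
        unlabeled.remove(i)                 # ... and the sets are updated
    model.fit(X[labeled], y[labeled])       # step 5: iterate until the budget ends
    return model, labeled

# usage on a toy two-cloud dataset, with a trivial "first in pool" selector
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model, labeled = active_learning_loop(X, y, select=lambda m, X_, u: u[0])
```

Any AL strategy then reduces to a different implementation of `select`.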
Uncertainty sampling (US) US has been reported to be successful in numerous scenarios and
settings and despite its simplicity, it often works remarkably well [32, 15, 27, 34, 17, 24]. It focuses
its selection on samples which the current classifier is the least certain about. There are several
definitions of maximum uncertainty, but one of the most widely used is to select the sample $x^*$ that maximizes the entropy $H$ of the predicted class probabilities:

$$x^* = \arg\max_{x_i \in U_t} H[p_t(y_i = y \mid x_i)]. \qquad (1)$$
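In code, Eq. (1) amounts to an argmax of the entropy of the predicted class probabilities; a minimal NumPy sketch for the binary case:

```python
import numpy as np

def entropy(p):
    """Binary entropy H of the predicted probabilities p = p(y=0|x)."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)  # avoid log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def uncertainty_sampling(probs):
    """Eq. (1): index of the unlabeled sample with maximal entropy."""
    return int(np.argmax(entropy(probs)))

# the sample with probability closest to 0.5 is selected
assert uncertainty_sampling([0.9, 0.48, 0.1, 0.7]) == 1
```

For two classes, maximizing the entropy is equivalent to picking the sample whose predicted probability is closest to 0.5.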
3.2 Success, failure, and motivation
We now motivate the need for LAL by presenting two toy examples. In the first one, US is empirically observed to be the best greedy approach, but in the second it makes suboptimal decisions. Let us consider simple two-dimensional datasets $Z$ and $Z'$ drawn from the same distribution with an equal number of points in each class (Fig. 1, left). The data in each class comes from a Gaussian distribution with a different mean and the same isotropic covariance. We can initialize the AL procedure of Sec. 3.1 with one sample from each class and its respective label: $L_0 = \{(x_1, 0), (x_2, 1)\} \subset Z$ and $U_0 = Z \setminus L_0$. Here we train a simple logistic regression classifier $f$ on $L_0$ and then test it on $Z'$. If $|Z'|$ is large, the test error can be considered a good approximation of the generalization error: $\ell_0 = \sum_{(x', y') \in Z'} \ell(\hat{y}, y')$, where $\hat{y} = f_0(x')$.

Let us try to label every point $x$ from $U_0$ one by one, form a new labeled set $L_x = L_0 \cup \{(x, y)\}$, and check what error a new classifier $f_x$ yields on $Z'$, that is, $\ell_x = \sum_{(x', y') \in Z'} \ell(\hat{y}, y')$ with $\hat{y} = f_x(x')$. The difference between the errors obtained with classifiers constructed on $L_0$ and $L_x$ indicates how much the addition of the new datapoint $x$ reduces the generalization error: $\delta_x = \ell_0 - \ell_x$. We plot $\delta_x$ for the 0/1 loss function, averaged over 10 000 experiments, as a function of the predicted probability $p_0$ (Fig. 1, left). By design, US would select a datapoint with probability of class 0 close to 0.5. We observe that in this experiment, the datasample with $p_0$ closest to 0.5 is indeed the one that yields the greatest error reduction.
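The Monte-Carlo estimate of the error reduction described above can be sketched as follows. This is a simplified re-implementation with scikit-learn's logistic regression; the sample sizes and seed are ours, not the paper's 10 000-run protocol:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def error_reduction(L_X, L_y, x, y_x, test_X, test_y):
    """delta_x = l0 - lx: reduction in 0/1 test loss from adding (x, y_x)."""
    f0 = LogisticRegression().fit(L_X, L_y)
    l0 = np.mean(f0.predict(test_X) != test_y)
    fx = LogisticRegression().fit(np.vstack([L_X, x]), np.append(L_y, y_x))
    lx = np.mean(fx.predict(test_X) != test_y)
    return l0 - lx

rng = np.random.default_rng(0)
cloud = lambda mu, n: rng.normal(mu, 1.0, size=(n, 2))
train_X = np.vstack([cloud(0.0, 100), cloud(3.0, 100)])   # Z
train_y = np.array([0] * 100 + [1] * 100)
test_X = np.vstack([cloud(0.0, 500), cloud(3.0, 500)])    # Z'
test_y = np.array([0] * 500 + [1] * 500)
L_X, L_y = train_X[[0, 100]], train_y[[0, 100]]           # seed set L0
# delta_x for every remaining candidate point
deltas = [error_reduction(L_X, L_y, train_X[i:i + 1], train_y[i], test_X, test_y)
          for i in range(1, 200) if i != 100]
```

Plotting `deltas` against the probability each candidate receives from `f0` reproduces the shape of the curves in Fig. 1.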
Figure 1: Balanced vs. unbalanced. Left: two Gaussian clouds of the same size. Right: two Gaussian clouds with class 0 twice as large as class 1. The test-error reduction is shown as a function of the predicted probability of class 0 in the respective datasets.
In the next experiment, the class 0 contains twice as many datapoints as the other class, see Fig. 1
(right). As before, we plot the average error reduction as a function of p0 . We observe this time that
the value of p0 that corresponds to the largest expected error reduction is different from 0.5 and thus
the choice of US becomes suboptimal. Also, the reduction in error is no longer symmetric for the two
classes. The more imbalanced the two classes are, the further from the optimum the choice made by
US is. In a complex realistic scenario, there are many other factors such as label noise, outliers and
shape of distribution that further compound the problem.
Although query selection procedures can take into account statistical properties of the datasets and
classifier, there is no simple way to foresee the influence of all possible factors. Thus, in this paper,
we suggest Learning Active Learning (LAL). It uses properties of classifiers and data to predict the
potential error reduction. We tackle the query selection problem by using a regression model; this
perspective enables us to construct new AL strategies in a flexible way. For instance, in the example
of Fig. 1 (right) we expect LAL to learn a model that automatically adapts its selection to the relative
prevalence of the two classes without having to explicitly state such a rule. Moreover, having learnt
the error reduction prediction function, we can seamlessly transfer LAL strategy to other domains
with very little annotated data.
4 Monte-Carlo LAL
Our approach to AL is data-driven and can be formulated as a regression problem. Given a representative dataset with ground truth, we simulate an online learning procedure using a Monte-Carlo
technique. We propose two versions of AL strategies that differ in the way how datasets for learning
a regressor are constructed. When building the first one, LAL INDEPENDENT, we incorporate unused
labels individually and at random to retrain the classifier. Our goal is to correlate the change in
test performance with the properties of the classifier and of newly added datapoint. To build the
LAL ITERATIVE strategy, we further extend our method by a sequential procedure to account for
selection bias caused by AL. We formalize our LAL procedures in the remainder of the section.
4.1 Independent LAL
Let the representative dataset² consist of a training set $D$ and a testing set $D'$. Let $f$ be a classifier with a given training procedure. We start collecting data for the regressor by splitting $D$ into a labeled set $L_\tau$ of size $\tau$ and an unlabeled set $U_\tau$ containing the remaining points (Alg. 1, DATAMONTECARLO). We then train a classifier $f$ on $L_\tau$, resulting in a function $f_\tau$ that we use to predict class labels for elements $x'$ of the test set $D'$ and to estimate the test classification loss $\ell_\tau$. We characterize the classifier state by $K$ parameters $\phi_\tau = \{\phi_\tau^1, \dots, \phi_\tau^K\}$, which are specific to the particular classifier type and are sensitive to changes in the training set while being relatively invariant to the stochasticity of the optimization procedure. For example, they can be the parameters of the kernel function if $f$ is kernel-based, the average depths of the trees if $f$ is a tree-based method, or the prediction variability if $f$ is an ensemble classifier. The above steps are summarized in lines 3–5 of Alg. 1.
Algorithm 1 DATAMONTECARLO
1: Input: training set D, test set D', classification procedure f, partitioning function SPLIT, size τ
2: Initialize: L_τ, U_τ ← SPLIT(D, τ)
3: train a classifier f_τ
4: estimate the test set loss ℓ_τ
5: compute the classification state parameters {φ_τ^1, …, φ_τ^K}
6: for m = 1 to M do
7:   select x ∈ U_τ at random
8:   form a new labeled dataset L_x ← L_τ ∪ {x}
9:   compute the datapoint parameters {ψ_x^1, …, ψ_x^R}
10:  train a classifier f_x
11:  estimate the new test loss ℓ_x
12:  compute the loss reduction δ_x ← ℓ_τ − ℓ_x
13:  ξ_m ← (φ_τ^1 ⋯ φ_τ^K  ψ_x^1 ⋯ ψ_x^R),  δ_m ← δ_x
14: Ξ ← {ξ_m}, Δ ← {δ_m}, 1 ≤ m ≤ M
15: Return: matrix of learning states Ξ ∈ R^{M×(K+R)}, vector of reductions in error Δ ∈ R^M

² The representative dataset is an annotated dataset that does not need to come from the domain of interest. In Sec. 5 we show that a simple synthetic dataset is sufficient for learning strategies that can be applied to various real tasks across various domains.
Algorithm 2 BUILD LAL INDEPENDENT
1: Input: iteration range {τ_min, …, τ_max}, classification procedure f
2: SPLIT ← random partitioning function
3: Initialize: generate train set D and test dataset D'
4: for τ in {τ_min, …, τ_max} do
5:   for q = 1 to Q do
6:     Ξ_{τq}, Δ_{τq} ← DATAMONTECARLO(D, D', f, SPLIT, τ)
7: Ξ, Δ ← {Ξ_{τq}}, {Δ_{τq}}
8: train a regressor g : ξ ↦ δ on data Ξ, Δ
9: construct LAL INDEPENDENT A(g): x* = argmax_{x ∈ U_t} g(ξ_{t,x})
10: Return: LAL INDEPENDENT

Algorithm 3 BUILD LAL ITERATIVE
1: Input: iteration range {τ_min, …, τ_max}, classification procedure f
2: SPLIT ← random partitioning function
3: Initialize: generate train set D and test dataset D'
4: for τ in {τ_min, …, τ_max} do
5:   for q = 1 to Q do
6:     Ξ_{τq}, Δ_{τq} ← DATAMONTECARLO(D, D', f, SPLIT, τ)
7:   Ξ_τ, Δ_τ ← {Ξ_{τq}, Δ_{τq}}
8:   train a regressor g_τ : ξ ↦ δ on Ξ_τ, Δ_τ
9:   SPLIT ← A(g_τ)
10: Ξ, Δ ← {Ξ_τ, Δ_τ}
11: train a regressor g : ξ ↦ δ on Ξ, Δ
12: construct LAL ITERATIVE A(g)
13: Return: LAL ITERATIVE
Next, we randomly select a new datapoint $x$ from $U_\tau$, which is characterized by $R$ parameters $\psi_x = \{\psi_x^1, \dots, \psi_x^R\}$. For example, they can include the predicted probability to belong to class $y$, the distance to the closest point in the dataset, or the distance to the closest labeled point, but they do not include the features of $x$ itself. We form a new labeled set $L_x = L_\tau \cup \{x\}$ and retrain $f$ (lines 7–13 of Alg. 1). The new classifier $f_x$ results in the test-set loss $\ell_x$. Finally, we record the difference between the previous and the new loss, $\delta_x = \ell_\tau - \ell_x$, which is associated to the learning state in which it was received. The learning state is characterized by a vector $\xi_x = (\phi_\tau^1 \cdots \phi_\tau^K \; \psi_x^1 \cdots \psi_x^R) \in \mathbb{R}^{K+R}$, whose elements depend both on the state of the current classifier $f_\tau$ and on the datapoint $x$. To build an AL strategy LAL INDEPENDENT we repeat the DATAMONTECARLO procedure for $Q$ different initializations $L_\tau^1, L_\tau^2, \dots, L_\tau^Q$ and $T$ labeled subset sizes $\tau = 2, \dots, T + 1$ (Alg. 2, lines 4 and 5). For each initialization $q$ and iteration $\tau$, we sample $M$ different datapoints $x$, each of which yields a classifier/datapoint state pair with an associated reduction in error (Alg. 1, line 13). This results in a matrix $\Xi \in \mathbb{R}^{(QMT) \times (K+R)}$ of observations $\xi$ and a vector $\Delta \in \mathbb{R}^{QMT}$ of labels $\delta$ (Alg. 2, line 9).
Our insight is that observations $\xi$ should lie on a smooth manifold and that similar states of the classifier result in similar behaviors when annotating similar samples. From this, a regression function can predict the potential error reduction of annotating a specific sample in a given classifier state. Line 10 of the BUILD LAL INDEPENDENT algorithm looks for a mapping $g : \xi \mapsto \delta$. This mapping is not
specific to the dataset D, and thus can be used to detect samples that promise the greatest increase in
classifier performance in other target domains Z. The resulting LAL INDEPENDENT strategy greedily
selects a datapoint with the highest potential error reduction at iteration t by taking the maximum of
the value predicted by the regressor g:
$$x^* = \arg\max_{x \in U_t} g(\phi_t, \psi_x). \qquad (2)$$

4.2 Iterative LAL
For any AL strategy at iteration t > 0, the labeled set Lt consists of samples selected at previous
iterations, which is clearly not random. However, in Sec. 4.1 the dataset D is split into L? and U?
randomly no matter how many labeled samples ? are available.
To account for this, we modify the approach of Section 4.1 in Alg. 3 BUILD LAL ITERATIVE. Instead
of partitioning the dataset D into L? and U? randomly, we suggest simulating the AL procedure
which selects datapoints according to the strategy learnt on the previously collected data (Alg. 3,
line 10). It first learns a strategy A(g2 ) based on a regression function g2 which selects the most
promising 3rd datapoint when 2 random points are available. In the next iteration, it learns a strategy
A(g3 ) that selects 4th datapoint given 2 random points and 1 selected by A(g2 ) etc. In this way,
samples at each iteration depend on the samples at the previous iteration and the sampling bias of AL
is represented in the data ?, from which the final strategy LAL ITERATIVE is learnt.
The resulting strategies LAL INDEPENDENT and LAL ITERATIVE are both reasonably fast during
the online steps of AL: they just require evaluating the RF regressor. The offline part, generating a
datasets to learn a regression function, can induce a significant computational cost depending on the
parameters of the algorithm. For this reason, LAL INDEPENDENT is preferred to LAL ITERATIVE
when an application-specific strategy is needed.
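At query time, both strategies reduce to evaluating the regressor over the unlabeled pool and taking an argmax, as in Eq. (2). A minimal sketch with a hypothetical stand-in regressor (in practice g is the random-forest regressor trained offline by Alg. 2 or 3):

```python
import numpy as np

def lal_select(g, phi_t, psi, unlabeled):
    """Eq. (2): pick x* in U_t maximizing the predicted reduction g(phi_t, psi_x)."""
    xi = np.array([np.concatenate([phi_t, psi[i]]) for i in unlabeled])
    return unlabeled[int(np.argmax(g(xi)))]

# hypothetical stand-in regressor: pretends the reduction peaks at p(y=0|x) = 0.6
g = lambda xi: -np.abs(xi[:, -1] - 0.6)
phi_t = np.array([0.8, 5.0])                      # classifier-state features (illustrative)
psi = {i: np.array([p]) for i, p in enumerate([0.1, 0.55, 0.9, 0.4])}
chosen = lal_select(g, phi_t, psi, [0, 1, 2, 3])  # index 1: p = 0.55 is closest to 0.6
```

The online cost is therefore one regressor evaluation per unlabeled sample, which is what keeps LAL fast at query time (Tab. 1).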
5 Experiments
Implementation details We test AL strategies in two possible settings: a) cold start, where we
start with one sample from each of two classes and b) warm start, where a larger dataset of size
$N_0 \ll N$ is available to train the initial classifier. In cold start we take the representative dataset
to be a 2D synthetic dataset where class-conditional data distributions are Gaussian and we use the
same LAL regressor in all 7 classification tasks. While we mostly concentrate on cold start scenario,
we look at a few examples of warm start because we believe that it is largely overlooked in the literature, even though it has significant practical interest.
with AL rarely starts from scratch, but a small initial annotated set is provided to understand if a
learning-based approach is applicable at all. While a small set is good to provide an initial insight, a
real working prototype still requires much more training data. In this situation, we can benefit from
the available training data to learn a specialized AL strategy for an application.
In most of the experiments, we use Random Forest (RF) classifiers for f and a RF regressor for
g. The state of the learning process ?t at time t consists of the following features: a) predicted
probability p(y = 0|Lt , x); b) proportion of class 0 in Lt ; c) out-of-bag cross-validated accuracy
of ft ; d) variance of feature importances of ft ; e) forest variance computed as variance of trees?
predictions on Ut ; f) average tree depth of the forest; g) size of Lt . For additional implementational
details, including examples of the synthetic datasets, parameters of the data generation algorithm and
features in the case of GP classification, we refer the reader to the supplementary material. The code
is made available at https://github.com/ksenia-konyushkova/LAL.
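Several of the state features a)–g) can be read directly off a fitted scikit-learn random forest. The sketch below is our illustration of how such features could be computed, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def learning_state(forest, L_X, L_y, U_X, x):
    """Subset of the state features a)-g) for one candidate datapoint x."""
    tree_preds = np.stack([t.predict(U_X) for t in forest.estimators_])
    return {
        "proba_class0": forest.predict_proba(x.reshape(1, -1))[0, 0],   # a)
        "class0_proportion": float(np.mean(L_y == 0)),                  # b)
        "oob_score": getattr(forest, "oob_score_", np.nan),             # c)
        "feat_importance_var": np.var(forest.feature_importances_),     # d)
        "forest_variance": np.mean(np.var(tree_preds, axis=0)),         # e)
        "mean_tree_depth": np.mean([t.get_depth() for t in forest.estimators_]),  # f)
        "labeled_size": len(L_y),                                       # g)
    }

rng = np.random.default_rng(0)
L_X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
L_y = np.array([0] * 20 + [1] * 20)
U_X = rng.normal(1, 1, (50, 2))
forest = RandomForestClassifier(
    n_estimators=20, oob_score=True, bootstrap=True, random_state=0).fit(L_X, L_y)
state = learning_state(forest, L_X, L_y, U_X, U_X[0])
```

Concatenating these values gives one row of the regressor's input matrix, so the same code serves both offline data collection and online query selection.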
Baselines and protocol We consider the three versions of our approach: a) LAL-independent-2D,
LAL INDEPENDENT strategy trained on a synthetic dataset of cold start; b) LAL-iterative-2D,
LAL ITERATIVE strategy trained on a synthetic dataset of cold start; c) LAL-independent-WS,
LAL INDEPENDENT strategy trained on warm start representative data. We compare them against
the following 4 baselines: a) Rs, random sampling; b) Us, uncertainty sampling; c) Kapoor [16], an
algorithm that balances exploration and exploitation by incorporating mean and variance estimation
of the GP classifier; d) ALBE [11], a recent example of meta-AL that adaptively uses a combination
of strategies, including Us, Rs and that of Huang et al. [12] (a strategy that uses the topology of the
feature space in the query selection). The method of Hsu et al. [11] is chosen as our main baseline
because it is a recent example of meta AL and is known to outperform several benchmarks.
In all AL experiments we select samples from a training set and report the classification performance
on an independent test set. We repeat each experiment 50–100 times with random permutations of
training and testing splits and different initializations. Then we report the average test performance as
a function of the number of labeled samples. The performance metrics are task-specific and include
classification accuracy, IOU [6], dice score [8], AMS score [1], as well as area under the ROC curve
(AUC).
5.1 Synthetic data
Two-Gaussian-clouds experiments In this dataset we test our approach with two classifiers: RF
and a Gaussian Process classifier (GPC). Due to the computational cost of GPC, it is only tested in
this experiment. We generate 100 new unseen synthetic datasets of the form as shown in the top row
of Fig. 2 and use them for testing AL strategies. In both cases the proposed LAL strategies select
datapoints that help to construct better classifiers faster than Rs, Us, Kapoor and ALBE.
XOR-like experiments  XOR-like datasets are known to be challenging for many machine learning methods, and AL is no exception. It was reported in Baram et al. [2] that various AL algorithms struggle with tasks such as those depicted in the bottom row of Fig. 2, namely Checkerboard 2×2 and Checkerboard 4×4. Additionally, we consider the Rotated Checkerboard 2×2 dataset (Fig. 2, bottom row, right). The task for RF becomes more difficult in this case because the discriminating features are no longer aligned to the axes. As previously observed [2], Us loses to Rs in these cases. ALBE does not suffer from such adversarial conditions as much as Us, but LAL-iterative-2D outperforms it on all XOR-like datasets.

Figure 2: Experiments on the synthetic data. Top row: RF and GP on two Gaussian clouds. Bottom row, from left to right: experiments on the Checkerboard 2×2, Checkerboard 4×4, and Rotated Checkerboard 2×2 datasets.
5.2 Real data
We now turn to real data from domains where annotating is hard because it requires special training
to do it correctly:
Striatum, 3D Electron Microscopy stack of rat neural tissue, the task is to detect and segment
mitochondria [20, 17];
MRI, brain scans obtained from the BRATS competition [23], the task is to segment brain tumor in
T1, T2, FLAIR, and post-Gadolinium T1 MR images;
Credit card [4], a dataset of credit card transactions made in 2013 by European cardholders, the task
is to detect fraudulent transactions;
Splice, a molecular biology dataset with the task of detecting splice junctions in DNA sequences [19];
Higgs, a high energy physics dataset that contains measurements simulating the ATLAS experiment [1], the task is to detect the Higgs boson in the noise signal.
Additional details about the above datasets including sizes, dimensionalities and preprocessing
techniques can be found in the supplementary materials.
Cold Start AL Top row of Fig. 3 depicts the results of applying Rs, Us, LAL-independent2D, and LAL-iterative-2D on the Striatum, MRI, and Credit card datasets. Both LAL strategies
outperform Us, with LAL-iterative-2D being the best of the two. The best score of Us in these
complex real-life tasks is reached 2.2?5 times faster by the LAL-iterative-2D. Considering that
the LAL regressor was learned using a simple synthetic 2D dataset, it is remarkable that it works
effectively on such complex and high-dimensional tasks. Due to the high computational cost of
ALBE, we downsample Striatum and MRI datasets to 2000 datapoints (referred to as Striatum mini
and MRI mini). Downsampling was not possible for the Credit card dataset due to the sparsity
of positive labels (0.17%). We see in the bottom row of Fig. 3 that ALBE performs worse than
Us but better than Rs. We ascribe this to the lack of labeled data, which ALBE needs to estimate classification accuracy (see Sec. 2).

Figure 3: Experiments on real data. Top row: IOU for Striatum, dice score for MRI, and AUC for Credit card as a function of the number of labeled points. Bottom row: comparison with ALBE on the Striatum mini and MRI mini datasets.
Warm Start AL In Fig. 4 we compare LAL-independent-WS on the Splice and Higgs datasets
by initializing BUILD LAL INDEPENDENT with 100 and 200 datapoints from the corresponding tasks.
Notice that this is the only experiment where a significant amount of labelled data in the domain of
interest is available prior to AL. We tested ALBE on the Splice dataset; in the Higgs dataset, however, the number of iterations in the experiment is too large. LAL-independent-WS outperforms the other methods, with ALBE delivering competitive performance, albeit at a high computational cost, only
after many AL iterations.
Figure 4: Experiments on the real datasets in warm start scenario. Accuracy for Splice is on the left,
AMS score for Higgs is on the right.
5.3 Analysis of LAL strategies and time comparison
To better understand LAL strategies, we show in Fig. 5 (left) the relative importance of the features
of the regressor g for LAL ITERATIVE. We observe that both classifier state parameters and datapoint
parameters influence the AL selection, giving evidence that both of them are important for selecting a
point to label. In order to understand what kind of selection LAL INDEPENDENT and LAL ITERATIVE
do, we record the predicted probability of the chosen datapoint p(y ? = 0|Dt , x? ) in 10 cold start
experiments with the same initialization on the MRI dataset. Fig. 5 (right) shows the histograms
of these probabilities for Us, LAL-independent-2D and LAL-iterative-2D. LAL strategies have
high variance and modes different from 0.5. Not only does the selection by LAL strategies differ significantly from standard Us, but the independent and iterative approaches also differ from each other.

Figure 5: Left: feature importances of the RF regressor representing the LAL ITERATIVE strategy. Right: histograms of the selected probability for different AL strategies in experiments with the MRI dataset.
Computational costs While collecting synthetic data can be slow, it must only be done once,
offline, for all applications. Besides, Alg. 1, 2 and 3 can be trivially parallelised thanks to a number
of independent loops. Collecting data offline for warm start, that is application specific, took us
approximately 2.7h and 1.9h for Higgs and Splice datasets respectively. By contrast, the online
user-interaction part is fast: it simply consists of learning ft , extracting learning state parameters
and evaluating the regressor g. The LAL run time depends on the parameters of the random forest
regressor which are estimated via cross-validation (discussed in the supplementary materials). Run
times of a Python-based implementation running on 1 core are given in Tab. 1 for a typical parameter
set (±20% depending on exact parameter values). Real-time performance can be attained by
parallelising and optimising the code, even in applications with large amounts of high-dimensional
data.
Table 1: Time in seconds for one iteration of AL for various strategies and tasks.

Dataset         Dimensions   # samples   Us     ALBE    LAL
Checkerboard    2            1000        0.11   13.12   0.54
MRI mini        188          2000        0.11   64.52   0.55
MRI             188          22 934      0.12   –       0.88
Striatum mini   272          2000        0.11   75.64   0.59
Striatum        272          276 130     2.05   –       19.50
Credit          30           142 404     0.43   –       4.73

(– : not run due to ALBE's computational cost.)

6 Conclusion
In this paper we introduced a new approach to AL that is driven by data: Learning Active Learning.
We found that a LAL strategy learnt from simple 2D data generalizes remarkably well to
challenging new domains. Learning from a subset of application-specific data further extends the
applicability of our approach. Finally, LAL demonstrated robustness to the choice of type of classifier
and features.
In future work we would like to address issues of multi-class classification and batch-mode AL.
Also, we would like to experiment with training the LAL regressor to predict the change in various
performance metrics and with different families of classifiers. Another interesting direction is to
transfer a LAL strategy between different real datasets, for example, by training a regressor on
multiple real datasets and evaluating its performance on unseen datasets. Finally, we would like to go
beyond constructing greedy strategies by using reinforcement learning.
Acknowledgements
This project has received funding from the European Union?s Horizon 2020 Research and Innovation
Programme under Grant Agreement No. 720270 (HBP SGA1). We would like to thank Carlos Becker
and Helge Rhodin for their comments on the text, and Lucas Maystre for his discussions and attention
to details.
References
[1] C. Adam-Bourdarios, G. Cowan, C. Germain, I. Guyon, B. Kégl, and D. Rousseau. The Higgs boson machine learning challenge. In NIPS 2014 Workshop on High-energy Physics and
Machine Learning, 2015.
[2] Y. Baram, R. El-Yaniv, and K. Luz. Online choice of active learning algorithms. Journal of
Machine Learning Research, 2004.
[3] H.-M. Chu and H.-T. Lin. Can active learning experience be transferred? arXiv preprint
arXiv:1608.00667, 2016.
[4] A. Dal Pozzolo, O. Caelen, R. A. Johnson, and G. Bontempi. Calibrating probability with
undersampling for unbalanced classification. In IEEE Symposium Series on Computational
Intelligence, 2015.
[5] S. Ebert, M. Fritz, and B. Schiele. RALF: A reinforced active learning formulation for object
class recognition. In Conference on Computer Vision and Pattern Recognition, 2012.
[6] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 2010.
[7] R. Gilad-bachrach, A. Navot, and N. Tishby. Query by committee made real. In Advances in
Neural Information Processing Systems, 2005.
[8] N. Gordillo, E. Montseny, and P. Sobrevilla. State of the art survey on MRI brain tumor
segmentation. Magnetic Resonance in Medicine, 2013.
[9] S.C.and Hoi, R. Jin, J. Zhu, and M.R. Lyu. Batch mode active learning and its application to
medical image classification. In International Conference on Machine Learning, 2006.
[10] N. Houlsby, F. Husz?r, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
[11] W.-N. Hsu, , and H.-T. Lin. Active learning by learning. American Association for Artificial
Intelligence Conference, 2015.
[12] S.-J. Huang, R. Jin, and Z.-H. Zhou. Active learning by querying informative and representative
examples. In Advances in Neural Information Processing Systems, 2010.
[13] J.E. Iglesias, E. Konukoglu, A. Montillo, Z. Tu, and A. Criminisi. Combining generative
and discriminative models for semantic segmentation. In Information Processing in Medical
Imaging, 2011.
[14] A. J. Joshi, F. Porikli, and N. P. Papanikolopoulos. Scalable active learning for multiclass image
classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
[15] A.J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classification. In Conference on Computer Vision and Pattern Recognition, 2009.
[16] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active learning with Gaussian Processes
for object categorization. In International Conference on Computer Vision, 2007.
[17] K. Konyushkova, R. Sznitman, and P. Fua. Introducing geometry into active learning for image
segmentation. In International Conference on Computer Vision, 2015.
10
[18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation.
In Conference on Computer Vision and Pattern Recognition, 2015.
[19] A. C. Lorena, G. E. A. P. A. Batista, A. C. P. L. F. de Carvalho, and M. C. Monard. Splice junction
recognition using machine learning techniques. In Brazilian Workshop on Bioinformatics, 2002.
[20] A. Lucchi, Y. Li, K. Smith, and P. Fua. Structured image segmentation using kernelized features.
In European Conference on Computer Vision, 2012.
[21] T. Luo, K. Kramer, S. Samson, A. Remsen, D. B. Goldgof, L. O. Hall, and T. Hopkins. Active
learning to recognize multiple types of plankton. In International Conference on Pattern
Recognition, 2004.
[22] L. Maystre and M. Grossglauser. Just sort it! A simple and effective approach to active
preference learning. In International Conference on Machine Learning, 2017.
[23] B. Menza, A. Jacas, et al. The multimodal brain tumor image segmentation benchmark (BRATS).
IEEE Transactions on Medical Imaging, 2014.
[24] A. Mosinska, R. Sznitman, P. Glowacki, and P. Fua. Active learning for delineation of curvilinear
structures. In Conference on Computer Vision and Pattern Recognition, 2016.
[25] F. Olsson. A literature survey of active machine learning in the context of natural language
processing. Swedish Institute of Computer Science, 2009.
[26] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with
memory-augmented neural networks. In International Conference on Machine Learning, 2016.
[27] B. Settles. Active learning literature survey. Technical report, University of Wisconsin?Madison,
2010.
[28] B. Settles and M. Craven. An analysis of active learning strategies for sequence labeling tasks.
In Conference on Empirical Methods in Natural Language Processing, 2008.
[29] A. Singla, S. Tschiatschek, and A. Krause. Actively learning hemimetrics with applications to
eliciting user preferences. In International Conference on Machine Learning, 2016.
[30] R. Sznitman and B. Jedynak. Active testing for face detection and localization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
[31] A. Tamar, Y. WU, G. Thomas, S. Levine, and P. Abbeel. Value iteration networks. In Advances
in Neural Information Processing Systems, 2016.
[32] S. Tong and D. Koller. Support vector machine active learning with applications to text
classification. Machine Learning, 2002.
[33] A. Vezhnevets, V. Ferrari, and J.M. Buhmann. Weakly supervised structured output learning for
semantic segmentation. In Conference on Computer Vision and Pattern Recognition, 2012.
[34] Y. Yang, Z. Ma, F. Nie, X. Chang, and A. G. Hauptmann. Multi-class active learning by
uncertainty sampling with diversity maximization. International Journal of Computer Vision,
2015.
11
VAE Learning via Stein Variational Gradient Descent
Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, Lawrence Carin
Department of Electrical and Computer Engineering, Duke University
{yp42, zg27, r.henao, cl319, shaobo.han, lcarin}@duke.edu
Abstract
A new method for learning variational autoencoders (VAEs) is developed, based
on Stein variational gradient descent. A key advantage of this approach is that
one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with
importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of
the ImageNet data, demonstrating the scalability of the model to large datasets.
1 Introduction
There has been significant recent interest in the variational autoencoder (VAE) [11], a generalization
of the original autoencoder [33]. VAEs are typically trained by maximizing a variational lower
bound of the data log-likelihood [2, 10, 11, 12, 18, 21, 22, 23, 30, 34, 35]. To compute the variational
expression, one must be able to explicitly evaluate the associated distribution of latent features, i.e.,
the stochastic encoder must have an explicit analytic form. This requirement has motivated design
of encoders in which a neural network maps input data to the parameters of a simple distribution,
e.g., Gaussian distributions have been widely utilized [1, 11, 27, 25].
The Gaussian assumption may be too restrictive in some cases [28]. Consequently, recent work has
considered normalizing flows [28], in which random variables from (for example) a Gaussian distribution are fed through a series of nonlinear functions to increase the complexity and representational
power of the encoder. However, because of the need to explicitly evaluate the distribution within the
variational expression used when learning, these nonlinear functions must be relatively simple, e.g.,
planar flows. Further, one may require many layers to achieve the desired representational power.
We present a new approach for training a VAE. We recognize that the need for an explicit form for
the encoder distribution is only a consequence of the fact that learning is performed based on the
variational lower bound. For inference (e.g., at test time), we do not need an explicit form for the
distribution of latent features, we only require fast sampling from the encoder. Consequently, rather
than directly employing the traditional variational lower bound, we seek to minimize the Kullback-Leibler (KL) distance between the true posterior of model and latent parameters. Learning then
becomes a novel application of Stein variational gradient descent (SVGD) [15], constituting its first
application to training VAEs. We extend SVGD with importance sampling [1], and also demonstrate
its novel use in semi-supervised VAE learning.
The concepts developed here are demonstrated on a wide range of unsupervised and semi-supervised
learning problems, including a large-scale semi-supervised analysis of the ImageNet dataset. These
experimental results illustrate the advantage of SVGD-based VAE training, relative to traditional
approaches. Moreover, the results demonstrate further improvements realized by integrating SVGD
with importance sampling.
Independent work by [3, 6] proposed similar models, in which the authors incorporated SVGD
with VAEs [3] and importance sampling [6] for unsupervised learning tasks.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Stein Learning of Variational Autoencoder (Stein VAE)
2.1 Review of VAE and Motivation for Use of SVGD
Consider data $\mathcal{D} = \{x_n\}_{n=1}^N$, where $x_n$ are modeled via decoder $x_n|z_n \sim p(x|z_n;\theta)$. A prior
$p(z)$ is placed on the latent codes. To learn parameters $\theta$, one typically is interested in maximizing
the empirical expected log-likelihood, $\frac{1}{N}\sum_{n=1}^N \log p(x_n;\theta)$. A variational lower bound is often
employed:
$$\mathcal{L}(\theta,\phi;x) = \mathbb{E}_{z|x;\phi}\Big[\log \frac{p(x|z;\theta)p(z)}{q(z|x;\phi)}\Big] = -\mathrm{KL}(q(z|x;\phi)\,\|\,p(z|x;\theta)) + \log p(x;\theta)\,, \quad (1)$$
with $\log p(x;\theta) \ge \mathcal{L}(\theta,\phi;x)$, and where $\mathbb{E}_{z|x;\phi}[\cdot]$ is approximated by averaging over a finite
number of samples drawn from encoder $q(z|x;\phi)$. Parameters $\theta$ and $\phi$ are typically iteratively
optimized via stochastic gradient descent [11], seeking to maximize $\sum_{n=1}^N \mathcal{L}(\theta,\phi;x_n)$.
To evaluate the variational expression in (1), we require the ability to sample efficiently from
$q(z|x;\phi)$, to approximate the expectation. We also require a closed form for this encoder, to evaluate $\log[p(x|z;\theta)p(z)/q(z|x;\phi)]$. In the proposed VAE learning framework, rather than maximizing the variational lower bound explicitly, we focus on the term $\mathrm{KL}(q(z|x;\phi)\,\|\,p(z|x;\theta))$, which
we seek to minimize. This can be achieved by leveraging Stein variational gradient descent (SVGD)
[15]. Importantly, for SVGD we need only be able to sample from $q(z|x;\phi)$, and we need not
possess its explicit functional form.
In the above discussion, $\theta$ is treated as a parameter; below we treat it as a random variable, as
was considered in the Appendix of [11]. Treatment of $\theta$ as a random variable allows for model
averaging, and a point estimate of $\theta$ is revealed as a special case of the proposed method.
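For concreteness, the conventional pipeline described by (1) can be sketched as follows. This is an illustrative Monte Carlo estimate of $\mathcal{L}(\theta,\phi;x)$ for a Gaussian encoder, not code from the paper; the names `encode` and `log_p_joint` are placeholders for user-supplied model components. The key restriction discussed above is visible in the sketch: $\log q(z|x;\phi)$ must be evaluable in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, encode, log_p_joint, n_samples=10):
    """Monte Carlo estimate of the bound L(theta, phi; x) in Eq. (1).

    `encode(x)` returns (mu, log_var) of a Gaussian q(z|x; phi), and
    `log_p_joint(x, z)` returns log p(x|z; theta) + log p(z); both are
    user-supplied placeholders.
    """
    mu, log_var = encode(x)
    std = np.exp(0.5 * log_var)
    total = 0.0
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)   # reparameterized draw
        # log q(z|x; phi) must be evaluable in closed form -- the key restriction
        log_q = -0.5 * np.sum(log_var + (z - mu) ** 2 / np.exp(log_var)
                              + np.log(2 * np.pi))
        total += log_p_joint(x, z) - log_q
    return total / n_samples
```

When $q(z|x;\phi)$ equals the exact posterior, the integrand is constant in $z$ and the estimate equals $\log p(x;\theta)$ exactly, which makes the bound's tightness easy to check on conjugate toy models.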
The set of codes associated with all $x_n \in \mathcal{D}$ is represented $Z = \{z_n\}_{n=1}^N$. The prior on $\{\theta, Z\}$ is
here represented as $p(\theta, Z) = p(\theta)\prod_{n=1}^N p(z_n)$. We desire the posterior $p(\theta, Z|\mathcal{D})$. Consider the
revised variational expression
$$\mathcal{L}_1(q;\mathcal{D}) = \mathbb{E}_{q(\theta,Z)}\Big[\log \frac{p(\mathcal{D}|Z,\theta)p(\theta,Z)}{q(\theta,Z)}\Big] = -\mathrm{KL}(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D})) + \log p(\mathcal{D};\mathcal{M})\,, \quad (2)$$
where $p(\mathcal{D};\mathcal{M})$ is the evidence for the underlying model $\mathcal{M}$. Learning $q(\theta,Z)$ such that $\mathcal{L}_1$ is
maximized is equivalent to seeking $q(\theta,Z)$ that minimizes $\mathrm{KL}(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D}))$. By leveraging
and generalizing SVGD, we will perform the latter.
2.2 Stein Variational Gradient Descent (SVGD)
Rather than explicitly specifying a form for $p(\theta, Z|\mathcal{D})$, we sequentially refine samples of $\theta$ and $Z$,
such that they are better matched to $p(\theta, Z|\mathcal{D})$. We alternate between updating the samples of $\theta$ and
samples of $Z$, analogous to how $\theta$ and $\phi$ are updated alternatively in traditional VAE optimization
of (1). We first consider updating samples of $\theta$, with the samples of $Z$ held fixed. Specifically,
assume we have samples $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, and samples $\{z_{jn}\}_{j=1}^M$ drawn from
distribution $q(Z)$. We wish to transform $\{\theta_j\}_{j=1}^M$ by feeding them through a function, and the
corresponding (implicit) transformed distribution from which they are drawn is denoted as $q_T(\theta)$. It
is desired that, in a KL sense, $q_T(\theta)q(Z)$ is closer to $p(\theta, Z|\mathcal{D})$ than was $q(\theta)q(Z)$. The following
theorem is useful for defining how to best update $\{\theta_j\}_{j=1}^M$.
Theorem 1 Assume $\theta$ and $Z$ are random variables (RVs) drawn from distributions $q(\theta)$ and $q(Z)$,
respectively. Consider the transformation $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D})$ and let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$. We have
$$\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0} = -\mathbb{E}_{\theta\sim q(\theta)}\big[\mathrm{trace}(A_p(\theta;\mathcal{D}))\big]\,, \quad (3)$$
where $q_T = q_T(\theta)q(Z)$, $p = p(\theta,Z|\mathcal{D})$, $A_p(\theta;\mathcal{D}) = \nabla_\theta \log \tilde p(\theta;\mathcal{D})\,\psi(\theta;\mathcal{D})^T + \nabla_\theta \psi(\theta;\mathcal{D})$,
$\log \tilde p(\theta;\mathcal{D}) = \mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D}, Z, \theta)]$, and $p(\mathcal{D},Z,\theta) = p(\mathcal{D}|Z,\theta)p(\theta,Z)$.
The proof is provided in Appendix A. Following [15], we assume $\psi(\theta;\mathcal{D})$ lives in a reproducing
kernel Hilbert space (RKHS) with kernel $k(\cdot,\cdot)$. Under this assumption, the solution for $\psi(\theta;\mathcal{D})$
that maximizes the decrease in the KL distance (3) is
$$\psi^*(\cdot;\mathcal{D}) = \mathbb{E}_{q(\theta)}\big[k(\theta,\cdot)\nabla_\theta \log \tilde p(\theta;\mathcal{D}) + \nabla_\theta k(\theta,\cdot)\big]\,. \quad (4)$$
Theorem 1 concerns updating samples from $q(\theta)$ assuming fixed $q(Z)$. Similarly, to update $q(Z)$
with $q(\theta)$ fixed, we employ a complementary form of Theorem 1 (omitted for brevity). In that case,
we consider transformation $T(Z) = Z + \epsilon\psi(Z;\mathcal{D})$, with $Z \sim q(Z)$, and function $\psi(Z;\mathcal{D})$ is also
assumed to be in a RKHS.
The expectations in (3) and (4) are approximated by samples $\theta_j^{(t+1)} = \theta_j^{(t)} + \epsilon\,\Delta\theta_j^{(t)}$, with
$$\Delta\theta_j^{(t)} \approx \frac{1}{M}\sum_{j'=1}^M \Big[k_\theta(\theta_{j'}^{(t)}, \theta_j^{(t)})\,\nabla_{\theta_{j'}^{(t)}} \log \tilde p(\theta_{j'}^{(t)};\mathcal{D}) + \nabla_{\theta_{j'}^{(t)}} k_\theta(\theta_{j'}^{(t)}, \theta_j^{(t)})\Big]\,, \quad (5)$$
with $\nabla_\theta \log \tilde p(\theta;\mathcal{D}) \approx \frac{1}{M}\sum_{n=1}^N \sum_{j=1}^M \nabla_\theta \log p(x_n|z_{jn},\theta)p(\theta)$. A similar update of samples is
manifested for the latent variables $z_{jn}^{(t+1)} = z_{jn}^{(t)} + \epsilon\,\Delta z_{jn}^{(t)}$:
$$\Delta z_{jn}^{(t)} = \frac{1}{M}\sum_{j'=1}^M \Big[k_z(z_{j'n}^{(t)}, z_{jn}^{(t)})\,\nabla_{z_{j'n}^{(t)}} \log \tilde p(z_{j'n}^{(t)};\mathcal{D}) + \nabla_{z_{j'n}^{(t)}} k_z(z_{j'n}^{(t)}, z_{jn}^{(t)})\Big]\,, \quad (6)$$
where $\nabla_{z_n} \log \tilde p(z_n;\mathcal{D}) \approx \frac{1}{M}\sum_{j=1}^M \nabla_{z_n} \log p(x_n|z_n,\theta_j)p(z_n)$. The kernels used to update samples of $\theta$ and $z_n$ are in general different, denoted respectively $k_\theta(\cdot,\cdot)$ and $k_z(\cdot,\cdot)$, and $\epsilon$ is a small
step size. For notational simplicity, $M$ is the same in (5) and (6), but in practice a different number
of samples may be used for $\theta$ and $Z$.
If $M = 1$ for parameter $\theta$, indices $j$ and $j'$ are removed in (5). Learning then reduces to gradient
descent and a point estimate for $\theta$, identical to the optimization procedure used for the traditional
VAE expression in (1), but with the (multiple) samples associated with $Z$ sequentially transformed
via SVGD (and, importantly, without the need to assume a form for $q(z|x;\phi)$). Therefore, if only a
point estimate of $\theta$ is desired, (1) can be optimized wrt $\theta$, while for updating $Z$ SVGD is applied.
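The updates (5) and (6) are instances of the generic SVGD transport step. The sketch below illustrates that step for an RBF kernel; `score` is a generic stand-in for the gradient of the log target ($\nabla\log\tilde p$), and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, h):
    """K[j', j] = exp(-||x_j' - x_j||^2 / h) and its gradients wrt x_j'."""
    diff = X[:, None, :] - X[None, :, :]                  # (M, M, d) pairwise differences
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)
    grad_K = -2.0 / h * diff * K[:, :, None]              # grad_{x_j'} k(x_j', x_j)
    return K, grad_K

def svgd_step(particles, score, step=0.1, h=1.0):
    """One transport step x_j <- x_j + step * phi(x_j), in the spirit of (5)/(6)."""
    M = particles.shape[0]
    K, grad_K = rbf_kernel(particles, h)
    grads = np.stack([score(x) for x in particles])       # score = grad log target
    phi = (K @ grads + grad_K.sum(axis=0)) / M            # driving + repulsive terms
    return particles + step * phi
```

The first term in `phi` drives particles toward high-probability regions; the second (kernel-gradient) term repels nearby particles, preventing collapse to the mode, which is what distinguishes SVGD from running $M$ independent gradient ascents.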
2.3 Efficient Stochastic Encoder
At iteration $t$ of the above learning procedure, we realize a set of latent-variable (code) samples
$\{z_{jn}^{(t)}\}_{j=1}^M$ for each $x_n \in \mathcal{D}$ under analysis. For large $N$, training may be computationally expensive.
Further, the need to evolve (learn) samples $\{z_{j*}\}_{j=1}^M$ for each new test sample, $x_*$, is undesirable.
We therefore develop a recognition model that efficiently computes samples of latent codes for a data
sample of interest. The recognition model draws samples via $z_{jn} = f_\eta(x_n, \xi_{jn})$ with $\xi_{jn} \sim q_0(\xi)$.
Distribution $q_0(\xi)$ is selected such that it may be easily sampled, e.g., isotropic Gaussian.
After each iteration of updating the samples of $Z$, we refine recognition model $f_\eta(x,\xi)$ to mimic
the Stein sample dynamics. Assume recognition-model parameters $\eta^{(t)}$ have been learned thus far.
Using $\eta^{(t)}$, latent codes for iteration $t$ are constituted as $z_{jn}^{(t)} = f_{\eta^{(t)}}(x_n, \xi_{jn})$, with $\xi_{jn} \sim q_0(\xi)$.
These codes are computed for all data $x_n \in B_t$, where $B_t \subset \mathcal{D}$ is the minibatch of data at iteration
$t$. The change in the codes is $\Delta z_{jn}^{(t)}$, as defined in (6). We then update $\eta$ to match the refined codes,
as
$$\eta^{(t+1)} = \arg\min_\eta \sum_{x_n \in B_t}\sum_{j=1}^M \big\|f_\eta(x_n,\xi_{jn}) - z_{jn}^{(t+1)}\big\|^2\,. \quad (7)$$
The analytic solution of (7) is intractable. We update $\eta$ with $K$ steps of gradient descent as
$\eta^{(t,k)} = \eta^{(t,k-1)} - \delta \sum_{x_n\in B_t}\sum_{j=1}^M \Delta\eta_{jn}^{(t,k-1)}$, where
$\Delta\eta_{jn}^{(t,k-1)} = \nabla_\eta f_\eta(x_n,\xi_{jn})\big(f_\eta(x_n,\xi_{jn}) - z_{jn}^{(t+1)}\big)\big|_{\eta=\eta^{(t,k-1)}}$, $\delta$ is a small step size,
$\eta^{(t)} = \eta^{(t,0)}$, $\eta^{(t+1)} = \eta^{(t,K)}$, and $\nabla_\eta f_\eta(x_n,\xi_{jn})$ is
the transpose of the Jacobian of $f_\eta(x_n,\xi_{jn})$ wrt $\eta$. Note that the use of minibatches mitigates
challenges of training with large training sets, $\mathcal{D}$.
The function $f_\eta(x,\xi)$ plays a role analogous to $q(z|x;\phi)$ in (1), in that it yields a means of efficiently drawing samples of latent codes $z$, given observed $x$; however, we do not impose an explicit
functional form for the distribution of these samples.
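The regression step (7) can be sketched as follows. For brevity this toy uses a linear recognition model $f_\eta(x,\xi) = W[x;\xi]$ with $\eta = W$, whereas the paper uses neural networks; `amortize_step` and the linear form are illustrative assumptions only.

```python
import numpy as np

def amortize_step(W, xs, xis, z_targets, lr=0.01):
    """One gradient step on Eq. (7): sum_n sum_j ||f_eta(x_n, xi_jn) - z_jn||^2,
    with the toy linear model f_eta(x, xi) = W @ concat(x, xi) (eta = W).

    xs: (n, dx), xis: (n, M, dxi), z_targets: (n, M, dz).
    """
    grad = np.zeros_like(W)
    for x, xi_row, z_row in zip(xs, xis, z_targets):
        for xi, z in zip(xi_row, z_row):
            inp = np.concatenate([x, xi])
            resid = W @ inp - z                  # f_eta(x, xi) - z_jn
            grad += np.outer(resid, inp)         # gradient of 0.5 * ||resid||^2
    return W - lr * grad
```

Iterating this step drives $f_\eta$ toward the SVGD-refined codes, so at test time one obtains approximate posterior samples with a single forward pass instead of running the particle dynamics anew.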
3 Stein Variational Importance Weighted Autoencoder (Stein VIWAE)
3.1 Multi-sample importance-weighted KL divergence
Recall the variational expression in (1) employed in conventional VAE learning. Recently, [1, 19]
showed that the multi-sample ($k$ samples) importance-weighted estimator
$$\mathcal{L}_k(x) = \mathbb{E}_{z^1,\dots,z^k\sim q(z|x)}\Big[\log \frac{1}{k}\sum_{i=1}^k \frac{p(x,z^i)}{q(z^i|x)}\Big]\,, \quad (8)$$
provides a tighter lower bound and a better proxy for the log-likelihood, where $z^1,\dots,z^k$ are random variables sampled independently from $q(z|x)$. Recall from (3) that the KL divergence played
a key role in the Stein-based learning of Section 2. Equation (8) motivates replacement of the KL
objective function with the multi-sample importance-weighted KL divergence
$$\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) \triangleq -\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\Big[\log \frac{1}{k}\sum_{i=1}^k \frac{p(\Theta^i|\mathcal{D})}{q(\Theta^i)}\Big]\,, \quad (9)$$
where $\Theta = (\theta, Z)$ and $\Theta^{1:k} = \Theta^1,\dots,\Theta^k$ are independent samples from $q(\theta,Z)$. Note that the
special case of $k = 1$ recovers the standard KL divergence. Inspired by [1], the following theorem
(proved in Appendix A) shows that increasing the number of samples $k$ is guaranteed to reduce the
KL divergence and provide a better approximation of the target distribution.
Theorem 2 For any natural number $k$, we have $\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) \ge \mathrm{KL}^{k+1}_{q,p}(\Theta;\mathcal{D}) \ge 0$, and if
$q(\Theta)/p(\Theta|\mathcal{D})$ is bounded, then $\lim_{k\to\infty} \mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) = 0$.
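The monotone tightening in Theorem 2 can be checked numerically on a toy model. The sketch below estimates the $k$-sample bound $\mathcal{L}_k(x)$ of (8) by Monte Carlo; the function names (`iw_bound`, `log_joint`, `sample_q`, `log_q`) are placeholders, not from the paper.

```python
import math
import numpy as np

def iw_bound(x, log_joint, sample_q, log_q, k, n_rep=2000, rng=None):
    """Monte Carlo estimate of the k-sample bound L_k(x) in Eq. (8)."""
    if rng is None:
        rng = np.random.default_rng(0)
    vals = []
    for _ in range(n_rep):
        zs = sample_q(rng, k)                    # k independent draws from q(z|x)
        logw = np.array([log_joint(x, z) - log_q(z) for z in zs])
        m = logw.max()                           # log-mean-exp for numerical stability
        vals.append(m + math.log(np.mean(np.exp(logw - m))))
    return float(np.mean(vals))
```

On a conjugate Gaussian toy model with a deliberately mismatched $q$, the estimates increase with $k$ while remaining below the true $\log p(x)$, mirroring the ordering $\mathcal{L}_1 \le \mathcal{L}_k \le \log p(x)$.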
We minimize (9) with a sample transformation based on a generalization of SVGD, and the recognition model (encoder) is trained in the same way as in Section 2.3. Specifically, we first draw samples
$\{\theta_j^{1:k}\}_{j=1}^M$ and $\{z_{jn}^{1:k}\}_{j=1}^M$ from a simple distribution $q_0(\cdot)$, and convert these to approximate draws
from $p(\theta^{1:k}, Z^{1:k}|\mathcal{D})$ by minimizing the multi-sample importance-weighted KL divergence via nonlinear functional transformation.
3.2 Importance-weighted SVGD for VAEs
The following theorem generalizes Theorem 1 to the multi-sample weighted KL divergence.
Theorem 3 Let $\Theta^{1:k}$ be RVs drawn independently from distribution $q(\Theta)$ and let $\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D})$ be the
multi-sample importance-weighted KL divergence in (9). Let $T(\Theta) = \Theta + \epsilon\psi(\Theta;\mathcal{D})$ and let $q_T(\Theta)$
represent the distribution of $\Theta' = T(\Theta)$. We have
$$\nabla_\epsilon \mathrm{KL}^k_{q_T,p}(\Theta';\mathcal{D})\big|_{\epsilon=0} = -\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\big(A^k_p(\Theta^{1:k};\mathcal{D})\big)\,. \quad (10)$$
The proof and detailed definitions are provided in Appendix A. The following corollaries generalize
Theorem 1 and (4) via use of importance sampling, respectively.
Corollary 3.1 $\theta^{1:k}$ and $Z^{1:k}$ are RVs drawn independently from distributions $q(\theta)$ and $q(Z)$, respectively. Let $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D})$, let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$, and let
$\Theta' = (\theta', Z)$. We have
$$\nabla_\epsilon \mathrm{KL}^k_{q_T,p}(\Theta';\mathcal{D})\big|_{\epsilon=0} = -\mathbb{E}_{\theta^{1:k}\sim q(\theta)}\big(A^k_p(\theta^{1:k};\mathcal{D})\big)\,, \quad (11)$$
where $A^k_p(\theta^{1:k};\mathcal{D}) = \frac{1}{\tilde\omega}\sum_{i=1}^k \omega_i A_p(\theta^i;\mathcal{D})$, $\omega_i = \mathbb{E}_{Z^i\sim q(Z)}\Big[\frac{p(\theta^i, Z^i, \mathcal{D})}{q(\theta^i)q(Z^i)}\Big]$, $\tilde\omega = \sum_{i=1}^k \omega_i$;
$A_p(\theta;\mathcal{D})$ and $\log\tilde p(\theta;\mathcal{D})$ are as defined in Theorem 1.
Corollary 3.2 Assume $\psi(\theta;\mathcal{D})$ lives in a reproducing kernel Hilbert space (RKHS) with kernel
$k_\theta(\cdot,\cdot)$. The solution for $\psi(\theta;\mathcal{D})$ that maximizes the decrease in the KL distance (11) is
$$\psi^*(\cdot;\mathcal{D}) = \mathbb{E}_{\theta^{1:k}\sim q(\theta)}\Big[\frac{1}{\tilde\omega}\sum_{i=1}^k \omega_i \big(\nabla_{\theta^i} k_\theta(\theta^i,\cdot) + k_\theta(\theta^i,\cdot)\nabla_{\theta^i}\log\tilde p(\theta^i;\mathcal{D})\big)\Big]\,. \quad (12)$$
Corollary 3.1 and Corollary 3.2 provide a means of updating multiple samples $\{\theta_j^{1:k}\}_{j=1}^M$ from $q(\theta)$
via $T(\theta^i) = \theta^i + \epsilon\psi(\theta^i;\mathcal{D})$. The expectation wrt $q(Z)$ is approximated via samples drawn from
$q(Z)$. Similarly, we can employ a complementary form of Corollary 3.1 and Corollary 3.2 to update
multiple samples $\{Z_j^{1:k}\}_{j=1}^M$ from $q(Z)$. This suggests an importance-weighted learning procedure
that alternates between updates of particles $\{\theta_j^{1:k}\}_{j=1}^M$ and $\{Z_j^{1:k}\}_{j=1}^M$, which is similar to the one in
Section 2.2. Detailed update equations are provided in Appendix B.
4 Semi-Supervised Learning with Stein VAE
Consider labeled data as pairs $\mathcal{D}_l = \{x_n, y_n\}_{n=1}^{N_l}$, where the label $y_n \in \{1,\dots,C\}$ and the decoder is modeled as $(x_n, y_n|z_n) \sim p(x,y|z_n;\theta,\tilde\theta) = p(x|z_n;\theta)p(y|z_n;\tilde\theta)$, where $\tilde\theta$ represents
the parameters of the decoder for labels. The set of codes associated with all labeled data is represented as $Z_l = \{z_n\}_{n=1}^{N_l}$. We desire to approximate the posterior distribution on the entire dataset,
$p(\theta,\tilde\theta,Z,Z_l|\mathcal{D},\mathcal{D}_l)$, via samples, where $\mathcal{D}$ represents the unlabeled data, and $Z$ is the set of codes
associated with $\mathcal{D}$. In the following, we will only discuss how to update the samples of $\theta$, $\tilde\theta$ and $Z_l$.
Updating samples $Z$ is the same as discussed in Sections 2 and 3.2 for Stein VAE and Stein VIWAE,
respectively.
Assume $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, $\{\tilde\theta_j\}_{j=1}^M$ drawn from distribution $q(\tilde\theta)$, and samples
$\{z_{jn}\}_{j=1}^M$ drawn from (distinct) distribution $q(Z_l)$. The following corollary generalizes Theorem 1
and (4), which is useful for defining how to best update $\{\theta_j\}_{j=1}^M$.
Corollary 3.3 Assume $\theta$, $\tilde\theta$, $Z$ and $Z_l$ are RVs drawn from distributions $q(\theta)$, $q(\tilde\theta)$, $q(Z)$ and
$q(Z_l)$, respectively. Consider the transformation $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D},\mathcal{D}_l)$ where $\psi(\theta;\mathcal{D},\mathcal{D}_l)$
lives in a RKHS with kernel $k_\theta(\cdot,\cdot)$. Let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$. For
$q_T = q_T(\theta)q(Z)q(\tilde\theta)$ and $p = p(\theta,\tilde\theta,Z|\mathcal{D},\mathcal{D}_l)$, we have
$$\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0} = -\mathbb{E}_{\theta\sim q(\theta)}\big(A_p(\theta;\mathcal{D},\mathcal{D}_l)\big)\,, \quad (13)$$
where $A_p(\theta;\mathcal{D},\mathcal{D}_l) = \nabla_\theta\psi(\theta;\mathcal{D},\mathcal{D}_l) + \nabla_\theta\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l)\,\psi(\theta;\mathcal{D},\mathcal{D}_l)^T$,
$\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l) = \mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D}|Z,\theta)] + \mathbb{E}_{Z_l\sim q(Z_l)}[\log p(\mathcal{D}_l|Z_l,\theta)]$, and the solution for $\psi(\theta;\mathcal{D},\mathcal{D}_l)$ that maximizes the change in the KL distance (13) is
$$\psi^*(\cdot;\mathcal{D},\mathcal{D}_l) = \mathbb{E}_{q(\theta)}\big[k(\theta,\cdot)\nabla_\theta\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l) + \nabla_\theta k(\theta,\cdot)\big]\,. \quad (14)$$
Further details are provided in Appendix C.
5 Experiments
For all experiments, we use a radial basis-function (RBF) kernel as in [15], i.e., $k(x,x') = \exp(-\frac{1}{h}\|x-x'\|_2^2)$, where the bandwidth, $h$, is the median of pairwise distances between current samples. $q_0(\theta)$ and $q_0(\xi)$ are set to isotropic Gaussian distributions. We share the samples of $\xi$
across data points, i.e., $\xi_{jn} = \xi_j$, for $n = 1, \dots, N$ (this is not necessary, but it saves computation).
The samples of $\theta$ and $z$, and parameters of the recognition model, $\eta$, are optimized via Adam [9]
with learning rate 0.0002. We do not perform any dataset-specific tuning or regularization other
than dropout [32] and early stopping on validation sets. We set $M = 100$ and $k = 50$, and use
minibatches of size 64 for all experiments, unless otherwise specified.
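The kernel choice above fits in a few lines. The sketch below follows the stated rule (bandwidth equal to the median pairwise distance among current samples); it is illustrative, not the authors' code, and the zero-distance guard is an added assumption for degenerate particle sets.

```python
import numpy as np

def rbf_median(X):
    """k(x, x') = exp(-||x - x'||^2 / h), with h the median pairwise distance."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(X), k=1)                  # distinct pairs only
    h = max(float(np.median(np.sqrt(d2[iu]))), 1e-8)   # guard against h = 0
    return np.exp(-d2 / h)
```

The median heuristic adapts the kernel width to the current particle spread, so the repulsive term in SVGD stays on a comparable scale to the driving term as the particles contract or expand during training.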
5.1 Expressive power of Stein recognition model
Gaussian Mixture Model. We synthesize data by (i) drawing $z_n \sim \frac{1}{2}\mathcal{N}(\mu_1, I) + \frac{1}{2}\mathcal{N}(\mu_2, I)$,
where $\mu_1 = [5, 5]^T$, $\mu_2 = [-5, -5]^T$; (ii) drawing $x_n \sim \mathcal{N}(\theta z_n, \sigma^2 I)$, where $\theta = \frac{1}{2}\begin{pmatrix}\mu_1^T \\ \mu_2^T\end{pmatrix}$ and
$\sigma = 0.1$. The recognition model $f_\eta(x_n, \xi_j)$ is specified as a multi-layer perceptron (MLP) with
100 hidden units, by first concatenating $\xi_j$ and $x_n$ into a long vector. The dimension of $\xi_j$ is set
to 2. The recognition model for the standard VAE is also an MLP with 100 hidden units, and with the
assumption of a Gaussian distribution for the latent codes [11].
5
Figure 1: Approximation of posterior distribution: Stein VAE vs. VAE. The figures represent different samples of Stein VAE. (left) 10 samples, (center) 50 samples, and (right) 100 samples.
We generate N = 10, 000 data points for training and 10 data points for testing. The analytic form
of true posterior distribution is provided in Appendix D. Figure 1 shows the performance of Stein
VAE approximations for the true posterior; other similar examples are provided in Appendix F. The
Stein recognition model is able to capture the multi-modal posterior and produce accurate density
approximation.
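The synthetic mixture data described above can be generated directly. In the sketch below, θ is taken to be the 2 × 2 matrix with columns μ1/2 and μ2/2; this reading of the (garbled) definition in the text is an assumption, as is the random seed:

```python
import numpy as np

rng = np.random.RandomState(1)
N = 10000
mu1, mu2 = np.array([5.0, 5.0]), np.array([-5.0, -5.0])
theta = 0.5 * np.stack([mu1, mu2], axis=1)  # assumed: columns mu1/2, mu2/2
sigma = 0.1

# (i) latent codes from an equally weighted two-component Gaussian mixture
comp = rng.rand(N) < 0.5
z = np.where(comp[:, None], mu1, mu2) + rng.randn(N, 2)
# (ii) observations x_n ~ N(theta z_n, sigma^2 I)
x = z @ theta.T + sigma * rng.randn(N, 2)
print(x.shape)  # (10000, 2)
```

The resulting z_n are strongly bimodal, which is exactly what makes the posterior multi-modal and a Gaussian recognition model a poor fit.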
Poisson Factor Analysis Given a discrete vector x_n ∈ Z₊^P, Poisson factor analysis [36] assumes x_n is a weighted combination of V latent factors, x_n ∼ Pois(θz_n), where θ ∈ R₊^{P×V} is the factor loadings matrix and z_n ∈ R₊^V is the vector of factor scores. We consider topic modeling with Dirichlet priors on θ_v (the v-th column of θ) and gamma priors on each component of z_n.
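A minimal sketch of this generative process is below. The Dirichlet and gamma hyperparameters (all ones) and the tiny batch size are placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.RandomState(0)
P, V, N = 2000, 128, 4
# Dirichlet prior on each topic (a column of the loadings matrix theta)
theta = rng.dirichlet(np.ones(P), size=V).T       # P x V, columns sum to 1
# Gamma prior on each component of the factor scores z_n
z = rng.gamma(shape=1.0, scale=1.0, size=(V, N))  # V x N, nonnegative
# Observed counts: x_n ~ Pois(theta z_n)
x = rng.poisson(theta @ z)                        # P x N integer counts
print(x.shape)  # (2000, 4)
```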
We evaluate our model on the 20 Newsgroups dataset containing N = 18,845 documents with a vocabulary of P = 2,000. The data are partitioned into 10,314 training, 1,000 validation and 7,531 test documents. The number of factors (topics) is set to V = 128. θ is first learned by Markov chain Monte Carlo (MCMC) [4]. We then fix θ at its MAP value, and only learn the recognition model η using standard VAE and Stein VAE; this is done, as in the previous example, to examine the accuracy of the recognition model in estimating the posterior of the latent factors, isolated from estimation of θ. The recognition model is an MLP with 100 hidden units.
Figure 2: Univariate marginals and pairwise posteriors. Purple, red and green represent the distributions inferred from MCMC, standard VAE and Stein VAE, respectively.

An analytic form of the true posterior distribution p(z_n|x_n) is intractable for this problem. Consequently, we employ samples collected from MCMC as ground truth. With θ fixed, we sample z_n via Gibbs sampling, using 2,000 burn-in iterations followed by 2,500 collection draws, retaining every 10th collection sample. We show the marginal and pairwise posterior of one test data point in Figure 2. Additional results are provided in Appendix F. Stein VAE leads to a more accurate approximation than standard VAE, compared to the MCMC samples. Considering Figure 2, note that VAE significantly underestimates the variance of the posterior (examining the marginals), a well-known problem of variational Bayesian analysis [7]. In sharp contrast, Stein VAE yields highly accurate approximations to the true posterior.

Table 1: Negative log-likelihood (NLL) on MNIST. † Trained with VAE and tested with IWAE. ‡ Trained and tested with IWAE.

Method | NLL
DGLM [27] | 89.90
Normalizing flow [28] | 85.10
VAE + IWAE [1]† | 86.76
IWAE + IWAE [1]‡ | 84.78
Stein VAE + ELBO | 85.21
Stein VAE + S-ELBO | 84.98
Stein VIWAE + ELBO | 83.01
Stein VIWAE + S-ELBO | 82.88
5.2 Density estimation
Data We consider five benchmark datasets: MNIST and four text corpora: 20 Newsgroups (20News), New York Times (NYT), Science and RCV1-v2 (RCV2). For MNIST, we used the standard split of 50K training, 10K validation and 10K test examples. The latter three text corpora consist of 133K, 166K and 794K documents. These three datasets are split into 1K validation, 10K testing and the rest for training.
Evaluation Given new data x* (testing data), the marginal log-likelihood/perplexity values are estimated by the variational evidence lower bound (ELBO) while integrating the decoder parameters θ out:

    log p(x*) ≥ E_{q(z*)}[log p(x*, z*)] + H(q(z*)) = ELBO(q(z*)),

where log p(x*, z*) = E_{q(θ)}[log p(x*, θ, z*)] and H(q(·)) = −E_q[log q(·)] is the entropy. The expectation is approximated with samples {θ_j}_{j=1}^M and {z*_j}_{j=1}^M with z*_j = f_η(x*, ξ_j), ξ_j ∼ q0(ξ). Directly evaluating q(z*) is intractable; thus it is estimated via the density transformation q(z) = q0(ξ) |det(∂f_η(x, ξ)/∂ξ)|^{-1}.
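Both the single-sample ELBO estimate and the multi-sample importance-weighted bound used for evaluation reduce to a log of an average of (exponentiated) log-weights, which should be computed with the log-sum-exp trick for numerical stability. A minimal sketch:

```python
import numpy as np

def importance_weighted_bound(log_w):
    """Multi-sample lower bound from M log importance weights
    log_w[j] = log p(x, z_j) - log q(z_j): returns log((1/M) sum_j exp(log_w[j])),
    computed stably via the log-sum-exp trick."""
    log_w = np.asarray(log_w, dtype=float)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

# With a single sample (or identical weights) this reduces to a plain ELBO estimate.
print(importance_weighted_bound([-3.0]))        # -3.0
print(importance_weighted_bound([-3.0, -3.0]))  # -3.0
```

Averaging inside the log (rather than averaging log-weights) is what makes the multi-sample bound tighter as M grows.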
We further estimate the marginal log-likelihood/perplexity values via the stochastic variational lower bound, as the mean of a 5K-sample importance weighting estimate [1]. Therefore, for each dataset, we report four results: (i) Stein VAE + ELBO, (ii) Stein VAE + S-ELBO, (iii) Stein VIWAE + ELBO and (iv) Stein VIWAE + S-ELBO; the first term denotes that the training procedure is Stein VAE (Section 2) or Stein VIWAE (Section 3); the second term denotes that the testing log-likelihood/perplexity is estimated by the ELBO or by the stochastic variational lower bound, S-ELBO [1].

Table 2: Test perplexities on four text corpora.

Method | 20News | NYT | Science | RCV2
DocNADE [14] | 896 | 2496 | 1725 | 742
DEF [24] | — | 2416 | 1576 | —
NVDM [17] | 852 | — | — | 550
Stein VAE + ELBO | 849 | 2402 | 1499 | 549
Stein VAE + S-ELBO | 845 | 2401 | 1497 | 544
Stein VIWAE + ELBO | 837 | 2315 | 1453 | 523
Stein VIWAE + S-ELBO | 829 | 2277 | 1421 | 518
Model For MNIST, we train the model with one stochastic layer, z n , with 50 hidden units and
two deterministic layers, each with 200 units. The nonlinearity is set as tanh. The visible layer,
xn , follows a Bernoulli distribution. For the text corpora, we build a three-layer deep Poisson
network [24]. The sizes of hidden units are 200, 200 and 50 for the first, second and third layer,
respectively (see [24] for detailed architectures).
Results The log-likelihood/perplexity results are summarized in Tables 1 and 2. On MNIST, our Stein VAE achieves a variational lower bound of −85.21 nats, which outperforms standard VAE with the same model architecture. Our Stein VIWAE achieves a log-likelihood of −82.88 nats, exceeding normalizing flow (−85.1 nats) and importance weighted autoencoder (−84.78 nats), which is the best prior result obtained by a feed-forward neural network (FNN). DRAW [5] and PixelRNN [20], which exploit spatial structure, achieved log-likelihoods of around −80 nats. Our approach can also be applied to these models, but this is left as interesting future work. To further illustrate the benefit of model averaging, we vary the number of samples for θ (while retaining 100 samples for Z) and show the results associated with training/testing time in Figure 3. When M = 1 for θ, our model reduces to a point estimate for that parameter. Increasing the number of samples of θ (model averaging) improves the negative log-likelihood (NLL). The testing time with 100 samples of θ is around 0.12 ms per image.

Figure 3: NLL vs. training/testing time on MNIST with various numbers of samples for θ.

5.3 Semi-supervised Classification
We consider semi-supervised classification on MNIST and ImageNet [29] data. For each dataset,
we report the results obtained by (i) VAE, (ii) Stein VAE, and (iii) Stein VIWAE.
MNIST We randomly split the training set into a labeled and unlabeled set, and the number of labeled samples in each category varies from 10 to 300. We perform testing on the standard test set with 20 different training-set splits. The decoder for labels is implemented as p(y_n|z_n, θ̃) = softmax(θ̃z_n). We consider two types of decoders for images p(x_n|z_n, θ) and encoder f_η(x, ξ): (i) FNN: following [12], we use 50-dimensional latent variables z_n and two hidden layers, each with 600 hidden units, for both encoder and decoder; softplus is employed as the nonlinear activation function. (ii) All convolutional nets (CNN): inspired by [31], we replace the two hidden layers with 32 and 64 kernels of size 5 × 5 and a stride of 2. A fully connected layer is stacked on the CNN to produce 50-dimensional latent variables z_n. We use the leaky rectified activation [16]. The input of the encoder is formed by spatially aligning and stacking x_n and ξ, while the output of the decoder is the image itself.
Table 3 shows the classification results. Our Stein VAE and Stein VIWAE consistently achieve better performance than the VAE. We further observe that the variance of the Stein VIWAE results is much smaller than that of the Stein VAE results on small labeled data, indicating the former produces more robust parameter estimates. State-of-the-art results [26] are achieved by the Ladder network, which can be employed with our Stein-based approach; however, we will consider this extension as future work.

Table 3: Semi-supervised classification error (%) on MNIST. Nℓ is the number of labeled images per class. † [12]; ‡ our implementation.

Nℓ | FNN: VAE† | FNN: Stein VAE | FNN: Stein VIWAE | CNN: VAE‡ | CNN: Stein VAE | CNN: Stein VIWAE
10 | 3.33 ± 0.14 | 2.78 ± 0.24 | 2.67 ± 0.09 | 2.44 ± 0.17 | 1.94 ± 0.24 | 1.90 ± 0.05
60 | 2.59 ± 0.05 | 2.13 ± 0.08 | 2.09 ± 0.03 | 1.88 ± 0.05 | 1.44 ± 0.04 | 1.41 ± 0.02
100 | 2.40 ± 0.02 | 1.92 ± 0.05 | 1.88 ± 0.01 | 1.47 ± 0.02 | 1.01 ± 0.03 | 0.99 ± 0.02
300 | 2.18 ± 0.04 | 1.77 ± 0.03 | 1.75 ± 0.01 | 0.98 ± 0.02 | 0.89 ± 0.03 | 0.86 ± 0.01
ImageNet 2012 We consider the scalability of our model to large datasets. We split the 1.3 million training images into an unlabeled and labeled set, and vary the proportion of labeled images from 1% to 40%. The classes are balanced to ensure that no particular class is over-represented, i.e., the ratio of labeled and unlabeled images is the same for each class. We repeat the training process 10 times for the training settings with labeled images ranging from 1% to 10%, and 5 times for the training settings with labeled images ranging from 20% to 40%. Each time we utilize different sets of images as the unlabeled ones.

Table 4: Semi-supervised classification accuracy (%) on ImageNet.

Labeled | VAE | Stein VAE | Stein VIWAE | DGDN [21]
1% | 35.92 ± 1.91 | 36.44 ± 1.66 | 36.91 ± 0.98 | 43.98 ± 1.15
2% | 40.15 ± 1.52 | 41.71 ± 1.14 | 42.57 ± 0.84 | 46.92 ± 1.11
5% | 44.27 ± 1.47 | 46.14 ± 1.02 | 46.20 ± 0.52 | 47.36 ± 0.91
10% | 46.92 ± 1.02 | 47.83 ± 0.88 | 48.67 ± 0.31 | 48.41 ± 0.76
20% | 50.43 ± 0.41 | 51.62 ± 0.24 | 51.77 ± 0.12 | 51.51 ± 0.28
30% | 53.24 ± 0.33 | 55.02 ± 0.22 | 55.45 ± 0.11 | 54.14 ± 0.12
40% | 56.89 ± 0.11 | 58.17 ± 0.16 | 58.21 ± 0.12 | 57.34 ± 0.18
We employ an all convolutional net [31] for both the encoder and decoder, which replaces deterministic pooling (e.g., max-pooling) with strided convolutions. Residual connections [8] are incorporated to encourage gradient flow. The model architecture is detailed in Appendix E. Following [13], images are resized to 256 × 256. A 224 × 224 crop is randomly sampled from the images or their horizontal flips, with the mean subtracted [13]. We set M = 20 and k = 10.
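The crop-and-flip sampling can be sketched as follows. Mean subtraction is omitted, and the flip probability of 0.5 is an assumption (common practice) where the text is silent:

```python
import numpy as np

def sample_crop(img, rng, size=224):
    """Randomly sample a size x size crop from an H x W x C image (already
    resized so that H, W >= size), applying a horizontal flip with prob. 0.5."""
    H, W, _ = img.shape
    top = rng.randint(0, H - size + 1)
    left = rng.randint(0, W - size + 1)
    crop = img[top:top + size, left:left + size]
    if rng.rand() < 0.5:
        crop = crop[:, ::-1]  # horizontal flip
    return crop

rng = np.random.RandomState(0)
img = rng.rand(256, 256, 3)
out = sample_crop(img, rng)
print(out.shape)  # (224, 224, 3)
```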
Table 4 shows classification results indicating that Stein VAE and Stein VIWAE outperform VAE in all the experiments, demonstrating the effectiveness of our approach for semi-supervised classification. When the proportion of labeled examples is too small (< 10%), DGDN [21] outperforms all the VAE-based models, which is not surprising given that our models are deeper and thus have considerably more parameters than DGDN [21].
6 Conclusion
We have employed SVGD to develop a new method for learning a variational autoencoder, in which we need not specify an a priori form for the encoder distribution. Fast inference is achieved by learning a recognition model that mimics the manner in which the inferred code samples are generated. The method is further generalized and improved by performing importance sampling.
An extensive set of results, for unsupervised and semi-supervised learning, demonstrate excellent
performance and scaling to large datasets.
Acknowledgements
This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.
References
[1] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR,
2016.
[2] L. Chen, S. Dai, Y. Pu, C. Li, Q. Su, and L. Carin. Symmetric variational autoencoder
and connections to adversarial learning. In arXiv, 2017.
[3] Y. Feng, D. Wang, and Q. Liu. Learning to draw samples with amortized stein variational
gradient descent. In UAI, 2017.
[4] Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin. Scalable deep poisson factor analysis
for topic modeling. In ICML, 2015.
[5] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for
image generation. In ICML, 2015.
[6] J. Han and Q. Liu. Stein variational adaptive importance sampling. In UAI, 2017.
[7] S. Han, X. Liao, D.B. Dunson, and L. Carin. Variational gaussian copula inference. In AISTATS, 2016.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR,
2016.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[10] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improving
variational inference with inverse autoregressive flow. In NIPS, 2016.
[11] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[12] D.P. Kingma, D.J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with
deep generative models. In NIPS, 2014.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[14] H. Larochelle and S. Lauly. A neural autoregressive topic model. In NIPS, 2012.
[15] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose bayesian inference
algorithm. In NIPS, 2016.
[16] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network
acoustic models. In ICML, 2013.
[17] Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. In ICML,
2016.
[18] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In ICML,
2014.
[19] A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. In ICML, 2016.
[20] A. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural network. In ICML,
2016.
[21] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for
deep learning of images, labels and captions. In NIPS, 2016.
[22] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop,
2015.
[23] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image
model. Artificial Intelligence and Statistics (AISTATS), 2016.
[24] R. Ranganath, L. Tang, L. Charlin, and D. M. Blei. Deep exponential families. In AISTATS,
2015.
[25] R. Ranganath, D. Tran, and D. M. Blei. Hierarchical variational models. In ICML, 2016.
[26] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning
with ladder networks. In NIPS, 2015.
[27] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate
inference in deep generative models. In ICML, 2014.
[28] D.J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015.
[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy,
A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-fei. Imagenet large scale visual recognition
challenge. IJCV, 2014.
[30] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for
text sequence matching. In arXiv, 2017.
[31] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The
all convolutional net. In ICLR workshop, 2015.
[32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A
simple way to prevent neural networks from overfitting. JMLR, 2014.
[33] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
JMLR, 2010.
[34] Y. Pu, W. Wang, R. Henao, L. Chen, Z. Gan, C. Li, and L. Carin. Adversarial symmetric
variational autoencoder. In NIPS, 2017.
[35] Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin. Deconvolutional paragraph
representation learning. In NIPS, 2017.
[36] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson
factor analysis. In AISTATS, 2012.
6,647 | 7,012 | Reconstructing perceived faces from brain activations
with deep adversarial neural decoding
Ya?gmur G??l?t?rk*, Umut G??l?*,
Katja Seeliger, Sander Bosch,
Rob van Lier, Marcel van Gerven,
Radboud University, Donders Institute for Brain, Cognition and Behaviour
Nijmegen, the Netherlands
{y.gucluturk, u.guclu}@donders.ru.nl
*Equal contribution
Abstract
Here, we present a novel approach to solve the problem of reconstructing perceived
stimuli from brain responses by combining probabilistic inference with deep learning. Our approach first inverts the linear transformation from latent features to brain
responses with maximum a posteriori estimation and then inverts the nonlinear
transformation from perceived stimuli to latent features with adversarial training
of convolutional neural networks. We test our approach with a functional magnetic resonance imaging experiment and show that it can generate state-of-the-art
reconstructions of perceived faces from brain activations.
Figure 1: An illustration of our approach to solve the problem of reconstructing perceived stimuli
from brain responses by combining probabilistic inference with deep learning.
1
Introduction
A key objective in sensory neuroscience is to characterize the relationship between perceived stimuli
and brain responses. This relationship can be studied with neural encoding and neural decoding
in functional magnetic resonance imaging (fMRI) [1]. The goal of neural encoding is to predict
brain responses to perceived stimuli [2]. Conversely, the goal of neural decoding is to classify [3, 4],
identify [5, 6] or reconstruct [7–11] perceived stimuli from brain responses.
The recent integration of deep learning into neural encoding has been a very successful endeavor [12,
13]. To date, the most accurate predictions of brain responses to perceived stimuli have been
achieved with convolutional neural networks [14–20], leading to novel insights about the functional
organization of neural representations. At the same time, the use of deep learning as the basis for
neural decoding has received less widespread attention. Deep neural networks have been used for
classifying or identifying stimuli via the use of a deep encoding model [16, 21] or by predicting
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
intermediate stimulus features [22, 23]. Deep belief networks and convolutional neural networks have
been used to reconstruct basic stimuli (handwritten characters and geometric figures) from patterns
of brain activity [24, 25]. To date, going beyond such mostly retinotopy-driven reconstructions and
reconstructing complex naturalistic stimuli with high accuracy have proven to be difficult.
The integration of deep learning into neural decoding is an exciting approach for solving the reconstruction problem, which is defined as the inversion of the (non)linear transformation from perceived
stimuli to brain responses to obtain a reconstruction of the original stimulus from patterns of brain
activity alone. Reconstruction can be formulated as an inference problem, which can be solved by
maximum a posteriori estimation. Multiple variants of this formulation have been proposed in the
literature [26–30]. At the same time, significant improvements are to be expected from deep neural
decoding given the success of deep learning in solving image reconstruction problems in computer
vision such as colorization [31], face hallucination [32], inpainting [33] and super-resolution [34].
Here, we present a new approach by combining probabilistic inference with deep learning, which
we refer to as deep adversarial neural decoding (DAND). Our approach first inverts the linear
transformation from latent features to observed responses with maximum a posteriori estimation.
Next, it inverts the nonlinear transformation from perceived stimuli to latent features with adversarial
training and convolutional neural networks. An illustration of our model is provided in Figure 1. We
show that our approach achieves state-of-the-art reconstructions of perceived faces from the human
brain.
2 Methods
2.1 Problem statement
Let x ∈ R^{h×w×c}, z ∈ R^p, y ∈ R^q be a stimulus, feature, response triplet, and φ : R^{h×w×c} → R^p be a latent feature model such that z = φ(x) and x = φ^{-1}(z). Without loss of generality, we assume that all of the variables are normalized to have zero mean and unit variance.
We are interested in solving the problem of reconstructing perceived stimuli from brain responses:

    x̂ = φ^{-1}(arg max_z Pr(z | y))    (1)

where Pr(z | y) is the posterior. We reformulate the posterior through Bayes' theorem:

    x̂ = φ^{-1}(arg max_z [Pr(y | z) Pr(z)])    (2)
where Pr(y | z) is the likelihood, and Pr(z) is the prior. In the following subsections, we define the
latent feature model, the likelihood and the prior.
2.2 Latent feature model
We define the latent feature model φ(x) by modifying the VGG-Face pretrained model [35]. This
model is a 16-layer convolutional neural network, which was trained for face recognition. First, we
truncate it by retaining the first 14 layers and discarding the last two layers of the model. At this
point, the truncated model outputs 4096-dimensional latent features. To reduce the dimensionality of
the latent features, we then combine the model with principal component analysis by estimating the
loadings that project the 4096-dimensional latent features to the first 699 principal component scores
(maximum number of components given the number of training observations) and adding them at the
end of the truncated model as a new fully-connected layer. At this point, the combined model outputs
699-dimensional latent features.
Following the ideas presented in [36–38], we define the inverse of the feature model φ^{-1}(z) (i.e., the image generator) as a convolutional neural network which transforms the 699-dimensional latent variables to 64 × 64 × 3 images, and estimate its parameters via an adversarial process. The generator comprises five deconvolution layers: the ith layer has 2^{10−i} kernels with a size of 4 × 4, a stride of 2 × 2, a padding of 1 × 1, batch normalization and rectified linear units. Exceptions are the first layer, which has a stride of 1 × 1 and no padding; and the last layer, which has three kernels, no batch normalization [39] and hyperbolic tangent units. Note that we do use the inverse of the loadings in the generator.
To enable adversarial training, we define a discriminator (δ) along with the generator. The discriminator comprises five convolution layers. The ith layer has 2^{5+i} kernels with a size of 4 × 4, a stride of 2 × 2, a padding of 1 × 1, batch normalization and leaky rectified linear units with a slope of 0.2, except for the first layer, which has no batch normalization, and the last layer, which has one kernel, a stride of 1 × 1, no padding, no batch normalization and a sigmoid unit.
We train the generator and the discriminator by pitting them against each other in a two-player
zero-sum game, where the goal of the discriminator is to discriminate stimuli from reconstructions
and the goal of the generator is to generate reconstructions that are indiscriminable from original
stimuli. This ensures that reconstructed stimuli are similar to target stimuli on a pixel level and a
feature level.
The discriminator is trained by iteratively minimizing the following discriminator loss function:
    L_dis = −E[log(δ(x)) + log(1 − δ(φ^{-1}(z)))]    (3)
where δ is the output of the discriminator, which gives the probability that its input is an original
stimulus and not a reconstructed stimulus. The generator is trained by iteratively minimizing a
generator loss function, which is a linear combination of an adversarial loss function, a feature loss
function and a stimulus loss function:
    L_gen = −λ_adv E[log(δ(φ^{-1}(z)))] + λ_fea E[‖ψ(x) − ψ(φ^{-1}(z))‖²] + λ_sti E[‖x − φ^{-1}(z)‖²]    (4)

where the three terms are the adversarial loss L_adv, the feature loss L_fea and the stimulus loss L_sti, and ψ is the relu3_3 output of the pretrained VGG-16 model [40, 41]. Note that the targets and the reconstructions are lower resolution (i.e., 64 × 64) than the images that are used to obtain the latent features (i.e., 224 × 224).
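Given batches of stimuli, reconstructions, their feature-network outputs, and discriminator outputs, the generator objective of Eq. (4) is a weighted sum of three terms. A minimal NumPy sketch; the λ weights are placeholders, not the paper's values:

```python
import numpy as np

def generator_loss(x, x_rec, feat, feat_rec, d_rec,
                   lam_adv=1.0, lam_fea=1.0, lam_sti=1.0):
    """Three-term generator objective of Eq. (4), averaged over a batch.
    d_rec holds the discriminator outputs for the reconstructions, in (0, 1]."""
    l_adv = -np.mean(np.log(d_rec))                          # adversarial term
    l_fea = np.mean(np.sum((feat - feat_rec) ** 2, axis=1))  # feature term
    l_sti = np.mean(np.sum((x - x_rec) ** 2, axis=1))        # stimulus (pixel) term
    return lam_adv * l_adv + lam_fea * l_fea + lam_sti * l_sti

# Perfect reconstructions and a fully fooled discriminator give zero loss.
x = np.ones((2, 5)); feat = np.zeros((2, 7)); d_rec = np.ones(2)
print(generator_loss(x, x, feat, feat, d_rec))  # 0.0
```

Combining a pixel term with a feature term is what ties reconstructions to targets at both the pixel level and the feature level, as described above.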
2.3
Likelihood and prior
We define the likelihood as a multivariate Gaussian distribution over y:

    Pr(y | z) = N_y(B^T z, Σ)    (5)

where B = (β_1, . . . , β_q) ∈ R^{p×q} and Σ = diag(σ_1², . . . , σ_q²) ∈ R^{q×q}. Here, the features × voxels matrix B contains the learnable parameters of the likelihood in its columns β_i (which can also be interpreted as regression coefficients of a linear regression model, which predicts y from z).

We estimate the parameters with ordinary least squares, such that β̂_i = arg min_{β_i} E[‖y_i − β_i^T z‖²] and σ̂_i² = E[‖y_i − β̂_i^T z‖²].

We define the prior as a zero mean and unit variance multivariate Gaussian distribution Pr(z) = N_z(0, I).
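The per-voxel least-squares fit has a closed form and can be computed jointly for all voxels; a sketch on synthetic data (shapes and noise level are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
n, p, q = 200, 10, 4
Z = rng.randn(n, p)                     # latent features (rows: observations)
B_true = rng.randn(p, q)
Y = Z @ B_true + 0.1 * rng.randn(n, q)  # voxel responses

# Ordinary least squares: one column beta_i per voxel, solved jointly.
B_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
resid = Y - Z @ B_hat
sigma2_hat = np.mean(resid ** 2, axis=0)  # per-voxel residual noise variances
print(B_hat.shape)  # (10, 4)
```

Solving all voxels in one `lstsq` call is equivalent to the column-by-column definition above, since the objective decouples across voxels.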
2.4 Posterior
To derive the posterior (2), we first reformulate the likelihood as a multivariate Gaussian distribution
over z. That is, after taking out constant terms with respect to z from the likelihood, it immediately
becomes proportional to the canonical-form Gaussian over z with η = BΣ⁻¹y and Λ = BΣ⁻¹B^T,
which is equivalent to the standard-form Gaussian with mean Λ⁻¹η and covariance Λ⁻¹.
This allows us to write:

Pr(z|y) ∝ N_z(Λ⁻¹η, Λ⁻¹) N_z(0, I)    (6)

Next, recall that the product of two multivariate Gaussians can be formulated in terms of one
multivariate Gaussian [42]. That is, N_z(m_1, Σ_1) N_z(m_2, Σ_2) ∝ N_z(m_c, Σ_c) with
m_c = (Σ_1⁻¹ + Σ_2⁻¹)⁻¹ (Σ_1⁻¹ m_1 + Σ_2⁻¹ m_2) and Σ_c = (Σ_1⁻¹ + Σ_2⁻¹)⁻¹. By plugging this formulation into Equation (6), we obtain Pr(z|y) ∝ N_z(m_c, Σ_c) with m_c = (BΣ⁻¹B^T + I)⁻¹ BΣ⁻¹y
and Σ_c = (BΣ⁻¹B^T + I)⁻¹.
Recall that we are interested in reconstructing stimuli from responses by generating reconstructions
from the features that maximize the posterior. Notice that the (unnormalized) posterior is maximized
at its mean m_c since this corresponds to the mode for a multivariate Gaussian distribution. Therefore,
the solution of the problem of reconstructing stimuli from responses reduces to the following simple
expression:

x̂ = φ⁻¹((BΣ⁻¹B^T + I)⁻¹ BΣ⁻¹y)    (7)
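The inner term of Equation (7), the posterior mean m_c, can be computed directly. A minimal NumPy sketch (applying the generator φ⁻¹ to the result is omitted; the helper name is ours):

```python
import numpy as np

def map_latent(B, sigma2, y):
    """Posterior mean of z given y, i.e. the inner term of Eq. (7).

    B: p x q regression matrix, sigma2: length-q per-voxel noise variances,
    y: length-q response vector. Returns the length-p MAP latent vector.
    """
    Sinv = np.diag(1.0 / sigma2)             # Sigma^{-1}
    A = B @ Sinv @ B.T + np.eye(B.shape[0])  # B Sigma^{-1} B^T + I
    return np.linalg.solve(A, B @ Sinv @ y)  # (.)^{-1} B Sigma^{-1} y
```

Note that this is exactly ridge-regularized regression of z on y, which is what the unit-variance Gaussian prior induces.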
3 Results
3.1 Datasets
We used the following datasets in our experiments:
fMRI dataset. We collected a new fMRI dataset, which comprises face stimuli and associated blood-oxygen-level dependent (BOLD) responses. The stimuli used in the fMRI experiment were drawn
from [43–45] and other online sources, and consisted of photographs of front-facing individuals
with neutral expressions. We measured BOLD responses (TR = 1.4 s, voxel size = 2 × 2 × 2 mm³,
whole-brain coverage) of two healthy adult subjects (S1: 28-year old female; S2: 39-year old male) as
they were fixating on a target (0.6 × 0.6 degree) [46] superimposed on the stimuli (15 × 15 degrees).
Each face was presented at 5 Hz for 1.4 s and followed by a middle gray background presented for
2.8 s. In total, 700 faces were presented twice for the training set, and 48 faces were repeated 13 times
for the test set. The test set was balanced in terms of gender and ethnicity (based on the norming data
provided in the original datasets). The experiment was approved by the local ethics committee (CMO
Regio Arnhem-Nijmegen) and the subjects provided written informed consent in accordance with the
Declaration of Helsinki. Our fMRI dataset is available from the first authors on reasonable request.
The stimuli were preprocessed as follows: Each image was cropped and resized to 224 × 224 pixels.
This procedure was organized such that the distance between the top of the image and the vertical
center of the eyes was 87 pixels, the distance between the vertical center of the eyes and the vertical
center of the mouth was 75 pixels, the distance between the vertical center of the mouth and the
bottom of the image was 61 pixels, and the horizontal center of the eyes and the mouth was at the
horizontal center of the image.
The fMRI data were preprocessed as follows: Functional scans were realigned to the first functional
scan and the mean functional scan, respectively. Realigned functional scans were slice time corrected.
Anatomical scans were coregistered to the mean functional scan. Brains were extracted from
the coregistered anatomical scans. Finally, stimulus-specific responses were deconvolved from
the realigned and slice time corrected functional scans with a general linear model [47]. Here,
deconvolution refers to estimating regression coefficients (y) of the following GLMs: ỹ = Xy,
where ỹ is raw voxel responses, X is the HRF-convolved design matrix (one regressor per stimulus
indicating its presence), and y is deconvolved voxel responses such that y is a vector of size m × 1
with m denoting the number of unique stimuli, and there is one y per voxel.
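For a single voxel, the deconvolution step described above amounts to a least-squares solve of ỹ = Xy. A minimal sketch with our own names (the design matrix X would be built from stimulus onsets convolved with an HRF, which is not shown here):

```python
import numpy as np

def deconvolve(y_raw, X):
    """Recover stimulus-specific responses y from a raw voxel time course.

    y_raw: length-T raw response vector (one scan per entry),
    X: T x m HRF-convolved design matrix (one regressor per stimulus).
    Returns the length-m deconvolved response vector.
    """
    y, *_ = np.linalg.lstsq(X, y_raw, rcond=None)
    return y
```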
CelebA dataset [48]. This dataset comprises 202599 in-the-wild portraits of 10177 people, which
were drawn from online sources. The portraits are annotated with 40 attributes and five landmarks.
We preprocessed the portraits as we preprocessed the stimuli in our fMRI dataset.
3.2 Implementation details
Our implementation makes use of Chainer and Cupy with CUDA and cuDNN [49] except for the
following: The VGG-16 and VGG-Face pretrained models were ported to Chainer from Caffe [50].
Principal component analysis was implemented in scikit-learn [51]. fMRI preprocessing was implemented in SPM [52]. Brain extraction was implemented in FSL [53].
We trained the discriminator and the generator on the entire CelebA dataset by iteratively minimizing
the discriminator loss function and the generator loss function in sequence for 100 epochs with Adam
[54]. Model parameters were initialized as follows: biases were set to zero, the scaling parameters
were drawn from N(1, 2 × 10⁻² I), the shifting parameters were set to zero and the weights were drawn
from N(1, 10⁻² I) [37]. We set the hyperparameters of the loss functions as follows: λ_adv = 10²,
λ_dis = 10², λ_fea = 10⁻² and λ_sti = 2 × 10⁻⁶ [38]. We set the hyperparameters of the optimizer as
follows: α = 0.001, β_1 = 0.9, β_2 = 0.999 and ε = 10⁻⁸ [37].
We estimated the parameters of the likelihood term on the training split of our fMRI dataset.
3.3 Evaluation metrics
We evaluated our approach on the test split of our fMRI dataset with the following metrics: First,
the feature similarity between the stimuli and their reconstructions, where the feature similarity is
defined as the Euclidean similarity between the features, i.e. the relu7 outputs of the VGG-Face pretrained model. Second, the Pearson correlation coefficient between the stimuli and their
reconstructions. Third, the structural similarity between the stimuli and their reconstructions [55]. All
evaluation was done on a held-out set not used at any point during model estimation or training. The
voxels used in the reconstructions were selected as follows: For each test trial, n voxels with smallest
residuals (on training set) were selected. n itself was selected such that reconstruction accuracy of
remaining test trials was highest. We also performed an encoding analysis to see how well the latent
features were predictive of voxel responses in different brain areas. The results of this analysis is
reported in the supplementary material.
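The first two metrics can be sketched as follows. Note that the Euclidean-similarity convention used here, 1/(1 + distance), is one plausible choice and not necessarily the paper's exact definition; structural similarity (SSIM) [55] is omitted since it is typically taken from a library implementation:

```python
import numpy as np

def euclidean_similarity(a, b):
    # One plausible convention: similarity decreases with Euclidean distance.
    return 1.0 / (1.0 + np.linalg.norm(np.ravel(a) - np.ravel(b)))

def pearson(a, b):
    # Pearson correlation coefficient between two flattened arrays.
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])
```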
3.4 Reconstruction
We first demonstrate our results by reconstructing the stimulus images in the test set using i) the latent
features and ii) the brain responses. Figure 2 shows 4 representative examples of the test stimuli
and their reconstructions. The first column of both panels shows the original test stimuli. The second
column of both panels shows the reconstructions of these stimuli x from the latent features z obtained
by φ(x). These can be considered as an upper limit for the reconstruction accuracy of the brain
responses since they are the best possible reconstructions that we can expect to achieve with a perfect
neural decoder that can exactly predict the latent features from brain responses. The third and fourth
columns of the figure show reconstructions of brain responses to stimuli of Subject 1 and Subject 2,
respectively.
Figure 2: Reconstructions of the test stimuli from the latent features (model) and the brain responses
of the two subjects (brain 1 and brain 2).
Visual inspection of the reconstructions from brain responses reveals that they match the test stimuli
in several key aspects, such as gender, skin color and facial features. Table 1 shows the three
reconstruction accuracy metrics for both subjects in terms of ratio of the reconstruction accuracy
from brain responses to the reconstruction accuracy from latent features, which were significantly
(p < 0.05, permutation test) above those for randomly sampled latent features (cf. 0.5181, 0.1532
and 0.5183, respectively).
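The significance test above can be sketched as a one-sided permutation test on the difference of mean accuracies; this is a generic sketch of such a test, not necessarily the paper's exact procedure:

```python
import numpy as np

def permutation_pvalue(obs, null_samples, n_perm=10000, seed=0):
    """One-sided permutation test: is mean(obs) larger than mean(null_samples)
    beyond what random label shuffling would produce?"""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([obs, null_samples])
    n = len(obs)
    observed = obs.mean() - null_samples.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if pooled[:n].mean() - pooled[n:].mean() >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```

Here `obs` would hold the per-trial accuracies from brain responses and `null_samples` the accuracies from randomly sampled latent features.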
Table 1: Reconstruction accuracy of the proposed decoding approach. The results are reported as the
ratio of accuracy of reconstructing from brain responses and latent features.

      Feature similarity   Pearson correlation coefficient   Structural similarity
S1    0.6546 ± 0.0220      0.6512 ± 0.0493                   0.8365 ± 0.0239
S2    0.6465 ± 0.0222      0.6580 ± 0.0480                   0.8325 ± 0.0229
Furthermore, besides reconstruction accuracy, we tested the identification performance within and
between groups that shared similar features (those that share gender or ethnicity as defined by the
norming data were assumed to share similar features). Identification accuracies (which ranged
between 57% and 62%) were significantly above chance-level (which ranged between 3% and 8%) in
all cases (p ≪ 0.05, Student's t-test). Furthermore, we found no significant differences between the
identification accuracies when a reconstruction was identified among a group sharing similar features
versus among a group that did not share similar features (p > 0.79, Student's t-test) (cf. [56]).
3.5 Visualization, interpolation and sampling
In the second experiment, we analyzed the properties of the stimulus features predictive of brain activations to characterize neural representations of faces. We first investigated the model representations
to better understand what kind of features drive responses of the model. We visualized the features
explaining the highest variance by independently setting the values of the first few latent dimensions
to vary between their minimum and maximum values and generating reconstructions from these
representations (Figure 3). As a result, we found that many of the latent features were coding for
interpretable high level information such as age, gender, etc. For example, the first feature in Figure 3
appears to code for gender, the second one appears to code for hair color and complexion, the third
one appears to code for age, and the fourth one appears to code for two different facial expressions.
Figure 3: Reconstructions from features with single features set to vary between their minimum and
maximum values.
We then explored the feature space that was learned by the latent feature model and the response
space that was learned by the likelihood by systematically traversing the reconstructions obtained
from different points in these spaces.
Figure 4A shows examples of reconstructions of stimuli from the latent features (rows one and four)
and brain responses (rows two, three, five and six), as well as reconstructions from their interpolations
between two points (columns three to nine). The reconstructions from the interpolations between two
points show semantic changes with no sharp transitions.
Figure 4B shows reconstructions from latent features sampled from the model prior (first row) and
from responses sampled from the response prior of each subject (second and third rows). The
reconstructions from sampled representations are diverse and of high quality.
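The traversals above reduce to simple latent-space operations. A minimal sketch, assuming plain linear interpolation between latent vectors and sampling from the standard normal prior Pr(z) = N(0, I); helper names are ours, and each resulting row would be decoded with the generator φ⁻¹:

```python
import numpy as np

def interpolate(z1, z2, n_steps=9):
    """Linear interpolation between two latent vectors.

    Returns an n_steps x p array whose first and last rows are z1 and z2."""
    w = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - w) * z1[None, :] + w * z2[None, :]

def sample_prior(p, n, seed=0):
    # Draw n latent vectors of dimension p from the standard normal prior.
    return np.random.default_rng(seed).normal(size=(n, p))
```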
These results provide evidence that no memorization took place and the models learned relevant and
interesting representations [37]. Furthermore, these results suggest that neural representations of
faces might be embedded in a continuous and distributed space in the brain.
3.6 Comparison versus state-of-the-art
In this section we qualitatively (Figure 5) and quantitatively (Table 2) compare the performance of
our approach with two existing decoding approaches from the literature†. Figure 5 shows example
reconstructions from brain responses with three different approaches, namely with our approach,
the eigenface approach [11, 57] and the identity transform approach [58, 29]. To achieve a fair
comparison, the implementations of the three approaches only differed in terms of the feature models
that were used, i.e. the eigenface approach had an eigenface (PCA) feature model and the identity
transform approach had simply an identity transformation in place of the feature model.
Visual inspection of the reconstructions displayed in Figure 5 shows that DAND clearly outperforms
the existing approaches. In particular, our reconstructions better capture the features of the stimuli
† We also experimented with the VGG-ImageNet pretrained model, which failed to match the reconstruction
performance of the VGG-Face model, while their encoding performances were comparable in non-face related
brain areas. We plan to further investigate other models in detail in future work.
Figure 4: Reconstructions from interpolated (A) and sampled (B) latent features (model) and brain
responses of the two subjects (brain 1 and brain 2).
such as gender, skin color and facial features. Furthermore, our reconstructions are more detailed,
sharper, less noisy and more photorealistic than the eigenface and identity transform approaches. A
quantitative comparison of the performance of the three approaches shows that the reconstruction
accuracies achieved by our approach were significantly higher than those achieved by the existing
approaches (p ≪ 0.05, Student's t-test).
Table 2: Reconstruction accuracies of the three decoding approaches. LF denotes reconstructions
from latent features.

                        Identity                                            Eigenface                                           DAND
                        S1              S2              LF                  S1              S2              LF                  S1              S2              LF
Feature similarity      0.1254±0.0031   0.1254±0.0038   1.0000±0.0000       0.1475±0.0043   0.1457±0.0043   0.3841±0.0149       0.1900±0.0052   0.1867±0.0054   0.2895±0.0137
Pearson correlation     0.4194±0.0347   0.4299±0.0350   1.0000±0.0000       0.3779±0.0403   0.2241±0.0435   0.9875±0.0011       0.4679±0.0358   0.4722±0.0344   0.7181±0.0419
Structural similarity   0.3744±0.0083   0.3877±0.0083   1.0000±0.0000       0.3735±0.0102   0.3671±0.0113   0.9234±0.0040       0.4662±0.0126   0.4676±0.0130   0.5595±0.0181
Figure 5: Reconstructions from the latent features and brain responses of the two subjects (brain 1
and brain 2) using our decoding approach, as well as the eigenface and identity transform approaches
for comparison.
3.7 Factors contributing to reconstruction accuracy
Finally, we investigated the factors contributing to the quality of reconstructions from brain responses.
All of the faces in the test set had been annotated with 30 objective physical measures (such as
nose width, face length, etc.) and 14 subjective measures (such as attractiveness, gender, ethnicity,
etc.). Among these measures, we identified five subjective measures that are important for face
perception [59–64] as measures of interest and supplemented them with an additional measure of
stimulus complexity. Complexity was included because of its important role in visual perception [65].
The selected measures were attractiveness, complexity, ethnicity, femininity, masculinity and prototypicality. Note that the complexity measure was not part of the dataset annotations and was defined
as the Kolmogorov complexity of the stimuli, which was taken to be their compressed file sizes [66].
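The complexity measure above can be approximated with any lossless compressor. A minimal sketch using zlib as a stand-in, since the exact compressed file format is not specified in the text:

```python
import zlib

def complexity(image_bytes):
    """Kolmogorov-complexity proxy: length of the losslessly compressed image.

    Highly regular images compress well (low complexity); noisy or detailed
    images compress poorly (high complexity)."""
    return len(zlib.compress(image_bytes, 9))
```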
To this end, we correlated the reconstruction accuracies of the 48 stimuli in the test set (for both
subjects) with their corresponding measures (except for ethnicity) and used a two-tailed Student's
t-test to test if the multiple comparison corrected (Bonferroni correction) p-value was less than the
critical value of 0.05. In the case of ethnicity we used one-way analysis of variance to compare the
reconstruction accuracies of faces with different ethnicities.
We were able to reject the null hypothesis for the measures complexity, femininity and masculinity,
but failed to do so for attractiveness, ethnicity and prototypicality. Specifically, we observed a significant negative correlation (r = -0.3067) between stimulus complexity and reconstruction accuracy.
Furthermore, we found that masculinity and reconstruction accuracy were significantly positively
correlated (r = 0.3841). Complementing this result, we found a negative correlation (r = -0.3961)
between femininity and reconstruction accuracy. We found no effect of attractiveness, ethnicity and
prototypicality on the quality of reconstructions. We then compared the complexity levels of the
images of each gender and found that female face images were significantly more complex than male
face images (p < 0.05, Student's t-test), pointing to complexity as the factor underlying the relationship between reconstruction accuracy and gender. This result demonstrates the importance of taking
stimulus complexity into account while making inferences about factors driving the reconstructions
from brain responses.
4 Conclusion
In this study we combined probabilistic inference with deep learning to derive a novel deep neural
decoding approach. We tested our approach by reconstructing face stimuli from BOLD responses at
an unprecedented level of accuracy and detail, matching the target stimuli in several key aspects such
as gender, skin color and facial features as well as identifying perceptual factors contributing to the
reconstruction accuracy. Deep decoding approaches such as the one developed here are expected to
play an important role in the development of new neuroprosthetic devices that operate by reading
subjective information from the human brain.
Acknowledgments
This work has been partially supported by a VIDI grant (639.072.513) from the Netherlands Organization for Scientific Research and a GPU grant (GeForce Titan X) from the Nvidia Corporation.
References
[1] T. Naselaris, K. N. Kay, S. Nishimoto, and J. L. Gallant, "Encoding and decoding in fMRI," NeuroImage, vol. 56, no. 2, pp. 400–410, may 2011.
[2] M. van Gerven, "A primer on encoding models in sensory neuroscience," J. Math. Psychol., vol. 76, no. B, pp. 172–183, 2017.
[3] J. V. Haxby, "Distributed and overlapping representations of faces and objects in ventral temporal cortex," Science, vol. 293, no. 5539, pp. 2425–2430, sep 2001.
[4] Y. Kamitani and F. Tong, "Decoding the visual and subjective contents of the human brain," Nature Neuroscience, vol. 8, no. 5, pp. 679–685, apr 2005.
[5] T. M. Mitchell, S. V. Shinkareva, A. Carlson, K.-M. Chang, V. L. Malave, R. A. Mason, and M. A. Just, "Predicting human brain activity associated with the meanings of nouns," Science, vol. 320, no. 5880, pp. 1191–1195, may 2008.
[6] K. N. Kay, T. Naselaris, R. J. Prenger, and J. L. Gallant, "Identifying natural images from human brain activity," Nature, vol. 452, no. 7185, pp. 352–355, mar 2008.
[7] B. Thirion, E. Duchesnay, E. Hubbard, J. Dubois, J.-B. Poline, D. Lebihan, and S. Dehaene, "Inverse retinotopy: Inferring the visual content of images from brain activation patterns," NeuroImage, vol. 33, no. 4, pp. 1104–1116, dec 2006.
[8] Y. Miyawaki, H. Uchida, O. Yamashita, M. aki Sato, Y. Morito, H. C. Tanabe, N. Sadato, and Y. Kamitani, "Visual image reconstruction from human brain activity using a combination of multiscale local image decoders," Neuron, vol. 60, no. 5, pp. 915–929, dec 2008.
[9] T. Naselaris, R. J. Prenger, K. N. Kay, M. Oliver, and J. L. Gallant, "Bayesian reconstruction of natural images from human brain activity," Neuron, vol. 63, no. 6, pp. 902–915, sep 2009.
[10] S. Nishimoto, A. T. Vu, T. Naselaris, Y. Benjamini, B. Yu, and J. L. Gallant, "Reconstructing visual experiences from brain activity evoked by natural movies," Current Biology, vol. 21, no. 19, pp. 1641–1646, oct 2011.
[11] A. S. Cowen, M. M. Chun, and B. A. Kuhl, "Neural portraits of perception: Reconstructing face images from evoked brain activity," NeuroImage, vol. 94, pp. 12–22, jul 2014.
[12] D. L. K. Yamins and J. J. DiCarlo, "Using goal-driven deep learning models to understand sensory cortex," Nat. Neurosci., vol. 19, pp. 356–365, 2016.
[13] N. Kriegeskorte, "Deep neural networks: A new framework for modeling biological vision and brain information processing," Annu. Rev. Vis. Sci., vol. 1, no. 1, pp. 417–446, 2015.
[14] D. L. K. Yamins, H. Hong, C. F. Cadieu, E. A. Solomon, D. Seibert, and J. J. DiCarlo, "Performance-optimized hierarchical models predict neural responses in higher visual cortex," Proceedings of the National Academy of Sciences, vol. 111, no. 23, pp. 8619–8624, may 2014.
[15] S.-M. Khaligh-Razavi and N. Kriegeskorte, "Deep supervised, but not unsupervised, models may explain IT cortical representation," PLoS Computational Biology, vol. 10, no. 11, p. e1003915, nov 2014.
[16] U. Güçlü and M. van Gerven, "Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream," Journal of Neuroscience, vol. 35, no. 27, pp. 10005–10014, jul 2015.
[17] R. M. Cichy, A. Khosla, D. Pantazis, A. Torralba, and A. Oliva, "Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence," Scientific Reports, vol. 6, no. 1, jun 2016.
[18] U. Güçlü, J. Thielen, M. Hanke, and M. van Gerven, "Brains on beats," in Advances in Neural Information Processing Systems, 2016.
[19] U. Güçlü and M. A. J. van Gerven, "Modeling the dynamics of human brain activity with recurrent neural networks," Frontiers in Computational Neuroscience, vol. 11, feb 2017.
[20] M. Eickenberg, A. Gramfort, G. Varoquaux, and B. Thirion, "Seeing it all: Convolutional network layers map the function of the human visual system," NeuroImage, vol. 152, pp. 184–194, may 2017.
[21] U. Güçlü and M. van Gerven, "Increasingly complex representations of natural movies across the dorsal stream are shared between subjects," NeuroImage, vol. 145, pp. 329–336, jan 2017.
[22] T. Horikawa and Y. Kamitani, "Generic decoding of seen and imagined objects using hierarchical visual features," Nature Communications, vol. 8, p. 15037, may 2017.
[23] ——, "Hierarchical neural representation of dreamed objects revealed by brain decoding with deep neural network features," Frontiers in Computational Neuroscience, vol. 11, jan 2017.
[24] M. van Gerven, F. de Lange, and T. Heskes, "Neural decoding with hierarchical generative models," Neural Comput., vol. 22, no. 12, pp. 3127–3142, 2010.
[25] C. Du, C. Du, and H. He, "Sharing deep generative representation for perceived image reconstruction from human brain activity," CoRR, vol. abs/1704.07575, 2017.
[26] B. Thirion, E. Duchesnay, E. Hubbard, J. Dubois, J.-B. Poline, D. Lebihan, and S. Dehaene, "Inverse retinotopy: inferring the visual content of images from brain activation patterns," Neuroimage, vol. 33, no. 4, pp. 1104–1116, 2006.
[27] T. Naselaris, R. J. Prenger, K. N. Kay, M. Oliver, and J. L. Gallant, "Bayesian reconstruction of natural images from human brain activity," Neuron, vol. 63, no. 6, pp. 902–915, 2009.
[28] U. Güçlü and M. van Gerven, "Unsupervised learning of features for Bayesian decoding in functional magnetic resonance imaging," in Belgian-Dutch Conference on Machine Learning, 2013.
[29] S. Schoenmakers, M. Barth, T. Heskes, and M. van Gerven, "Linear reconstruction of perceived images from human brain activity," NeuroImage, vol. 83, pp. 951–961, dec 2013.
[30] S. Schoenmakers, U. Güçlü, M. van Gerven, and T. Heskes, "Gaussian mixture models and semantic gating improve reconstructions from human brain activity," Frontiers in Computational Neuroscience, vol. 8, jan 2015.
[31] R. Zhang, P. Isola, and A. A. Efros, "Colorful image colorization," Lect. Notes Comput. Sci., vol. 9907 LNCS, pp. 649–666, 2016.
[32] Y. Güçlütürk, U. Güçlü, R. van Lier, and M. van Gerven, "Convolutional sketch inversion," in Lecture Notes in Computer Science. Springer International Publishing, 2016, pp. 810–824.
[33] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros, "Context encoders: Feature learning by inpainting," CoRR, vol. abs/1604.07379, 2016.
[34] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," CoRR, vol. abs/1609.04802, 2016.
[35] O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Deep face recognition," in British Machine Vision Conference, jul 2016.
[36] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, "Generative adversarial networks," CoRR, vol. abs/1406.2661, 2014.
[37] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," CoRR, vol. abs/1511.06434, 2015.
[38] A. Dosovitskiy and T. Brox, "Generating images with perceptual similarity metrics based on deep networks," CoRR, vol. abs/1602.02644, 2016.
[39] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," CoRR, vol. abs/1502.03167, 2015.
[40] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," CoRR, vol. abs/1409.1556, 2014.
[41] J. Johnson, A. Alahi, and F. Li, "Perceptual losses for real-time style transfer and super-resolution," CoRR, vol. abs/1603.08155, 2016.
[42] K. B. Petersen and M. S. Pedersen, "The matrix cookbook," nov 2012, version 20121115.
[43] D. S. Ma, J. Correll, and B. Wittenbrink, "The Chicago face database: A free stimulus set of faces and norming data," Behavior Research Methods, vol. 47, no. 4, pp. 1122–1135, jan 2015.
[44] N. Strohminger, K. Gray, V. Chituc, J. Heffner, C. Schein, and T. B. Heagins, "The MR2: A multi-racial, mega-resolution database of facial stimuli," Behavior Research Methods, vol. 48, no. 3, pp. 1197–1204, aug 2015.
[45] O. Langner, R. Dotsch, G. Bijlstra, D. H. J. Wigboldus, S. T. Hawk, and A. van Knippenberg, "Presentation and validation of the Radboud faces database," Cognition & Emotion, vol. 24, no. 8, pp. 1377–1388, dec 2010.
[46] L. Thaler, A. Schütz, M. Goodale, and K. Gegenfurtner, "What is the best fixation target? The effect of target shape on stability of fixational eye movements," Vision Research, vol. 76, pp. 31–42, jan 2013.
[47] J. A. Mumford, B. O. Turner, F. G. Ashby, and R. A. Poldrack, "Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses," NeuroImage, vol. 59, no. 3, pp. 2636–2643, feb 2012.
[48] Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep learning face attributes in the wild," in Proceedings of International Conference on Computer Vision (ICCV), dec 2015.
[49] S. Tokui, K. Oono, S. Hido, and J. Clayton, "Chainer: a next-generation open source framework for deep learning," in Advances in Neural Information Processing Systems Workshops, 2015.
[50] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," CoRR, vol. abs/1408.5093, 2014.
[51] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[52] K. Friston, J. Ashburner, S. Kiebel, T. Nichols, and W. Penny, Eds., Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, 2007.
[53] M. Jenkinson, C. F. Beckmann, T. E. Behrens, M. W. Woolrich, and S. M. Smith, "FSL," NeuroImage, vol. 62, no. 2, pp. 782–790, aug 2012.
[54] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," CoRR, vol. abs/1412.6980, 2014.
[55] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, apr 2004.
[56] E. Goesaert and H. P. O. de Beeck, "Representations of facial identity information in the ventral visual stream investigated with multivoxel pattern analyses," Journal of Neuroscience, vol. 33, no. 19, pp. 8549–8558, may 2013.
[57] H. Lee and B. A. Kuhl, "Reconstructing perceived and retrieved faces from activity patterns in lateral parietal cortex," Journal of Neuroscience, vol. 36, no. 22, pp. 6069–6082, jun 2016.
[58] M. van Gerven and T. Heskes, "A linear Gaussian framework for decoding of perceived images," in 2012 Second International Workshop on Pattern Recognition in NeuroImaging. IEEE, jul 2012.
[59] A. C. Hahn and D. I. Perrett, "Neural and behavioral responses to attractiveness in adult and infant faces," Neuroscience & Biobehavioral Reviews, vol. 46, pp. 591–603, oct 2014.
[60] D. I. Perrett, K. A. May, and S. Yoshikawa, "Facial shape and judgements of female attractiveness," Nature, vol. 368, no. 6468, pp. 239–242, mar 1994.
[61] B. Birkás, M. Dzhelyova, B. Lábadi, T. Bereczkei, and D. I. Perrett, "Cross-cultural perception of trustworthiness: The effect of ethnicity features on evaluation of faces' observed trustworthiness across four samples," Personality and Individual Differences, vol. 69, pp. 56–61, oct 2014.
[62] M. A. Strom, L. A. Zebrowitz, S. Zhang, P. M. Bronstad, and H. K. Lee, "Skin and bones: The contribution of skin tone and facial structure to racial prototypicality ratings," PLoS ONE, vol. 7, no. 7, p. e41193, jul 2012.
[63] A. C. Little, B. C. Jones, D. R. Feinberg, and D. I. Perrett, "Men's strategic preferences for femininity in female faces," British Journal of Psychology, vol. 105, no. 3, pp. 364–381, jun 2013.
[64] M. de Lurdes Carrito, I. M. B. dos Santos, C. E. Lefevre, R. D. Whitehead, C. F. da Silva, and D. I. Perrett, "The role of sexually dimorphic skin colour and shape in attractiveness of male faces," Evolution and Human Behavior, vol. 37, no. 2, pp. 125–133, mar 2016.
[65] Y. Güçlütürk, R. H. A. H. Jacobs, and R. van Lier, "Liking versus complexity: Decomposing the inverted U-curve," Frontiers in Human Neuroscience, vol. 10, mar 2016.
[66] D. Donderi and S. McFadden, "Compressed file length predicts search time and errors on visual displays," Displays, vol. 26, no. 2, pp. 71–78, apr 2005.
fixational:1 generate:2 chainer:3 canonical:1 notice:1 cuda:1 cmo:1 neuroscience:11 estimated:1 lgen:1 per:2 mega:1 anatomical:2 diverse:1 write:1 brucher:1 vol:57 group:3 key:3 four:2 drawn:4 kyi:2 preprocessed:4 imaging:3 sum:1 year:2 sti:2 inverse:4 fourth:2 place:2 reasonable:1 scaling:1 comparable:1 huszar:1 layer:13 ashby:1 followed:1 courville:1 correspondence:1 display:2 activity:14 sato:1 helsinki:1 uchida:1 interpolated:2 aspect:2 min:3 truncate:1 combination:2 request:1 across:3 reconstructing:13 character:1 increasingly:1 sheikh:1 rob:1 making:1 s1:5 rev:1 hl:1 zebrowitz:1 iccv:1 pr:10 taken:1 equation:1 visualization:1 committee:1 thirion:4 yamins:2 nose:1 end:2 zk2:2 photo:1 whitehead:1 available:1 gaussians:1 decomposing:1 kuhl:2 hierarchical:5 generic:1 magnetic:3 batch:6 eigen:1 rp:2 convolved:1 original:5 primer:1 top:1 remaining:1 cf:2 dand:3 denotes:1 publishing:1 personality:1 carlson:1 hahn:1 objective:2 skin:6 perrot:1 norming:3 mumford:1 parametric:1 cudnn:1 gradient:1 convnet:2 distance:3 sci:2 lateral:1 landmark:1 decoder:2 collected:1 stim:6 ozair:1 ru:1 besides:1 dicarlo:2 racial:2 code:4 relationship:3 illustration:2 reformulate:2 colorization:2 minimizing:3 ratio:2 difficult:1 mostly:1 beckmann:1 neuroimaging:1 statement:1 sharper:1 trustworthiness:2 nijmegen:2 negative:2 ba:1 design:2 implementation:3 gallant:5 upper:1 vertical:4 observation:1 convolution:1 datasets:3 neuron:3 displayed:1 truncated:2 beat:1 parietal:1 communication:1 sharp:1 rating:1 clayton:1 namely:1 vanderplas:1 optimized:1 discriminator:9 imagenet:1 learned:3 kingma:1 nip:1 adult:2 beyond:1 able:1 pattern:8 perception:4 reading:1 max:4 belief:1 mouth:3 shifting:1 critical:1 pathak:1 natural:5 event:1 friston:1 predicting:2 residual:1 turner:1 heffner:1 improve:1 movie:2 thaler:1 eye:4 dubois:2 psychol:1 jun:3 review:1 prior:7 geometric:1 literature:2 tangent:1 voxels:3 epoch:1 contributing:3 theis:1 python:1 embedded:1 loss:10 fully:1 mcfadden:1 expect:1 men:1 
permutation:1 interesting:1 lecture:1 proportional:1 e1003915:1 proven:1 facing:1 versus:3 generation:1 generator:10 age:2 validation:1 shelhamer:1 degree:2 exciting:1 systematically:1 classifying:1 share:3 row:4 poline:2 supported:1 last:3 free:1 dis:1 lier:3 bias:1 understand:2 institute:1 explaining:1 face:34 taking:2 leaky:1 penny:1 van:16 slice:2 curve:1 distributed:2 dimension:1 neuroprosthetic:1 transition:1 donders:2 cortical:2 sensory:3 author:1 qualitatively:1 preprocessing:1 tanabe:1 voxel:5 transaction:1 reconstructed:2 nov:2 feat:1 umut:1 reveals:2 ioffe:1 assumed:1 spatio:1 continuous:1 latent:31 search:1 triplet:1 tailed:1 khosla:1 table:4 learn:2 nature:4 transfer:1 ca:1 thielen:1 du:2 investigated:3 complex:3 diag:1 da:1 did:1 apr:3 neurosci:1 rh:2 whole:1 s2:5 hyperparameters:2 repeated:1 fair:1 positively:1 xu:1 representative:1 attractiveness:7 differed:1 ny:1 tong:1 neuroimage:9 duchesnay:3 comprises:4 inverts:4 inferring:2 comput:2 hrf:1 perceptual:3 third:4 donahue:2 tang:1 rk:3 theorem:1 annu:1 british:2 discarding:1 specific:1 covariate:1 gating:1 supplemented:1 learnable:1 explored:1 experimented:1 mason:1 chun:1 abadie:1 evidence:1 deconvolution:2 workshop:2 adding:1 corr:11 importance:1 kr:1 nat:1 kx:1 photograph:1 coregistered:2 simply:1 visual:14 failed:2 partially:1 pretrained:6 ldis:1 chang:1 springer:1 gender:10 corresponds:1 radford:1 chance:1 extracted:1 ma:1 declaration:1 oct:3 goal:5 endeavor:1 formulated:2 identity:8 presentation:1 seibert:1 shared:2 retinotopy:3 change:1 content:3 included:1 specifically:1 except:3 corrected:3 cichy:1 parkhi:1 reducing:1 principal:3 total:1 discriminate:1 ya:1 player:1 exception:1 indicating:1 pedregosa:1 internal:1 people:1 scan:8 dorsal:1 hawk:1 tested:2 correlated:2 |
6,648 | 7,013 | Efficient Use of Limited-Memory Accelerators
for Linear Learning on Heterogeneous Systems
Celestine Dünner
IBM Research - Zurich
Switzerland
[email protected]
Thomas Parnell
IBM Research - Zurich
Switzerland
[email protected]
Martin Jaggi
EPFL
Switzerland
[email protected]
Abstract
We propose a generic algorithmic building block to accelerate training of machine
learning models on heterogeneous compute systems. Our scheme allows us to efficiently employ compute accelerators such as GPUs and FPGAs for the training
of large-scale machine learning models, when the training data exceeds their memory capacity. Also, it provides adaptivity to any system's memory hierarchy in
terms of size and processing speed. Our technique is built upon novel theoretical
insights regarding primal-dual coordinate methods, and uses duality gap information to dynamically decide which part of the data should be made available for
fast processing. To illustrate the power of our approach we demonstrate its performance for training of generalized linear models on a large-scale dataset exceeding
the memory size of a modern GPU, showing an order-of-magnitude speedup over
existing approaches.
1 Introduction
As modern compute systems rapidly increase in size, complexity and computational power, they
become less homogeneous. Today's systems exhibit strong heterogeneity at many levels: in terms
of compute parallelism, memory size and access bandwidth, as well as communication bandwidth
between compute nodes (e.g., computers, mobile phones, server racks, GPUs, FPGAs, storage nodes
etc.). This increasing heterogeneity of compute environments is posing new challenges for the
development of efficient distributed algorithms, that is, to optimally exploit individual compute
resources with very diverse characteristics without suffering from the I/O cost of exchanging data
between them.
In this paper, we focus on the task of training large scale machine learning models in such heterogeneous compute environments and propose a new generic algorithmic building block to efficiently distribute the workload between heterogeneous compute units. Assume two compute units, denoted A and B, which differ in compute power as well as memory capacity as illustrated in Figure 1. The computational power of unit A is smaller and its memory capacity is larger relative to its peer unit B (i.e., we assume that the training data fits into the memory of A, but not into B's). Hence, on the computationally more powerful unit B, only part of the data can be processed at any given time. The two units, A and B, are able to communicate with each other over some interface, however there is cost associated with doing so.

Figure 1: Compute units A, B with different memory size, bandwidth and compute power.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
This generic setup covers many essential elements of modern machine learning systems. A typical
example is that of accelerator units, such as GPUs or FPGAs, augmenting traditional computers
or servers. While such devices can offer a significant increase in computational power due to their
massively parallel architectures, their memory capacity is typically very limited. Another example
can be found in hierarchical memory systems where data in the higher level memory can be accessed
and hence processed faster than data in the (typically larger) lower level memory. Such memory
systems are spanning from, e.g., fast on-chip caches on one extreme to slower hard drives on the
other extreme.
The core question we address in this paper is the following: How can we efficiently distribute the
workload between heterogeneous units A and B in order to accelerate large scale learning?
The generic algorithmic building block we propose systematically splits the overall problem into two
workloads, a more data-intensive but less compute-intensive part for unit A and a more compute-intensive but less data-intensive part for B. These workloads are then executed in parallel, enabling
full utilization of both resources while keeping the amount of necessary communication between
the two units minimal. Such a generic algorithmic building block is useful much more widely than
just for training on two heterogeneous compute units ? it can serve as a component of larger training
algorithms or pipelines thereof. In a distributed training setting, our scheme allows each individual
node to locally benefit from its own accelerator, therefore speeding up the overall task on a cluster,
e.g., as part of [14] or another distributed algorithm. Orthogonal to such a horizontal application, our
scheme can also be used as a building block vertically integrated in a system, serving the efficiency
of several levels of the memory hierarchy of a given compute node.
Related Work. The most popular existing approach to deal with memory limitations is to process
data in batches. For example, for the special case of SVMs, [15] splits data samples into blocks
which are then loaded and processed sequentially (on B), in the setting of limited RAM and the
full data residing on disk. This approach enables contiguous chunks of data to be loaded which is
beneficial in terms of I/O overhead; it however treats samples uniformly. Later, in [2, 7] it is proposed
to selectively load and keep informative samples in memory in order to reduce disk access, but this
approach is specific to support vectors and is unable to theoretically quantify the possible speedup.
In this work, we propose a novel, theoretically-justified scheme to efficiently deal with memory
limitations in the heterogeneous two-unit setting illustrated in Figure 1. Our scheme can be applied
to a broad class of machine learning problems, including generalized linear models, empirical risk
minimization problems with a strongly convex regularizer, such as SVM, as well as sparse models,
such as Lasso. In contrast to the related line of research [15, 2, 7], our scheme is designed to take full
advantage of both compute resources A and B for training, by systematically splitting the workload
among A and B in order to adapt to their specific properties and to the available bandwidth between
them. At the heart of our approach lies a smart data selection scheme using coordinate-wise duality
gaps as selection criteria. Our theory will show that our selection scheme provably improves the
convergence rate of training overall, by explicitly quantifying the benefit over uniform sampling. In
contrast, existing work [2, 7] only showed that the linear convergence rate on SVMs is preserved
asymptotically, but not necessarily improved.
A different line of related research is steepest coordinate selection. It is known that steepest coordinate descent can converge much faster than uniform sampling [8] for single coordinate updates on smooth
objectives, however it typically does not perform well for general convex problems, such as those
with L1 regularization. In our work, we overcome this issue by using the generalized primal-dual
gaps [4] which do extend to L1 problems. Related to this notion, [3, 9, 11] have explored the use
of similar information as an adaptive measure of importance, in order to adapt the sampling probabilities of coordinate descent. Both this line of research as well as steepest coordinate descent [8]
are still limited to single coordinate updates, and cannot be readily extended to arbitrary accuracy
updates on a larger subset of coordinates (performed per communication round) as required in our
heterogeneous setting.
Contributions. The main contributions of this work are summarized as follows:
• We analyze the per-iteration-improvement of primal-dual block coordinate descent and how it
depends on the selection of the active coordinate block at that iteration. We extend the convergence theory to arbitrary approximate updates on the coordinate subsets, and propose a novel
dynamic selection scheme for blocks of coordinates, which relies on coordinate-wise duality
gaps, and we precisely quantify the speedup of the convergence rate over uniform sampling.
• Our theoretical findings result in a scheme for learning in heterogeneous compute environments
which is easy to use, theoretically justified and versatile in that it can be adapted to given resource constraints, such as memory, computation and communication. Furthermore our scheme
enables parallel execution between, and also within, two heterogeneous compute units.
• For the example of joint training in a CPU plus GPU environment (which is very challenging for data-intensive workloads) we demonstrate a more than 10× speed-up over existing methods
for limited-memory training.
2 Learning Problem
For the scope of this work we focus on the training of convex generalized linear models of the form
    \min_{\alpha \in \mathbb{R}^n} \; O(\alpha) := f(A\alpha) + g(\alpha)    (1)

where f is a smooth function and g(α) = Σ_i g_i(α_i) is separable, α ∈ ℝ^n describes the parameter
vector and A = [a_1, a_2, ..., a_n] ∈ ℝ^{d×n} the data matrix with column vectors a_i ∈ ℝ^d. This setting
covers many prominent machine learning problems, including generalized linear models as used for
regression, classification and feature selection. To avoid confusion, it is important to distinguish the
two main application classes: On one hand, we cover empirical risk minimization (ERM) problems
with a strongly convex regularizer such as L2-regularized SVM, where α then is the dual variable
vector and f is the smooth regularizer conjugate, as in SDCA [13]. On the other hand, we also cover
the class of sparse models such as Lasso or ERM with a sparse regularizer, where f is the data-fit
term and g takes the role of the non-smooth regularizer, so α are the original primal parameters.
Duality Gap. Through the perspective of Fenchel-Rockafellar duality, one can, for any primal-dual solution pair (α, w), define the non-negative duality gap for (1) as

    \mathrm{gap}(\alpha; w) := f(A\alpha) + g(\alpha) + f^*(w) + g^*(-A^\top w)    (2)

where the functions f*, g* in (2) are defined as the convex conjugates¹ of their corresponding counterparts f, g [1]. Let us consider parameters w that are optimal relative to a given α, i.e.,

    w := w(\alpha) = \nabla f(A\alpha),    (3)

which implies f(Aα) + f*(w) = ⟨Aα, w⟩. In this special case, the duality gap (2) simplifies and becomes separable over the columns a_i of A and the corresponding parameter weights α_i given w. We will later exploit this property to quantify the suboptimality of individual coordinates:

    \mathrm{gap}(\alpha) = \sum_{i \in [n]} \mathrm{gap}_i(\alpha_i), \quad \text{where} \quad \mathrm{gap}_i(\alpha_i) := w^\top a_i \alpha_i + g_i(\alpha_i) + g_i^*(-a_i^\top w).    (4)
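The coordinate-wise gaps in (4) are cheap to evaluate once w is known. As an illustration, the following sketch computes them for one concrete instance of (1), ridge regression with f(u) = ½‖u − y‖² and g_i(a) = (λ/2)a², for which w = Aα − y and g_i*(u) = u²/(2λ); the function names are ours, not the paper's:

```python
import numpy as np

def coordinate_gaps(A, y, alpha, lam):
    """Per-coordinate duality gaps gap_i(alpha_i) of Eq. (4) for ridge regression."""
    w = A @ alpha - y                 # w(alpha) = grad f(A alpha), Eq. (3)
    s = A.T @ w                       # s_i = a_i^T w
    return s * alpha + 0.5 * lam * alpha ** 2 + s ** 2 / (2.0 * lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
y = rng.standard_normal(20)
lam = 0.5
alpha = rng.standard_normal(8)
gaps = coordinate_gaps(A, y, alpha, lam)   # every entry is non-negative
```

Each gap_i here equals (λ/2)(α_i + a_i^⊤w/λ)², so it is non-negative and vanishes exactly at the ridge optimum, matching the suboptimality interpretation above.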
Notation. For the remainder of the paper we use v[P] to denote a vector v with non-zero entries only for the coordinates i ∈ P ⊆ [n] = {1, ..., n}. Similarly we write A[P] to denote the matrix A composed only of the columns indexed by i ∈ P.
3 Approximate Block Coordinate Descent
The theory we present in this section serves to derive a theoretical framework for our heterogeneous
learning scheme presented in Section 4. Therefore, let us consider the generic block minimization
scheme described in Algorithm 1 to train generalized linear models of the form (1).
3.1 Algorithm Description
In every round t of Algorithm 1, a block P of m coordinates of α is selected according to an arbitrary selection rule. Then, an update is computed on this block of coordinates by optimizing

    \arg\min_{\Delta\alpha_{[P]} \in \mathbb{R}^n} O(\alpha + \Delta\alpha_{[P]})    (5)

where an arbitrary solver can be used to find this update. This update is not necessarily perfectly optimal but of a relative accuracy θ, in the following sense of approximation quality:

¹ For h : ℝ^d → ℝ the convex conjugate is defined as h*(v) := sup_{u ∈ ℝ^d} v^⊤u − h(u).
Algorithm 1 Approximate Block CD
 1: Initialize α^(0) := 0
 2: for t = 0, 1, 2, ... do
 3:   select a subset P with |P| = m
 4:   Δα[P] ← θ-approx. solution to (5)
 5:   α^(t+1) := α^(t) + Δα[P]
 6: end for

Algorithm 2 DUHL
 1: Initialize α^(0) := 0, z := 0
 2: for t = 0, 1, 2, ...
 3:   determine P according to (13)
 4:   refresh memory B to contain A[P]
 5:   on B do:
 6:     Δα[P] ← θ-approx. solution to (12)
 7:   in parallel on A do:
 8:     while B not finished
 9:       sample j ∈ [n]
10:       update z_j := gap_j(α_j^(t))
11:   α^(t+1) := α^(t) + Δα[P]
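To make the generic scheme concrete, here is a minimal NumPy sketch of Algorithm 1 for the ridge instance of (1) (f(u) = ½‖u − y‖², g_i(a) = (λ/2)a²), using uniformly random blocks and exact block solves, i.e., θ = 1. It is an illustrative instantiation under these assumptions, not the paper's implementation:

```python
import numpy as np

def block_cd_ridge(A, y, lam, m, rounds, seed=0):
    """Algorithm 1 for ridge regression: uniformly random blocks, exact solves (theta = 1)."""
    n = A.shape[1]
    alpha = np.zeros(n)
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        P = rng.choice(n, size=m, replace=False)   # step 3: select a block of m coordinates
        AP = A[:, P]
        resid = y - A @ alpha + AP @ alpha[P]      # target with block P's contribution removed
        # steps 4-5: exact minimizer of O over the block (restricted normal equations)
        alpha[P] = np.linalg.solve(AP.T @ AP + lam * np.eye(m), AP.T @ resid)
    return alpha

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
alpha_hat = block_cd_ridge(A, y, lam=0.5, m=3, rounds=300)
```

Because each block solve is exact, the iterates converge to the full ridge solution; any θ-approximate local solver could replace the `np.linalg.solve` call.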
Definition 1 (θ-Approximate Update). The block update Δα[P] is θ-approximate, for θ ∈ [0, 1], iff

    O(\alpha + \Delta\alpha_{[P]}) \le \theta \, O(\alpha + \Delta\alpha^\star_{[P]}) + (1 - \theta) \, O(\alpha)    (6)

where Δα*[P] ∈ arg min_{Δα[P] ∈ ℝ^n} O(α + Δα[P]).

3.2 Convergence Analysis
In order to derive a precise convergence rate for Algorithm 1 we build on the convergence analysis
of [4, 13]. We extend their analysis of stochastic coordinate descent in two ways: 1) to a block
coordinate scheme with approximate coordinate updates, and 2) to explicitly cover the importance
of each selected coordinate, as opposed to uniform sampling.
We define

    \rho_{t,P} := \frac{\frac{1}{m}\sum_{j \in P} \mathrm{gap}_j(\alpha_j^{(t)})}{\frac{1}{n}\sum_{j \in [n]} \mathrm{gap}_j(\alpha_j^{(t)})}    (7)

which quantifies how much the coordinates i ∈ P of α^(t) contribute to the global duality gap (2), thus giving a measure of suboptimality for these coordinates. In Algorithm 1 an arbitrary selection scheme (deterministic or randomized) can be applied and our theory will explain how the convergence of Algorithm 1 depends on the selection through the distribution of ρ_{t,P}. That is, for strongly convex functions g_i, we found that the per-step improvement in suboptimality is proportional to ρ_{t,P} of the specific coordinate block P being selected at that iteration t:

    \varepsilon^{(t+1)} \le (1 - \rho_{t,P} \, \theta c) \, \varepsilon^{(t)}    (8)

where ε^(t) := O(α^(t)) − O(α*) measures the suboptimality of α^(t) and c > 0 is a constant which will be specified in the following theorem. A similar dependency on ρ_{t,P} can also be shown for non-strongly convex functions g_i, leading to our two main convergence results for Algorithm 1:
Theorem 1. For Algorithm 1 running on (1) where f is L-smooth and g_i is μ-strongly convex with μ > 0 for all i ∈ [n], it holds that

    \mathbb{E}_P[\varepsilon^{(t)} \mid \alpha^{(0)}] \le \Big(1 - \eta_P \, \frac{m}{n} \, \frac{\mu}{\sigma L + \mu}\Big)^t \varepsilon^{(0)}    (9)

where σ := ‖A[P]‖²_op and η_P := min_t θ E_P[ρ_{t,P} | α^(t)]. Expectations are over the choice of P.
That is, for strongly convex gi , Algorithm 1 has a linear convergence rate. This was shown before
in [13, 4] for the special case of exact coordinate updates. In strong contrast to earlier coordinate
descent analyses which build on random uniform sampling, our theory explicitly quantifies the impact of the sampling scheme on the convergence through ρ_{t,P}. This allows one to benefit from smart
selection and provably improve the convergence rate by taking advantage of the inhomogeneity of
the duality gaps. The same holds for non-strongly convex functions gi :
Theorem 2. For Algorithm 1 running on (1) where f is L-smooth and g_i has B-bounded support for all i ∈ [n], it holds that

    \mathbb{E}_P[\varepsilon^{(t)} \mid \alpha^{(0)}] \le \frac{1}{\eta_P} \, \frac{2\gamma n^2}{m} \, \frac{1}{2n + t - t_0}    (10)

with γ := 2LB²σ where σ := ‖A[P]‖²_op and t ≥ t_0 = max(0, (n/m) log(2ε^(0)m/(γn))), where η_P := min_t θ E_P[ρ_{t,P} | α^(t)]. Expectations are over the choice of P.
Remark 1. Note that for uniform selection, our proven convergence rates for Algorithm 1 recover
classical primal-dual coordinate descent [4, 13] as a special case, where in every iteration a single
coordinate is selected and each update is solved exactly, i.e., θ = 1. In this case ρ_{t,P} measures the contribution of a single coordinate to the duality gap. For uniform sampling, E_P[ρ_{t,P} | α^(t)] = 1 and hence η_P = 1 which recovers [4, Theorems 8 and 9].
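The per-round contraction (8) can also be probed numerically. The sketch below runs exact block solves (θ = 1) on the ridge instance of (1), for which L = 1 and μ = λ, and checks each round against the contraction factor 1 − ρ_{t,P}·(m/n)·μ/(σL + μ) suggested by Theorem 1. This is a self-contained sanity check under these assumptions, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, m, lam = 40, 16, 4, 0.5
A = rng.standard_normal((d, n))
y = rng.standard_normal(d)
alpha_star = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def obj(a):
    return 0.5 * np.sum((A @ a - y) ** 2) + 0.5 * lam * np.sum(a ** 2)

def gaps(a):
    # coordinate-wise duality gaps (Eq. (4)) for the ridge instance
    w = A @ a - y
    s = A.T @ w
    return s * a + 0.5 * lam * a ** 2 + s ** 2 / (2.0 * lam)

alpha = rng.standard_normal(n)
for _ in range(30):
    P = rng.choice(n, size=m, replace=False)
    g = gaps(alpha)
    rho = g[P].mean() / g.mean()                 # rho_{t,P}, Eq. (7)
    sigma = np.linalg.norm(A[:, P], 2) ** 2      # sigma = ||A_[P]||_op^2
    bound = 1.0 - rho * (m / n) * lam / (sigma + lam)   # L = 1, mu = lam, theta = 1
    eps_before = obj(alpha) - obj(alpha_star)
    AP = A[:, P]
    resid = y - A @ alpha + AP @ alpha[P]
    alpha[P] = np.linalg.solve(AP.T @ AP + lam * np.eye(m), AP.T @ resid)
    eps_after = obj(alpha) - obj(alpha_star)
    assert eps_after <= bound * eps_before + 1e-9   # per-round contraction of Eq. (8)
```

In practice the exact block solve decreases the suboptimality far more than the worst-case factor, which is what makes the per-step bound comfortably hold.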
3.3 Gap-Selection Scheme
The convergence results of Theorems 1 and 2 suggest that the optimal rule for selecting the block
of coordinates P in step 3 of Algorithm 1, leading to the largest improvement in that step, is the
following:
    P := \arg\max_{P \subseteq [n] : |P| = m} \; \sum_{j \in P} \mathrm{gap}_j\big(\alpha_j^{(t)}\big)    (11)
This scheme maximizes ρ_{t,P} at every iterate. Furthermore, the selection scheme (11) guarantees ρ_{t,P} ≥ 1 which quantifies the relative gain over random uniform sampling. In contrast to existing importance sampling schemes [16, 12, 5] which assign static probabilities to individual coordinates, our selection scheme (11) is dynamic and adapts to the current state α^(t) of the algorithm, similar to that used in [9, 11] in the standard non-heterogeneous setting.
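In code, the selection rule (11) is just a top-m partial sort of the coordinate-wise gaps, and ρ_{t,P} from (7) is the mean gap on the selected block relative to the overall mean. A small illustrative sketch, reusing the ridge gaps from before (names are ours):

```python
import numpy as np

def gaps_ridge(A, y, alpha, lam):
    # coordinate-wise duality gaps (Eq. (4)) for ridge regression, w = A alpha - y
    w = A @ alpha - y
    s = A.T @ w
    return s * alpha + 0.5 * lam * alpha ** 2 + s ** 2 / (2.0 * lam)

def select_top_gap(gaps, m):
    """The block P of Eq. (11): the m coordinates with the largest gaps."""
    return np.argsort(gaps)[-m:]

def rho(gaps, P):
    """rho_{t,P} of Eq. (7): mean gap on P relative to the overall mean gap."""
    return gaps[P].mean() / gaps.mean()

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
y = rng.standard_normal(30)
g = gaps_ridge(A, y, rng.standard_normal(12), lam=0.5)
P = select_top_gap(g, m=3)        # gap-based selection always yields rho >= 1
```

Since the mean over the m largest non-negative gaps can never fall below the overall mean, this selection guarantees ρ_{t,P} ≥ 1, whereas uniform sampling only achieves ρ_{t,P} = 1 in expectation.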
4 Heterogeneous Training
In this section we build on the theoretical insight of the previous section to tackle the main objective
of this work: How can we efficiently distribute the workload between two heterogeneous compute
units A and B to train a large-scale machine learning problem where A and B fulfill the following
two assumptions:
Assumption 1 (Difference in Memory Capacity). Compute unit A can fit the whole dataset in its
memory and compute unit B can only fit a subset of the data. Hence, B only has access to A[P] , a
subset P of m columns of A, where m is determined by the memory size of B.
Assumption 2 (Difference in Computational Power). Compute unit B can access and process data
faster than compute unit A.
4.1 DUHL: A Duality Gap-Based Heterogeneous Learning Scheme
We propose a duality gap-based heterogeneous learning scheme, henceforth referred to as DUHL for short. DUHL is designed for efficient training on heterogeneous compute resources as described above. The core idea of DUHL is to identify a block P of coordinates which are most relevant to
improving the model at the current stage of the algorithm, and have the corresponding data columns,
A[P] , residing locally in the memory of B. Compute unit B can then exploit its superior compute
power by using an appropriate solver to locally find a block coordinate update Δα[P]. At the same time, compute unit A is assigned the task of updating the block P of important coordinates as the algorithm proceeds and the iterates change. Through this split of workloads DUHL enables full
utilization of both compute units A and B. Our scheme, summarized in Algorithm 2, fits the theoretical framework established in the previous section and can be viewed as an instance of Algorithm 1,
implementing a time-delayed version of the duality gap-based selection scheme (11).
Local Subproblem. In the heterogeneous setting compute unit B only has access to its local data A[P] and some current state v := Aα ∈ ℝ^d in order to compute a block update Δα[P] in Step 4 of Algorithm 1. While for quadratic functions f this information is sufficient to optimize (5), for non-quadratic functions f we consider the following modified local optimization problem instead:

    \arg\min_{\Delta\alpha_{[P]} \in \mathbb{R}^n} f(v) + \langle \nabla f(v), A\Delta\alpha_{[P]} \rangle + \frac{L}{2} \| A\Delta\alpha_{[P]} \|_2^2 + \sum_{i \in P} g_i\big((\alpha + \Delta\alpha_{[P]})_i\big).    (12)
Figure 2: Illustration of one round of DUHL as described in Algorithm 2.
It can be shown that the convergence guarantees of Theorems 1 and 2 similarly hold if the block
coordinate update in Step 4 of Algorithm 1 is computed on (12) instead of (5) (see Appendix C for
more details).
A Time-Delayed Gap Measure. Motivated by our theoretical findings, we use the duality gap as a
measure of importance for selecting which coordinates unit B is working on. However, a scheme as
suggested in (11) is not suitable for our purpose since it requires knowledge of the duality gaps (4)
for every coordinate i at a given iterate α^(t). For our scheme this would imply a computationally expensive selection step at the beginning of every round which has to be performed in sequence to the update step. To overcome this and enable parallel execution of the two workloads on A and B, we propose to introduce a gap memory. This is an n-dimensional vector z where z_i measures the importance of coordinate α_i. We have z_i := gap(α_i^{(t')}) where t' ∈ [0, t] and the different elements of z are allowed to be based on different, possibly stale iterates α^{(t')}. Thus, the entries of z can be continuously updated during the course of the algorithm. Then, at the beginning of every round the new block P is chosen based on the current state of z as follows:

    P := \arg\max_{P \subseteq [n] : |P| = m} \; \sum_{j \in P} z_j    (13)
In DUHL, keeping z up to date is the job of compute unit A. Hence, while B is computing a block
coordinate update Δα[P], A updates z by randomly sampling from the entire training data. Then,
as soon as B is done, the current state of z is used to determine P for the next round and data
columns on B are replaced if necessary. The parallel execution of the two workloads during a single
round of DUHL is illustrated in Figure 2. Note, that the freshness of the gap-memory z depends
on the relative compute power of A versus B, as well as θ which controls the amount of time spent
computing on unit B in every round.
In Section 5.2 we will experimentally investigate the effect of staleness of the values z_i on the
convergence behavior of our scheme.
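The interplay of the two workloads can be mimicked in a single process: unit B's work is represented by an exact block solve on the selected columns, and unit A's bookkeeping by refreshing a few randomly chosen, possibly stale, entries of the gap memory z per round. The sketch below does this for ridge regression; the schedule, names, and the shortcut of marking the solved block as fresh are our illustrative assumptions, not the actual CPU/GPU implementation:

```python
import numpy as np

def duhl_ridge(A, y, lam, m, rounds, stale_updates_per_round, seed=0):
    """Single-process simulation of Algorithm 2 (DUHL) for ridge regression."""
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)

    def gaps(idx):
        # coordinate-wise duality gaps (Eq. (4)) at the current alpha
        w = A @ alpha - y
        s = A[:, idx].T @ w
        a = alpha[idx]
        return s * a + 0.5 * lam * a ** 2 + s ** 2 / (2.0 * lam)

    z = gaps(np.arange(n))                       # initial pass over the data on A
    for _ in range(rounds):
        P = np.argsort(z)[-m:]                   # step 3: select block by gap memory, Eq. (13)
        AP = A[:, P]
        resid = y - A @ alpha + AP @ alpha[P]
        # step 6 on B: an exact block solve stands in for the local solver of (12)
        alpha[P] = np.linalg.solve(AP.T @ AP + lam * np.eye(m), AP.T @ resid)
        # steps 8-10 on A: refresh a few (possibly stale) gap entries ...
        J = rng.choice(n, size=stale_updates_per_round, replace=False)
        z[J] = gaps(J)
        z[P] = gaps(P)                           # ... and mark the solved block as fresh
    return alpha

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 12))
y = rng.standard_normal(40)
alpha_hat = duhl_ridge(A, y, lam=0.5, m=4, rounds=300, stale_updates_per_round=4)
```

Varying `stale_updates_per_round` in this toy model reproduces the qualitative staleness trade-off studied below: fewer refreshes per round mean a staler z and slower detection of important coordinates.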
5 Experimental Results
For our experiments we have implemented DUHL for the particular use-case where A corresponds to a CPU with attached RAM and B corresponds to a GPU; A and B communicate over the PCIe
bus. We use an 8-core Intel Xeon E5 x86 CPU with 64GB of RAM which is connected over PCIe
Gen3 to an NVIDIA Quadro M4000 GPU which has 8GB of RAM. GPUs have recently experienced widespread adoption in machine learning systems and thus this hardware scenario is timely and
highly relevant. In such a setting we wish to apply DUHL to efficiently populate the GPU memory
and thereby making this part of the data available for fast processing.
GPU solver. In order to benefit from the enormous parallelism offered by GPUs and fulfill Assumption 2, we need a local solver capable of exploiting the power of the GPU. Therefore, we
have chosen to implement the twice parallel, asynchronous version of stochastic coordinate descent
Figure 3: Validation of faster convergence: (a) theoretical quantity η_{t,P} (orange), versus the practically observed speedup (green), both relative to the random scheme baseline, (b) convergence of gap selection compared to random selection.

Figure 4: Effect of stale entries in the gap memory of DUHL: (a) number of rounds needed to reach suboptimality 10⁻⁴ for different update frequencies compared to o-DUHL, (b) the number of data columns that are replaced per round for update frequency of 5%.
(TPA-SCD) that has been proposed in [10] for solving ridge regression. In this work we have generalized the implementation further so that it can be applied in a similar manner to solve the Lasso,
as well as the SVM problem. For more details about the algorithm and how to generalize it we refer
the reader to Appendix D.
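For reference, the single-threaded coordinate update that TPA-SCD parallelizes has a simple closed form in the ridge case: with residual r = y − Aα, the exact minimizer along coordinate i is δ = (a_i^⊤r − λα_i)/(‖a_i‖² + λ). The sequential sketch below illustrates this building block; it is our single-threaded analogue, not the asynchronous GPU kernel of [10]:

```python
import numpy as np

def scd_ridge(A, y, lam, epochs, seed=0):
    """Sequential stochastic coordinate descent for ridge regression."""
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    r = y.copy()                         # residual r = y - A alpha, kept up to date
    sq = np.sum(A ** 2, axis=0)          # precomputed column norms ||a_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            delta = (A[:, i] @ r - lam * alpha[i]) / (sq[i] + lam)
            alpha[i] += delta            # exact minimizer along coordinate i
            r -= delta * A[:, i]         # maintain the residual incrementally
    return alpha

rng = np.random.default_rng(4)
A = rng.standard_normal((25, 9))
y = rng.standard_normal(25)
alpha_hat = scd_ridge(A, y, lam=0.5, epochs=100)
```

Maintaining the residual incrementally keeps each coordinate update at O(d) cost, which is the property that makes massively parallel, asynchronous variants of this update attractive on a GPU.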
5.1 Algorithm Behavior
Firstly, we will use the publicly available epsilon dataset from the LIBSVM website (a fully dense dataset with 400,000 samples and 2,000 features) to study the convergence behavior of our scheme. For the experiments in this section we assume that the GPU fits 25% of the training data, i.e., m = n/4, and show results for training the sparse Lasso as well as the ridge regression model. For the Lasso case we have chosen the regularizer to obtain a support size of ≈ 12% and we apply the coordinate-wise Lipschitzing trick [4] to the L1-regularizer in order to allow the computation of the duality
gaps. For computational details we refer the reader to Appendix E.
Validation of Faster Convergence. From our theory in Section 3.2 we expect that during any
given round t of Algorithm 1, the relative gain in convergence rate of one sampling scheme over
the other should be quantified by the ratio of the corresponding values of η_{t,P} := θρ_{t,P} (for the
respective block of coordinates processed in this round). To verify this, we trained a ridge regression
model on the epsilon dataset implementing a) the gap-based selection scheme, (11), and b) random
selection, fixing θ for both schemes. Then, in every round t of our experiment, we record the value of η_{t,P} (with ρ_{t,P} as defined in (7)) and measure the relative gain in convergence rate of the gap-based scheme
over the random scheme. In Figure 3(a) we plot the effective speedup of our scheme, and observe
that this speedup almost perfectly matches the improvement predicted by our theory as measured
by η_{t,P}; we observe a relative measurement error of 0.42. Both speedup numbers are calculated relative to plain random selection. In Figure 3(b) we see that the gap-based selection can achieve a remarkable 10× improvement in convergence over the random reference scheme. When running on
sparse problems instead of ridge regression, we have observed η_{t,P} of the oracle scheme converging to n/m within only a few iterations if the support of the problem is smaller than m and fits on the
GPU.
Effect of Gap-Approximation. In this section we study the effect of using stale, inconsistent gap-memory entries for selection on the convergence of DUHL. While the freshness of the memory entries is, in reality, determined by the relative compute power of unit B over unit A and the relative accuracy θ, in this experiment we artificially vary the number of gap updates performed during each round while keeping θ fixed. We train the Lasso model and show, in Figure 4(a), the number of
rounds needed to reach a suboptimality of 10⁻⁴, as a function of the number of gap entries updated per round. As a reference we show o-DUHL, which has access to an oracle providing the true duality
gaps. We observe that our scheme is quite robust to stale gap values and can achieve performance
within a factor of two over the oracle scheme up to an average delay of 20 iterations. As the update
frequency decreases we observed that the convergence slows down in the initial rounds because the
algorithm needs more rounds until the active set of the sparse problem is correctly detected.
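The gap-memory mechanism described above can be sketched in a few lines. This is a minimal illustration only: the helper names, the uniform refresh policy, and the fixed `true_gaps` array are our assumptions, not the actual DUHL implementation.

```python
import numpy as np

def select_block(gap_memory, m):
    # pick the m coordinates with the largest stored duality-gap values
    return np.argsort(gap_memory)[-m:]

def run_round(gap_memory, true_gaps, m, n_updates, rng):
    # one round: choose a block from the (possibly stale) gap memory, then
    # refresh only n_updates randomly chosen entries, mimicking a slower
    # unit that cannot recompute every gap in every round
    block = select_block(gap_memory, m)
    refresh = rng.choice(len(gap_memory), size=n_updates, replace=False)
    gap_memory[refresh] = true_gaps[refresh]  # all other entries stay stale
    return block

rng = np.random.default_rng(0)
n, m = 12, 4
gap_memory = np.zeros(n)        # no gap information yet
true_gaps = rng.random(n)       # stand-in for freshly computed duality gaps
block = run_round(gap_memory, true_gaps, m, n_updates=6, rng=rng)
print(len(block))  # 4
```

In the real system the refreshed entries would come from the gap computations of the slower unit running in parallel; the fixed `true_gaps` array above merely stands in for those values.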
[Figure 5: Performance results of DUHL on the 30GB ImageNet dataset. I/O cost (top) and convergence behavior (bottom) for (d) Lasso, (e) SVM and (f) ridge regression.]
Reduced I/O operations. The efficiency of our scheme regarding I/O operations is demonstrated
in Figure 4(b), where we plot the number of data columns that are replaced on B in every round
of Algorithm 2. Here the Lasso model is trained assuming a gap update frequency of 5%. We
observe that the number of required I/O operations of our scheme is decreasing over the course of
the algorithm. When increasing the freshness of the gap memory entries we could see the number
of swaps go to zero faster.
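The I/O accounting used in this experiment reduces to a set difference between the columns already resident on the device and the newly selected ones; a minimal sketch (the function name is ours, not the paper's):

```python
def count_swaps(on_device, newly_selected):
    # columns that must be copied to the device: selected but not resident
    return len(set(newly_selected) - set(on_device))

# as the active set stabilizes, consecutive selections overlap more and the
# per-round I/O cost shrinks
print(count_swaps([0, 1, 2, 3], [2, 3, 4, 5]))  # 2
print(count_swaps([2, 3, 4, 5], [2, 3, 4, 5]))  # 0
```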
5.2 Reference Schemes
In the following we compare the performance of our scheme against four reference schemes. We
compare against the most widely-used scheme for using a GPU to accelerate training when the data
does not fit into the memory of the GPU, that is the sequential block selection scheme presented
in [15]. Here the data columns are split into blocks of size m which are sequentially put on the GPU
and operated on (the data is efficiently copied to the GPU as a contiguous memory block).
We also compare against importance sampling as presented in [16], which we refer to as IS. Since
probabilities assigned to individual data columns are static we cannot use them as importance measures in a deterministic selection scheme. Therefore, in order to apply importance sampling in the
heterogeneous setting, we non-uniformly sample m data-columns to reside inside the GPU memory
in every round of Algorithm 2 and have the CPU determine the new set in parallel. As we will see,
data column norms often come with only small variance, in particular for dense datasets. Therefore,
importance sampling often fails to give a significant gain over uniformly random selection.
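For concreteness, such static importance sampling can be realized with squared column norms, one common choice; this is a hedged sketch, and the exact probabilities used in [16] may differ:

```python
import numpy as np

def importance_sample_columns(A, m, rng):
    # sample m distinct column indices with probability proportional to the
    # squared column norms -- one common static importance measure
    p = np.linalg.norm(A, axis=0) ** 2
    p = p / p.sum()
    return rng.choice(A.shape[1], size=m, replace=False, p=p)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))    # dense data: norms vary little,
cols = importance_sample_columns(A, m=4, rng=rng)  # so p is near-uniform
print(sorted(int(c) for c in cols))
```

For dense data the column norms are nearly equal, so the resulting distribution is close to uniform, which is exactly why the gain over random selection is small.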
Additionally, we compare against a single-threaded CPU implementation of a stochastic coordinate
descent solver to demonstrate that with our scheme, the use of a GPU in such a setting indeed yields a
significant speedup over a basic CPU implementation despite the high I/O cost of repeatedly copying
data on and off the GPU memory. To the best of our knowledge, we are the first to demonstrate this.
For all competing schemes, we use TPA-SCD as the solver to efficiently compute the block update Δα[P] on the GPU. The accuracy θ of the block update computed in every round is controlled by the number of randomized passes of TPA-SCD through the coordinates of the selected block P. For
a fair comparison we optimize this parameter for the individual schemes.
5.3 Performance Analysis of DUHL
For our large-scale experiments we use an extended version of the Kaggle Dogs vs. Cats ImageNet
dataset as presented in [6], where we additionally double the number of samples, while using single
precision floating point numbers. The resulting dataset is fully dense and consists of 40,000 samples and 200,704 features, resulting in over 8 billion non-zero elements and a data size of 30GB. Since the memory capacity of our GPU is 8GB, we can put ≈ 25% of the data on the GPU. We will show results for training a sparse Lasso model, ridge regression as well as a linear L2-regularized SVM.
For Lasso we choose the regularization to achieve a support size of 12%, whereas for SVM the
regularizer was chosen through cross-validation. For all three tasks, we compare the performance
of DUHL to sequential block selection, random selection, and selection through importance sampling (IS), all on the GPU, as well as a single-threaded CPU implementation. In Figures 5(d) and 5(e) we demonstrate that for Lasso as well as SVM, DUHL converges 10× faster than any reference scheme. This gain is achieved by improved convergence (quantified through ρt,P) as well as through reduced I/O cost, as illustrated in the top plots of Figure 5, which show the number of data columns replaced per round. The results in Figure 5(f) show that the application of DUHL is not limited to sparse problems and SVMs. Even for ridge regression, DUHL significantly outperforms all the reference schemes considered in this study.
6 Conclusion
We have presented a novel theoretical analysis of block coordinate descent, highlighting how the
performance depends on the coordinate selection. These results prove that the contribution of individual coordinates to the overall duality gap is indicative of their relevance to the overall model
optimization. Using this measure we develop a generic scheme for efficient training in the presence
of high performance resources of limited memory capacity. We propose DUHL, an efficient gap-memory-based strategy to select which part of the data to make available for fast processing. On a large dataset which exceeds the capacity of a modern GPU, we demonstrate that our scheme outperforms existing sequential approaches by over 10× for Lasso and SVM models. Our results show
that the practical gain matches the improved convergence predicted by our theory for gap-based
sampling under the given memory and communication constraints, highlighting the versatility of the
approach.
References
[1] Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer New York, New York, NY, 2011.
[2] Kai-Wei Chang and Dan Roth. Selective block minimization for faster convergence of limited memory large-scale linear models. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 699-707, New York, USA, August 2011. ACM.
[3] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic Dual Coordinate Ascent with Adaptive Probabilities. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning, February 2015.
[4] Celestine Dünner, Simone Forte, Martin Takáč, and Martin Jaggi. Primal-Dual Rates and Certificates. In Proceedings of the 33rd International Conference on Machine Learning (ICML) - Volume 48, pages 783-792, 2016.
[5] Olivier Fercoq and Peter Richtárik. Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent. SIAM Review, 58(4):739-771, January 2016.
[6] Christina Heinze, Brian McWilliams, and Nicolai Meinshausen. DUAL-LOCO: Distributing Statistical Estimation Using Random Projections. In AISTATS - Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 875-883, 2016.
[7] Shin Matsushima, S. V. N. Vishwanathan, and Alex J. Smola. Linear support vector machines via dual cached loops. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 177-185, New York, USA, 2012. ACM Press.
[8] Julie Nutini, Mark Schmidt, Issam Laradji, Michael Friedlander, and Hoyt Koepke. Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning, pages 1632-1641, 2015.
[9] Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K. Dokania, and Simon Lacoste-Julien. Minding the gaps for block Frank-Wolfe optimization of structured SVMs. In Proceedings of the 33rd International Conference on Machine Learning (ICML) - Volume 48, pages 593-602. JMLR.org, 2016.
[10] Thomas Parnell, Celestine Dünner, Kubilay Atasu, Manolis Sifalakis, and Haris Pozidis. Large-Scale Stochastic Learning using GPUs. In Proceedings of the 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (IPDPSW), IEEE, 2017.
[11] Dmytro Perekrestenko, Volkan Cevher, and Martin Jaggi. Faster Coordinate Descent via Adaptive Importance Sampling. In AISTATS - Artificial Intelligence and Statistics, pages 869-877, April 2017.
[12] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling I: algorithms and complexity. Optimization Methods and Software, 31(5):829-857, April 2016.
[13] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res., 14(1):567-599, February 2013.
[14] Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, and Martin Jaggi. CoCoA: A General Framework for Communication-Efficient Distributed Optimization. arXiv, November 2016.
[15] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large Linear Classification When Data Cannot Fit in Memory. ACM Transactions on Knowledge Discovery from Data, 5(4):1-23, February 2012.
[16] Peilin Zhao and Tong Zhang. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning, pages 1-9, 2015.
Temporal Coherency based Criteria for Predicting
Video Frames using Deep Multi-stage Generative
Adversarial Networks
Prateep Bhattacharjee1 , Sukhendu Das2
Visualization and Perception Laboratory
Department of Computer Science and Engineering
Indian Institute of Technology Madras, Chennai, India
1
[email protected], 2 [email protected]
Abstract
Predicting the future from a sequence of video frames has been recently a sought
after yet challenging task in the field of computer vision and machine learning.
Although there have been efforts for tracking using motion trajectories and flow
features, the complex problem of generating unseen frames has not been studied
extensively. In this paper, we deal with this problem using convolutional models
within a multi-stage Generative Adversarial Networks (GAN) framework. The
proposed method uses two stages of GANs to generate crisp and clear set of
future frames. Although GANs have been used in the past for predicting the
future, none of the works consider the relation between subsequent frames in
the temporal dimension. Our main contribution lies in formulating two objective
functions based on the Normalized Cross Correlation (NCC) and the Pairwise
Contrastive Divergence (PCD) for solving this problem. This method, coupled
with the traditional L1 loss, has been experimented with three real-world video
datasets viz. Sports-1M, UCF-101 and the KITTI. Performance analysis reveals
superior results over the recent state-of-the-art methods.
1 Introduction
Video frame prediction has recently been a popular problem in computer vision as it caters to a wide
range of applications including self-driving cars, surveillance, robotics and in-painting. However, the
challenge lies in the fact that, real-world scenes tend to be complex, and predicting the future events
requires modeling of complicated internal representations of the ongoing events. Past approaches
on video frame prediction include the use of recurrent neural architectures [19], Long Short Term
Memory [8] networks [22] and action conditional deep networks [17]. Recently, the work of [14]
modeled the frame prediction problem in the framework of Generative Adversarial Networks (GAN).
Generative models, as introduced by Goodfellow et. al., [5] try to generate images from random noise
by simultaneously training a generator (G) and a discriminator network (D) in a process similar to a
zero-sum game. Mathieu et. al. [14] shows the effectiveness of this adversarial training in the domain
of frame prediction using a combination of two objective functions (along with the basic adversarial
loss) employed on a multi-scale generator network. This idea stems from the fact that the original
L2-loss tends to produce blurry frames. This was overcome by the use of Gradient Difference Loss
(GDL) [14], which showed significant improvement over the past approaches when compared using
similarity and sharpness measures. However, this approach, although producing satisfying results for
the first few predicted frames, tends to generate blurry results for predictions far away (∼6 frames) in the
future.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: The proposed multi-stage GAN framework. The stage-1 generator network produces a
low-resolution version of predicted frames which are then fed to the stage-2 generator. Discriminators
at both the stages predict 0 or 1 for each predicted frame to denote its origin: synthetic or original.
In this paper, we aim to get over this hurdle of blurry predictions by considering an additional
constraint between consecutive frames in the temporal dimension. We propose two objective functions:
(a) Normalized Cross-Correlation Loss (NCCL) and (b) Pairwise Contrastive Divergence Loss
(PCDL) for effectively capturing the inter-frame relationships in the GAN framework. NCCL
maximizes the cross-correlation between neighborhood patches from consecutive frames, whereas,
PCDL applies a penalty when subsequent generated frames are predicted wrongly by the discriminator
network (D), thereby separating them far apart in the feature space. Performance analysis over three
real world video datasets shows the effectiveness of the proposed loss functions in predicting future
frames of a video.
The rest of the paper is organized as follows: section 2 describes the multi-stage generative adversarial
architecture; sections 3 - 6 introduce the different loss functions employed: the adversarial loss (AL)
and most importantly NCCL and PCDL. We show the results of our experiments on Sports-1M [10],
UCF-101 [21] and KITTI [4] and compare them with state-of-the-art techniques in section 7. Finally,
we conclude our paper highlighting the key points and future direction of research in section 8.
2 Multi-stage Generative Adversarial Model
Generative Adversarial Networks (GAN) [5] are composed of two networks: (a) the Generator (G)
and (b) the Discriminator (D). The generator G tries to generate realistic images by learning to
model the true data distribution pdata and thereby trying to make the task of differentiating between
original and generated images by the discriminator difficult. The discriminator D, in the other hand,
is optimized to distinguish between the synthetic and the real images. In essence, this procedure of
alternate learning is similar to the process of two player min-max games [5]. Overall, the GANs
minimize the following objective function:
min_G max_D v(D, G) = E_{x∼p_data}[log(D(x))] + E_{z∼p_z}[log(1 − D(G(z)))]    (1)
where x is a real image from the true distribution p_data and z is a vector sampled from the distribution p_z, usually taken to be uniform or Gaussian. The adversarial loss employed in this paper is a variant of that
in equation 1, as the input to our network is a sequence of frames of a video, instead of a vector z.
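For intuition, the two expectations in equation 1 can be estimated by sample means over discriminator outputs; the following numpy sketch is purely illustrative and not part of the proposed model:

```python
import numpy as np

def gan_value(d_real, d_fake):
    # Monte-Carlo estimate of v(D, G): the two expectations are replaced by
    # sample means over discriminator outputs on real inputs (d_real) and on
    # generated inputs (d_fake); all outputs must lie strictly in (0, 1)
    d_real, d_fake = np.asarray(d_real), np.asarray(d_fake)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# a confident, correct discriminator pushes v(D, G) toward its maximum of 0
print(round(gan_value([0.9, 0.8], [0.1, 0.2]), 4))  # -0.3285
```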
As convolutions account only for short-range relationships, pooling layers are used to garner information from wider range. But, this process generates low resolution images. To overcome this,
Mathieu et. al. [14] uses a multi-scale generator network, equivalent to the reconstruction process
of a Laplacian pyramid [18], coupled with discriminator networks to produce high-quality output
frames of size 32 × 32. There are two shortcomings of this approach:
a. Generating image output at higher dimensions, viz. (128 × 128) or (256 × 256), requires
multiple use of upsampling operations applied on the output of the generators. In our
proposed model, this upsampling is handled by the generator networks itself implicitly
through the use of consecutive unpooling operations, thereby generating predicted frames at
much higher resolution in lesser number of scales.
b. As the generator network parameters are not learned with respect to any objective function
which captures the temporal relationship effectively, the output becomes blurry after ? 4
frames.
To overcome the first issue, we propose a multi-stage (2-stage) generative adversarial network
(MS-GAN).
2.1 Stage-1
Generating the output frame(s) directly often produces blurry outcomes. Instead, we simplify the
process by first generating crude, low-resolution version of the frame(s) to be predicted. The stage-1
generator (G1 ) consists of a series of convolutional layers coupled with unpooling layers [25] which
upsample the frames. We used ReLU non-linearity in all but the last layer, in which case, hyperbolic
tangent (tanh) was used following the scheme of [18]. The inputs to G1 are m consecutive frames of dimension W0 × H0, whereas the outputs are n predicted frames of size W1 × H1, where W1 = W0 × 2 and H1 = H0 × 2. These outputs, stacked with the upsampled version of the original input frames, produce the input of dimension (m + n) × W1 × H1 for the stage-1 discriminator (D1). D1 applies a chain of convolutional layers followed by multiple fully-connected layers to finally produce an output vector of dimension (m + n), consisting of 0's and 1's.
One of the key differences of our proposed GAN framework from the conventional one [5] is that the
discriminator network produces decision output for multiple frames, instead of a single 0/1 outcome.
This is exploited by one of the proposed objective functions, the PCDL, which is described later in
section 4.
2.2 Stage-2
The second stage network closely resembles the stage-1 architecture, differing only in the input and
output dimensions. The input to the stage-2 generator (G2 ) is formed by stacking the predicted
frames and the upsampled inputs of G1, thereby having dimension (m + n) × W1 × H1. The outputs of G2 are n predicted high-resolution frames of size W2 × H2, where W2 = W1 × 4 and H2 = H1 × 4. The stage-2 discriminator (D2) works in a similar fashion as D1, producing an output
vector of length (m + n).
Effectively, the multi-stage model can be represented by the following recursive equations:
        Ŷk = Gk(Ŷk−1, Xk−1)   for k ≥ 2
        Ŷk = Gk(Xk−1)         for k = 1        (2)
where Ŷk is the set of predicted frames and Xk are the input frames at the kth stage of the generator network Gk.
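The recursion above is easy to express directly. In the sketch below the generators and the 2× upsampling are toy stand-ins that only mimic the stage output dimensions (the real Gk are convolutional networks):

```python
import numpy as np

def upsample(frames):
    # nearest-neighbour 2x upsampling of every frame in the list
    return [np.kron(f, np.ones((2, 2))) for f in frames]

def multi_stage_predict(X0, generators):
    # stage 1 predicts from the input frames alone; every later stage
    # consumes the previous prediction plus the upsampled inputs
    Y = generators[0](X0)          # Y_1 = G_1(X_0)
    X = X0
    for G in generators[1:]:       # Y_k = G_k(Y_{k-1}, X_{k-1}) for k >= 2
        X = upsample(X)
        Y = G(Y, X)
    return Y

# toy stand-ins that only mimic the output sizes: G1 doubles the resolution
# (W1 = 2*W0) and G2 quadruples it again (W2 = 4*W1)
G1 = lambda X: upsample(X)
G2 = lambda Y, X: [np.kron(f, np.ones((4, 4))) for f in Y]
out = multi_stage_predict([np.zeros((8, 8))] * 4, [G1, G2])
print(out[0].shape)  # (64, 64)
```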
2.3 Training the multi-stage GAN
The training procedure of the multi-stage GAN model follows that of the original generative adversarial networks with minor variations. The training of the discriminator and the generator are described
as follows:
Training of the discriminator Considering the input to the discriminator (D) as X (series of
m frames) and the target output to be Y (series of n frames), D is trained to distinguish between
synthetic and original inputs by classifying (X, Y ) into class 1 and (X, G(X)) into class 0. Hence,
for each of the k stages, we train D with target ~1 (vector of 1's with dimension m) for (X, Y) and target ~0 (vector of 0's with dimension n) for (X, G(X)). The loss function for training D is:
L^D_adv = Σ_{k=1}^{Nstages} [ Lbce(Dk(Xk, Yk), ~1) + Lbce(Dk(Xk, Gk(Xk)), ~0) ]    (3)
where Lbce, the binary cross-entropy loss, is defined as:

Lbce(A, A′) = − Σ_{i=1}^{|A|} [ Ai log(A′i) + (1 − Ai) log(1 − A′i) ],   Ai ∈ {0, 1}, A′i ∈ [0, 1]    (4)
where A and A′ are the target and discriminator outputs, respectively.
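This binary cross-entropy is the standard summed form; a direct numpy transcription, assuming predictions strictly inside (0, 1) so the logarithms stay finite:

```python
import numpy as np

def bce(target, pred):
    # summed binary cross-entropy; target entries are 0/1 labels,
    # pred entries are discriminator outputs strictly inside (0, 1)
    target = np.asarray(target, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return -np.sum(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

# confident, correct predictions give a small loss: -2*log(0.9)
print(round(bce([1, 0], [0.9, 0.1]), 3))  # 0.211
```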
Training of the generator We perform an optimization step on the generator network (G), keeping
the weights of D fixed, by feeding a set of consecutive frames X sampled from the training data with
target Y (the set of ground-truth output frames), and minimizing the following adversarial loss:
L^G_adv(X) = Σ_{k=1}^{Nstages} Lbce(Dk(Xk, Gk(Xk)), ~1)    (5)
By minimizing the above two loss criteria (eqns. 3, 5), G makes the discriminator believe that,
the source of the generated frames is the input data space itself. Although this process of alternate optimization of D and G is a reasonably well designed formulation, in practice it produces an unstable system where G generates samples that consecutively move far away from the original input space, and in consequence D distinguishes them easily. To overcome this instability inherent in
the GAN principle and the issue of producing blurry frames defined in section 2, we formulate a pair
of objective criteria: (a) Normalized Cross Correlation Loss (NCCL) and (b) Pairwise Contrastive
Divergence Loss (PCDL), to be used along with the established adversarial loss (refer eqns. 3 and 5).
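Putting the two adversarial objectives together, the per-round losses can be computed from the discriminator's stage-wise output vectors alone. The following is a self-contained sketch with toy numbers, not the training code of the paper:

```python
import numpy as np

def bce(target, pred):
    # summed binary cross-entropy
    target, pred = np.asarray(target, float), np.asarray(pred, float)
    return -np.sum(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def adversarial_losses(d_on_real, d_on_fake):
    # d_on_real[k] / d_on_fake[k]: discriminator output vectors at stage k
    # for (X, Y) and (X, G(X)); D is trained toward targets 1/0,
    # while G is scored against target 1 on the same fake outputs
    L_D = sum(bce(np.ones_like(r), r) + bce(np.zeros_like(f), f)
              for r, f in zip(d_on_real, d_on_fake))
    L_G = sum(bce(np.ones_like(f), f) for f in d_on_fake)
    return L_D, L_G

# two stages; D is fairly confident here, so its loss is small while G's is large
L_D, L_G = adversarial_losses([[0.9, 0.8], [0.95]], [[0.2, 0.1], [0.05]])
print(L_D < L_G)  # True
```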
3 Normalized Cross-Correlation Loss (NCCL)
The main advantage of video over image data is the fact that, it offers a far richer space of data
distribution by adding the temporal dimension along with the spatial one. Convolutional Neural
Networks (CNN) can only capture short-range relationships, a small part of the vast available
information from the input video data, and that too only in the spatial domain. Although this can be somewhat alleviated by the use of 3D convolutions [9], doing so increases the number of learnable parameters immensely. Normalized cross-correlation has long been used in the field of video analytics [1, 2, 16, 13, 23] to model the spatial and temporal relationships present in the data.
Normalized cross correlation (NCC) measures the similarity of two image patches as a function of
the displacement of one relative to the other. This can be mathematically defined as:
NCC(f, g) = Σ_{x,y} (f(x, y) − μf)(g(x, y) − μg) / (σf σg)    (6)
where f(x, y) is a sub-image, g(x, y) is the template to be matched, μf, μg denote the means of the sub-image and the template respectively, and σf, σg denote the standard deviations of f and g respectively.
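A direct numpy transcription of this similarity measure for two equally sized patches (population standard deviations, as is conventional for NCC):

```python
import numpy as np

def ncc(f, g):
    # normalized cross-correlation of two equally sized patches
    f, g = np.asarray(f, float), np.asarray(g, float)
    return np.sum((f - f.mean()) * (g - g.mean())) / (f.std() * g.std())

patch = np.arange(9.0).reshape(3, 3)
print(round(ncc(patch, patch), 6))   # 9.0  (= patch.size for identical patches)
print(round(ncc(patch, -patch), 6))  # -9.0 (perfectly anti-correlated)
```

Identical patches score patch.size, and sign reflects the direction of correlation, which is why the loss later clips negative contributions to zero.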
In the domain of video frame(s) prediction, we incorporate the NCC by first extracting small non-overlapping square patches of size h × h (1 < h ≤ 4), denoted by a 3-tuple Pt{x, y, h}, where x and y are the co-ordinates of the top-left pixel of a particular patch, from the predicted frame at time t, and then calculating the cross-correlation score with the patch extracted from the ground truth frame at time (t − 1), represented by P̂t−1{x − 2, y − 2, h + 4}.
In simpler terms, we estimate the cross-correlation score between a small portion of the current
predicted frame and the local neighborhood of that in the previous ground-truth frame. We assume
that the motion features present in the entire scene (frame) can be effectively approximated by adjacent spatial blocks of lower resolution, using small local neighborhoods in the temporal dimension. This
stems from the fact that, unless the video contains significant jitter or unexpected random events like
Algorithm 1: Normalized cross-correlation score for estimating similarity between a set of predicted frame(s) and a set of ground-truth frame(s).
Input: Ground-truth frames (GT), Predicted frames (PRED)
Output: Cross-correlation score (ScoreNCC)
// h = height and width of an image patch
// H = height and width of the predicted frames
// t = current time
// T = number of frames predicted
Initialize: ScoreNCC = 0;
for t = 1 to T do
    for i = 0 to H, i ← i + h do
        for j = 0 to H, j ← j + h do
            Pt ← extract_patch(PREDt, i, j, h);
                /* extracts a patch of dimension h × h from the predicted frame at time t, starting from the top-left pixel index (i, j) */
            P̂t−1 ← extract_patch(GTt−1, i − 2, j − 2, h + 4);
                /* extracts a patch of dimension (h + 4) × (h + 4) from the ground-truth frame at time (t − 1), starting from the top-left pixel index (i − 2, j − 2) */
            μPt ← avg(Pt);
            μP̂t−1 ← avg(P̂t−1);
            σPt ← standard_deviation(Pt);
            σP̂t−1 ← standard_deviation(P̂t−1);
            ScoreNCC ← ScoreNCC + max(0, Σ_{x,y} (Pt(x, y) − μPt)(P̂t−1(x, y) − μP̂t−1) / (σPt σP̂t−1));
        end
    end
    ScoreNCC ← ScoreNCC / ⌊H/h⌋²;    // average over all the patches
end
ScoreNCC ← ScoreNCC / (T − 1);    // average over all the frames
scene change, the motion features remain smooth over time. The step-by-step process for finding the
cross-correlation score by matching local patches of predicted and ground truth frames is described
in algorithm 1.
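A direct NumPy transcription of Algorithm 1 might look as follows. This is our own sketch: `extract_patch` is inlined via slicing, the zero padding of the ground-truth frame near the borders is a simplifying assumption (the paper does not specify border handling), and the correlation is taken against the central h x h window of the (h + 4) x (h + 4) neighbourhood, where a full implementation would search over all displacements within it.

```python
import numpy as np

def ncc_score(gt, pred, h=4):
    """Average patch-wise NCC between predicted frames and the previous
    ground-truth frames, following Algorithm 1.

    gt, pred: arrays of shape (T, H, H) with H divisible by h.
    """
    T, H, _ = pred.shape
    gt_pad = np.pad(gt, ((0, 0), (2, 2), (2, 2)))  # keeps (i - 2, j - 2) valid
    score = 0.0
    for t in range(1, T):
        frame_score = 0.0
        for i in range(0, H, h):
            for j in range(0, H, h):
                p = pred[t, i:i + h, j:j + h]
                # (h + 4) x (h + 4) neighbourhood of gt[t - 1] around (i, j)
                q = gt_pad[t - 1, i:i + h + 4, j:j + h + 4]
                # compare against its central h x h window (sketch only)
                q = q[2:2 + h, 2:2 + h]
                denom = p.std() * q.std()
                if denom > 0:
                    corr = ((p - p.mean()) * (q - q.mean())).sum() / denom
                    frame_score += max(0.0, corr)
        score += frame_score / (H // h) ** 2  # average over all the patches
    return score / (T - 1)                    # average over all the frames

gt = np.random.rand(4, 16, 16)
print(ncc_score(gt, np.roll(gt, 1, axis=0)) > 0)  # pred[t] == gt[t-1]: high score
```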
The idea of calculating the NCC score is modeled into an objective function for the generator network
G, where it tries to maximize the score over a batch of inputs. In essence, this objective function
models the temporal data distribution by smoothing the local motion features generated by the
convolutional model. This loss function, L_{NCCL}, is defined as:

L_{NCCL}(Y, \hat{Y}) = -Score_{NCC}(Y, \hat{Y})    (7)

where Y and \hat{Y} are the ground truth and predicted frames, and Score_{NCC} is the average normalized cross-correlation score over all the frames, obtained using the method described in algorithm 1.
The generator tries to minimize L_{NCCL} along with the adversarial loss defined in section 2.
We also propose a variant of this objective function, termed the Smoothed Normalized Cross-Correlation Loss (SNCCL), where the patch-similarity logic of NCCL is extended by convolving with Gaussian filters to suppress transient (sudden) motion patterns. A detailed discussion of this algorithm is given in sec. A of the supplementary document.
4 Pairwise Contrastive Divergence Loss (PCDL)
As discussed in sec. 3, the proposed method captures motion features that vary slowly over time. The NCCL criterion aims to achieve this using local similarity measures. To complement this on a global scale, we use the idea of pairwise contrastive divergence over the input frames. The idea of exploiting this temporal coherence for learning motion features has been studied in the recent past [6, 7, 15].
By assuming that motion features vary slowly over time, we describe \hat{Y}_t and \hat{Y}_{t+1} as a temporal pair, where \hat{Y}_t and \hat{Y}_{t+1} are the predicted frames at time t and (t + 1) respectively, if the outputs of the discriminator network D for both these frames are 1. With this notation, we model the slowness principle of the motion features using an objective function as:

L_{PCDL}(\hat{Y}, \vec{p}) = \sum_{i=0}^{T-1} D_\theta(\hat{Y}_i, \hat{Y}_{i+1}, p_i \cdot p_{i+1})
                           = \sum_{i=0}^{T-1} \left[ p_i \cdot p_{i+1} \cdot d(\hat{Y}_i, \hat{Y}_{i+1}) + (1 - p_i \cdot p_{i+1}) \cdot \max(0, \delta - d(\hat{Y}_i, \hat{Y}_{i+1})) \right]    (8)

where T is the time-duration of the frames predicted, p_i is the output decision (p_i \in \{0, 1\}) of the discriminator, d(x, y) is a distance measure (L2 in this paper) and \delta is a positive margin. In simpler terms, Equation 8 minimizes the distance between frames that have been predicted correctly and encourages the distance in the negative case, up to a margin \delta.
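Equation 8 can be sketched as a hinge-style contrastive loss; the snippet below is our own illustration, using the L2 distance and a margin value as described in the text, with toy frame contents:

```python
import numpy as np

def pcdl(preds, p, delta=1.0):
    """Pairwise contrastive divergence loss of Eq. 8.

    preds: sequence of T predicted frames; p: T binary discriminator
    decisions. Consecutive frames both judged real (p_i = p_{i+1} = 1) are
    pulled together; otherwise they are pushed apart, up to the margin delta.
    """
    loss = 0.0
    for i in range(len(preds) - 1):
        d = np.linalg.norm(preds[i] - preds[i + 1])  # L2 distance
        pair = p[i] * p[i + 1]
        loss += pair * d + (1 - pair) * max(0.0, delta - d)
    return loss

frames = [np.zeros((2, 2)), np.zeros((2, 2)), np.ones((2, 2))]
print(pcdl(frames, [1, 1, 0]))  # 0.0: first pair identical, second pair past margin
print(pcdl(frames, [1, 1, 1]))  # 2.0: second pair judged real, so its distance counts
```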
5 Higher Order Pairwise Contrastive Divergence Loss
The Pairwise Contrastive Divergence Loss (PCDL) discussed in the previous section takes into
account (dis)similarities between two consecutive frames to bring them further (or closer) in the
spatio-temporal feature space. This idea can be extended for higher order situations involving three
or more consecutive frames. For n = 3, where n is the number of consecutive frames considered,
PCDL can be defined as:
L_{3-PCDL} = \sum_{i=0}^{T-2} D_\theta(|\hat{Y}_i - \hat{Y}_{i+1}|, |\hat{Y}_{i+1} - \hat{Y}_{i+2}|, p_{i,i+1,i+2})
           = \sum_{i=0}^{T-2} \left[ p_{i,i+1,i+2} \cdot d(|\hat{Y}_i - \hat{Y}_{i+1}|, |\hat{Y}_{i+1} - \hat{Y}_{i+2}|) + (1 - p_{i,i+1,i+2}) \cdot \max(0, \delta - d(|\hat{Y}_i - \hat{Y}_{i+1}|, |\hat{Y}_{i+1} - \hat{Y}_{i+2}|)) \right]    (9)

where p_{i,i+1,i+2} = 1 only if p_i, p_{i+1} and p_{i+2} are all simultaneously 1, i.e., the discriminator is very sure that the predicted frames are from the original data distribution. All the other symbols bear the standard representations defined in the paper.
This version of the objective function, in essence shrinks the distance between the predicted frames
occurring sequentially in a temporal neighborhood, thereby increasing their similarity and maintaining
the temporal coherency.
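The order-3 case differs from PCDL only in operating on absolute difference images of consecutive triples; a sketch under the same toy assumptions as the PCDL example:

```python
import numpy as np

def pcdl3(preds, p, delta=1.0):
    """Higher-order pairwise contrastive divergence loss of Eq. 9."""
    loss = 0.0
    for i in range(len(preds) - 2):
        a = np.abs(preds[i] - preds[i + 1])      # |Y_i - Y_{i+1}|
        b = np.abs(preds[i + 1] - preds[i + 2])  # |Y_{i+1} - Y_{i+2}|
        d = np.linalg.norm(a - b)
        triple = p[i] * p[i + 1] * p[i + 2]      # 1 only if all three are 1
        loss += triple * d + (1 - triple) * max(0.0, delta - d)
    return loss

# Frames changing at constant speed: the two difference images are equal,
# so a confident discriminator incurs zero loss.
frames = [np.zeros((2, 2)), np.ones((2, 2)), np.full((2, 2), 2.0)]
print(pcdl3(frames, [1, 1, 1]))  # 0.0
```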
6 Combined Loss
Finally, we combine the objective functions given in Eqs. 5-9 along with the general L1 loss with different weights as:

L_{Combined} = \lambda_{adv} L^G_{adv}(X) + \lambda_{L1} L_{L1}(X, Y) + \lambda_{NCCL} L_{NCCL}(Y, \hat{Y}) + \lambda_{PCDL} L_{PCDL}(\hat{Y}, \vec{p}) + \lambda_{3-PCDL} L_{3-PCDL}(\hat{Y}, \vec{p})    (10)

All the weights, viz. \lambda_{L1}, \lambda_{NCCL}, \lambda_{PCDL} and \lambda_{3-PCDL}, have been set to 0.25, while \lambda_{adv} equals 0.01. This overall loss is minimized during the training stage of the multi-stage GAN using the Adam optimizer [11].
We also evaluate our models by incorporating another loss function described in section A of the supplementary document, the Smoothed Normalized Cross-Correlation Loss (SNCCL). The weight for SNCCL, \lambda_{SNCCL}, equals 0.33 while \lambda_{3-PCDL} and \lambda_{PCDL} are kept at 0.16.
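The combination in Eq. 10 is just a weighted sum; the snippet below uses the weights stated in the text, with placeholder loss values purely for illustration:

```python
# Weighted combination of the generator objectives (Eq. 10). The weights are
# those given in the text; the individual loss values here are placeholders.
weights = {"adv": 0.01, "l1": 0.25, "nccl": 0.25, "pcdl": 0.25, "pcdl3": 0.25}
losses = {"adv": 0.8, "l1": 0.1, "nccl": -0.5, "pcdl": 0.3, "pcdl3": 0.2}
combined = sum(weights[k] * losses[k] for k in weights)
print(round(combined, 4))  # 0.033
```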
7 Experiments
Performance analysis of our proposed prediction model for video frame(s) has been carried out on video clips from the Sports-1M [10], UCF-101 [21] and KITTI [4] datasets. The input-output configuration used for training the system is as follows: input: 4 frames and output: 4 frames.
We compare our results with recent state-of-the-art methods using two popular metrics: (a) Peak
Signal to Noise Ratio (PSNR) and (b) Structural Similarity Index Measure (SSIM) [24].
7.1 Datasets
Sports-1M A large collection of sports videos collected from YouTube spread over 487 classes.
The main reason for choosing this dataset is the amount of movement in the frames. Being a collection
of sports videos, this has sufficient amount of motion present in most of the frames, making it an
efficient dataset for training the prediction model. Only this dataset has been used for training all
throughout our experimental studies.
UCF-101 This dataset contains 13320 annotated videos belonging to 101 classes, having 180 frames/video on average. The frames in these videos do not contain as much movement as Sports-1M, and hence this dataset is used only for testing purposes.
KITTI This consists of high-resolution video data from different road conditions. We have taken
raw data from two categories: (a) city and (b) road.
7.2 Architecture of the network
Table 1: Network architecture details; G and D represent the generator and discriminator networks respectively. U denotes an unpooling operation which upsamples an input by a factor of 2.

Network      | Number of feature maps            | Kernel sizes        | Fully connected
Stage-1 (G)  | 64, 128, 256U, 128, 64            | 5, 3, 3, 3, 5       | N/A
Stage-2 (G)  | 64, 128U, 256, 512U, 256, 128, 64 | 5, 5, 5, 5, 5, 5, 5 | N/A
Stage-1 (D)  | 64, 128, 256                      | 3, 5, 5             | 1024, 512
Stage-2 (D)  | 128, 256, 512, 256, 128           | 7, 5, 5, 5, 5       | 1024, 512
The architecture details for the generator (G) and discriminator (D) networks used for experimental
studies are shown in table 1. All the convolutional layers except the terminal one in both stages of G
are followed by ReLU non-linearity. The last layer is tied with tanh activation function. In both the
stages of G, we use unpooling layers to upsample the image into higher resolution in magnitude of 2
in both dimensions (height and width). The learning rate is set to 0.003 for G, which is gradually
decreased to 0.0004 over time. The discriminator (D) uses ReLU non-linearities and is trained with a
learning rate of 0.03. We use mini-batches of 8 clips for training the overall network.
7.3 Evaluation metric for prediction
Assessment of the quality of the predicted frames is done by two methods: (a) Peak Signal to Noise
Ratio (PSNR) and (b) Structural Similarity Index Measure (SSIM). PSNR measures the quality of
the reconstruction process through the calculation of Mean-squared error between the original and
the reconstructed signal on a logarithmic decibel scale [1]. SSIM is also an image similarity measure, where one of the images being compared is assumed to be of perfect quality [24].
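For reference, PSNR for images in [0, 1] follows the standard definition 10 log10(peak^2 / MSE); this snippet is our own illustration, not code from the paper:

```python
import numpy as np

def psnr(ref, pred, peak=1.0):
    """Peak Signal to Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(pred, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
print(psnr(ref, ref + 0.1))  # constant error 0.1 -> MSE 0.01 -> ~20.0 dB
```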
As the frames in videos are composed of foreground and background, and in most cases the background is static (not the case in the KITTI dataset, as its videos are taken from a camera mounted on a moving car), we extract random sequences of 32 \times 32 patches from the frames with significant motion. Calculation of motion is done using the optical flow method of Brox et al. [3].
7.4 Comparison
We compare the results on videos from UCF-101, using the model trained on the Sports-1M dataset.
Table 2 demonstrates the superiority of our method over the most recent work [14]. We followed a similar choice of test-set videos as in [14] to make a fair comparison. One of the impressive facts about our model is that it can produce acceptably good predictions even in the 4th frame, which is a significant result considering that [14] uses separate smaller multi-scale models for achieving this
Figure 2: Qualitative results of using the proposed framework for predicting frames in UCF-101, with the three rows representing (a) Ground-truth, (b) Adv + L1 and (c) Combined (section 6) respectively. 'T' denotes the time-step. Figures in insets show zoomed-in patches for better visibility of areas involving motion (best viewed in color).
feat. Also note that, even though the metrics for the first predicted frame do not differ by a large margin compared to the results from [14], for later frames the values decrease much more slowly for the models trained with the proposed objective functions (rows 8-10 of Table 2). The main reason for this phenomenon in our proposed method is the incorporation of temporal relations in the objective functions, rather than learning only in the spatial domain.
A similar trend was also found in the case of the KITTI dataset. We could not find any prior work in the literature reporting findings on the KITTI dataset, and hence compared only with several of our proposed models. In all the cases, the performance gain with the inclusion of NCCL and PCDL is evident.
Finally, we show the prediction results obtained on both UCF-101 and KITTI in figures 2 and 3. It is evident from the sub-figures that our proposed objective functions produce impressive quality frames, while the models trained with L1 loss alone tend to output blurry reconstructions. The supplementary document contains visual results (shown in figures C.1-C.2) obtained in the case of predicting frames far away from the current time-step (8 frames).
8 Conclusion
In this paper, we modified the Generative Adversarial Networks (GAN) framework with the use of unpooling operations and introduced two objective functions based on the normalized cross-correlation (NCCL) and the contrastive divergence estimate (PCDL), to design an efficient algorithm for video frame(s) prediction. Studies show significant improvement of the proposed methods over recently published works. Our proposed objective functions can be used with more complex networks involving 3D convolutions and recurrent neural networks. In the future, we aim to learn weights for the cross-correlation such that it focuses adaptively on areas involving varying amounts of motion.
Table 2: Comparison of performance for different methods using PSNR/SSIM scores for the UCF-101 and KITTI datasets. The first five rows report the results from [14]. (*) indicates models fine-tuned on patches of size 64 \times 64 [14]. (-) denotes unavailability of data. GDL stands for Gradient Difference Loss [14]. SNCCL is discussed in section A of the supplementary document. Best results in bold.

Methods                       | 1st frame UCF | 1st frame KITTI | 2nd frame UCF | 2nd frame KITTI | 4th frame UCF | 4th frame KITTI
L1                            | 28.7/0.88     | -               | 23.8/0.83     | -               | -             | -
GDL L1                        | 29.4/0.90     | -               | 24.9/0.84     | -               | -             | -
GDL L1*                       | 29.9/0.90     | -               | 26.4/0.87     | -               | -             | -
Adv + GDL fine-tuned*         | 32.0/0.92     | -               | 28.9/0.89     | -               | -             | -
Optical flow                  | 31.6/0.93     | -               | 28.2/0.90     | -               | -             | -
Next-flow [20]                | 31.9/-        | -               | -             | -               | -             | -
Deep Voxel Flow [12]          | 35.8/0.96     | -               | -             | -               | -             | -
Adv + NCCL + L1               | 35.4/0.94     | 37.1/0.91       | 33.9/0.92     | 35.4/0.90       | 28.7/0.75     | 27.8/0.75
Combined                      | 37.3/0.95     | 39.7/0.93       | 35.7/0.92     | 37.1/0.91       | 30.2/0.76     | 29.6/0.76
Combined + SNCCL              | 38.2/0.95     | 40.2/0.94       | 36.8/0.93     | 37.7/0.91       | 30.9/0.77     | 30.4/0.77
Combined + SNCCL (full frame) | 37.3/0.94     | 39.4/0.94       | 35.1/0.91     | 36.4/0.91       | 29.5/0.75     | 29.1/0.76
Figure 3: Qualitative results of using the proposed framework for predicting frames in the KITTI
Dataset, for (a) L1, (b) NCCL (section 3), (c) Combined (section 6) and (d) ground-truth (Best viewed
in color).
References
[1] A. C. Bovik. The Essential Guide to Video Processing. Academic Press, 2nd edition, 2009.
[2] K. Briechle and U. D. Hanebeck. Template matching using fast normalized cross correlation. In Aerospace/Defense Sensing, Simulation, and Controls, pages 95-102. International Society for Optics and Photonics, 2001.
[3] T. Brox and J. Malik. Large displacement optical flow: descriptor matching in variational motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(3):500-513, 2011.
[4] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR), 2013.
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672-2680, 2014.
[6] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised learning of spatiotemporally coherent metrics. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4086-4093, 2015.
[7] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 1735-1742, 2006.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[9] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(1):221-231, 2013.
[10] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Z. Liu, R. Yeh, X. Tang, Y. Liu, and A. Agarwala. Video frame synthesis using deep voxel flow. arXiv preprint arXiv:1702.02463, 2017.
[13] J. Luo and E. E. Konofagou. A fast normalized cross-correlation calculation method for motion estimation. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 57(6):1347-1357, 2010.
[14] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. International Conference on Learning Representations (ICLR), 2016.
[15] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 737-744. ACM, 2009.
[16] A. Nakhmani and A. Tannenbaum. A new distance measure based on generalized image normalized cross-correlation for robust video tracking and image recognition. Pattern Recognition Letters (PRL), 34(3):315-321, 2013.
[17] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems (NIPS), pages 2863-2871, 2015.
[18] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[19] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
[20] N. Sedaghat. Next-flow: Hybrid multi-tasking with next-frame prediction to boost optical-flow estimation in the wild. arXiv preprint arXiv:1612.03777, 2016.
[21] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[22] N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning (ICML), pages 843-852, 2015.
[23] A. Subramaniam, M. Chatterjee, and A. Mittal. Deep neural networks with inexact matching for person re-identification. In Advances in Neural Information Processing Systems (NIPS), pages 2667-2675, 2016.
[24] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (TIP), 13(4):600-612, 2004.
[25] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision (ECCV), pages 818-833. Springer, 2014.
Sobolev Training for Neural Networks
Wojciech Marian Czarnecki, Simon Osindero, Max Jaderberg
Grzegorz Swirszcz, and Razvan Pascanu
DeepMind, London, UK
{lejlot,osindero,jaderberg,swirszcz,razp}@google.com
Abstract
At the heart of deep learning we aim to use neural networks as function approximators ? training them to produce outputs from inputs in emulation of a ground
truth function or data creation process. In many cases we only have access to
input-output pairs from the ground truth, however it is becoming more common to
have access to derivatives of the target output with respect to the input ? for example when the ground truth function is itself a neural network such as in network
compression or distillation. Generally these target derivatives are not computed, or
are ignored. This paper introduces Sobolev Training for neural networks, which is
a method for incorporating these target derivatives in addition to the target values
while training. By optimising neural networks to not only approximate the function's outputs but also the function's derivatives, we encode additional information about the target function within the parameters of the neural network. Thereby
we can improve the quality of our predictors, as well as the data-efficiency and
generalization capabilities of our learned function approximation. We provide
theoretical justifications for such an approach as well as examples of empirical
evidence on three distinct domains: regression on classical optimisation datasets,
distilling policies of an agent playing Atari, and on large-scale applications of
synthetic gradients. In all three domains the use of Sobolev Training, employing
target derivatives in addition to target values, results in models with higher accuracy
and stronger generalisation.
1 Introduction
Deep Neural Networks (DNNs) are one of the main tools of modern machine learning. They are
consistently proven to be powerful function approximators, able to model a wide variety of functional forms: from image recognition [8, 24], through audio synthesis [27], to human-beating policies in the ancient game of GO [22]. In many applications the process of training a neural network
consists of receiving a dataset of input-output pairs from a ground truth function, and minimising
some loss with respect to the network?s parameters. This loss is usually designed to encourage
the network to produce the same output, for a given input, as that from the target ground truth
function. Many of the ground truth functions we care about in practice have an unknown analytic
form, e.g. because they are the result of a natural physical process, and therefore we only have the
observed input-output pairs for supervision. However, there are scenarios where we do know the
analytic form and so are able to compute the ground truth gradients (or higher order derivatives),
alternatively sometimes these quantities may be simply observable. A common example is when the
ground truth function is itself a neural network; for instance this is the case for distillation [9, 20],
compressing neural networks [7], and the prediction of synthetic gradients [12]. Additionally, if we
are dealing with an environment/data-generation process (vs. a pre-determined set of data points),
then even though we may be dealing with a black box we can still approximate derivatives using finite
differences. In this work, we consider how this additional information can be incorporated in the
learning process, and what advantages it can provide in terms of data efficiency and performance. We
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: a) Sobolev Training of order 2. Diamond nodes m and f indicate parameterised functions, where m is trained to approximate f. Green nodes receive supervision. Solid lines indicate connections through which the error signals from losses l, l1, and l2 are backpropagated to train m. b) Stochastic Sobolev Training of order 2. If f and m are multivariate functions, the gradients are Jacobian matrices. To avoid computing these high-dimensional objects, we can efficiently compute and fit their projections on a random vector v_j sampled from the unit sphere.
propose Sobolev Training (ST) for neural networks as a simple and efficient technique for leveraging
derivative information about the desired function in a way that can easily be incorporated into any
training pipeline using modern machine learning libraries.
The approach is inspired by the work of Hornik [10] which proved the universal approximation
theorems for neural networks in Sobolev spaces ? metric spaces where distances between functions
are defined both in terms of their differences in values and differences in values of their derivatives.
In particular, it was shown that a sigmoid network can not only approximate a function's value arbitrarily well, but that the network's derivatives with respect to its inputs can approximate the corresponding derivatives of the ground truth function arbitrarily well too. Sobolev Training exploits this property, and tries to match not only the output of the function being trained but also its derivatives.
There are several related works which have also exploited derivative information for function approximation. For instance Wu et al. [30] and antecedents propose a technique for Bayesian optimisation
with Gaussian Processes (GPs), where it was demonstrated that the use of information about gradients and Hessians can improve the predictive power of GPs. In previous work on neural networks,
derivatives of predictors have usually been used either to penalise model complexity (e.g. by pushing
Jacobian norm to 0 [19]), or to encode additional, hand crafted invariances to some transformations
(for instance, as in Tangentprop [23]), or estimated derivatives for dynamical systems [6] and very
recently to provide an additional learning signal during attention distillation [31]^1. Similar techniques have also been used in critic-based Reinforcement Learning (RL), where a critic's derivatives are trained to match its target's derivatives [29, 15, 5, 4, 26] using small, sigmoid-based models. Finally, Hyvarinen proposed Score Matching Networks [11], which are based on the somewhat surprising observation that one can model unknown derivatives of the function without actual access to its values; all that is needed is a sampling-based strategy and a specific penalty. However, such an estimator has a high variance [28], thus it is not really useful when true derivatives are given.
To the best of our knowledge and despite its simplicity, the proposal to directly match network
derivatives to the true derivatives of the target function has been minimally explored for deep
networks, especially modern ReLU-based models. In our method, we show that by using the additional knowledge of derivatives with Sobolev Training we are able to train better models (models which achieve lower approximation errors and generalise to test data better) and reduce the sample
complexity of learning. The contributions of our paper are therefore threefold: (1): We introduce
^1 Please refer to the Supplementary Materials, section 5, for details.
Sobolev Training, a new paradigm for training neural networks. (2): We look formally at the implications of matching derivatives, extending previous results of Hornik [10] and showing that modern architectures are well suited for such training regimes. (3): Empirical evidence demonstrating that Sobolev Training leads to improved performance and generalisation, particularly in low data regimes. Example domains are: regression on classical optimisation problems; policy distillation from RL agents trained on the Atari domain; and training deep, complex models using synthetic gradients, where we report the first successful attempt to train a large-scale ImageNet model using synthetic gradients.
2  Sobolev Training
We begin by introducing the idea of training using Sobolev spaces. When learning a function f, we may have access to not only the output values f(x_i) for training points x_i, but also the values of its j-th order derivatives with respect to the input, D_x^j f(x_i). In other words, instead of the typical training set consisting of pairs {(x_i, f(x_i))}_{i=1}^N, we have access to (K + 2)-tuples {(x_i, f(x_i), D_x^1 f(x_i), ..., D_x^K f(x_i))}_{i=1}^N. In this situation, the derivative information can easily be incorporated into training a neural network model of f by making the derivatives of the neural network match the ones given by f.
Considering a neural network model m parameterised with θ, one typically seeks to minimise the empirical error in relation to f according to some loss function ℓ:

    Σ_{i=1}^N ℓ( m(x_i|θ), f(x_i) ).

When learning in Sobolev spaces, this is replaced with:

    Σ_{i=1}^N [ ℓ( m(x_i|θ), f(x_i) ) + Σ_{j=1}^K ℓ_j( D_x^j m(x_i|θ), D_x^j f(x_i) ) ],        (1)

where the ℓ_j are loss functions measuring the error on j-th order derivatives. This causes the neural network to encode derivatives of the target function in its own derivatives. Such a model can still be trained using backpropagation and off-the-shelf optimisers.
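As an illustration, the first-order case of objective (1) (K = 1, squared-error losses) can be sketched with a tiny network whose input-derivative we write analytically; the architecture and target below are illustrative stand-ins, not the paper's experimental setup, and in practice an autodiff framework would compute dm/dx.

```python
import numpy as np

# Minimal sketch of the first-order Sobolev objective (Eq. 1 with K = 1) on a toy
# target f(x) = sin(x) with known derivative f'(x) = cos(x). A one-hidden-layer
# tanh network lets us write dm/dx analytically.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=16), np.zeros(16)   # hidden weights/biases (scalar input)
w2, b2 = rng.normal(size=16), 0.0            # output weights/bias

def model(x):                                # m(x|theta)
    return w2 @ np.tanh(W1 * x + b1) + b2

def model_dx(x):                             # dm/dx via the chain rule
    h = np.tanh(W1 * x + b1)
    return w2 @ ((1.0 - h**2) * W1)

f, df = np.sin, np.cos                       # target values and true derivatives
xs = np.linspace(-2.0, 2.0, 20)

value_loss = sum((model(x) - f(x))**2 for x in xs)
deriv_loss = sum((model_dx(x) - df(x))**2 for x in xs)
sobolev_loss = value_loss + deriv_loss       # Eq. (1): value term + derivative term
```

Minimising `sobolev_loss` instead of `value_loss` pushes the model to agree with the target in both values and slopes.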
A potential concern is that this optimisation might be expensive when either the output dimensionality
of f or the order K are high, however one can reduce this cost through stochastic approximations.
Specifically, if f is a multivariate function, instead of a vector gradient, one ends up with a full
Jacobian matrix which can be large. To avoid adding computational complexity to the training
process, one can use an efficient, stochastic version of Sobolev Training: instead of computing a full
Jacobian/Hessian, one just computes its projection onto a random vector (a direct application of a
known estimation trick [19]). In practice, this means that during training we have a random variable
v sampled uniformly from the unit sphere, and we match these random projections instead:
    Σ_{i=1}^N [ ℓ( m(x_i|θ), f(x_i) ) + Σ_{j=1}^K E_{v^j} ℓ_j( ⟨D_x^j m(x_i|θ), v^j⟩, ⟨D_x^j f(x_i), v^j⟩ ) ].        (2)
Figure 1 illustrates compute graphs for non-stochastic and stochastic Sobolev Training of order 2.
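The projection trick behind objective (2) can be checked numerically: for v uniform on the unit sphere in ℝᵈ, E_v‖Av‖² = ‖A‖²_F / d, so matching random Jacobian-vector products matches the full Jacobian in expectation. The matrices below are arbitrary stand-ins for a model and target Jacobian at one input.

```python
import numpy as np

# Sketch of the stochastic estimator behind Eq. (2). With v uniform on the unit
# sphere, E_v ||A v||^2 = ||A||_F^2 / d, so the expected projected loss recovers
# the full (Frobenius) Jacobian-matching loss up to a constant factor.
rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))                      # stand-in for Jm - Jf at one input

V = rng.normal(size=(200_000, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # rows uniform on the unit sphere
projected = np.sum((V @ A.T) ** 2, axis=1)       # ||A v||^2 per sampled v
mc_estimate = d * projected.mean()
exact = np.sum(A ** 2)                           # ||A||_F^2
```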
3  Theory and motivation
While in the previous section we defined Sobolev Training, it is not obvious that modeling the
derivatives of the target function f is beneficial to function approximation, or that optimising such
an objective is even feasible. In this section we motivate and explore these questions theoretically,
showing that the Sobolev Training objective is a well posed one, and that incorporating derivative
information has the potential to drastically reduce the sample complexity of learning.
Hornik showed [10] that neural networks with non-constant, bounded, continuous activation functions,
with continuous derivatives up to order K are universal approximators in the Sobolev spaces of
order K, thus showing that sigmoid-networks are indeed capable of approximating elements of these spaces arbitrarily well.

Figure 2: Left: From top: Example of the piece-wise linear function; Two (out of a continuum of) hypotheses consistent with 3 training points, showing that one needs two points to identify each linear segment; The only hypothesis consistent with 3 training points enriched with derivative information. Right: Logarithm of test error (MSE) for various optimisation benchmarks with varied training set size (20, 100 and 10000 points) sampled uniformly from the problem's domain.

However, nowadays we often use activation functions such as ReLU which
are neither bounded nor have continuous derivatives. The following theorem shows that for K = 1
we can use ReLU function (or a similar one, like leaky ReLU) to create neural networks that are
universal approximators in Sobolev spaces. We will use a standard symbol C 1 (S) (or simply C 1 ) to
denote a space of functions which are continuous, differentiable, and have a continuous derivative on
a space S [14]. All proofs are given in the Supplementary Materials (SM).
Theorem 1. Let f be a C¹ function on a compact set. Then, for every positive ε there exists a single hidden layer neural network with a ReLU (or a leaky ReLU) activation which approximates f in the Sobolev space S₁ up to ε error.
This suggests that the Sobolev Training objective is achievable, and that we can seek to encode the
values and derivatives of the target function in the values and derivatives of a ReLU neural network
model. Interestingly, we can show that if we seek to encode an arbitrary function in the derivatives of
the model then this is impossible not only for neural networks but also for any arbitrary differentiable
predictor on compact sets.
Theorem 2. Let f be a C¹ function. Let g be a continuous function satisfying ‖g − ∂f/∂x‖_∞ > 0. Then, there exists an ε > 0 such that for any C¹ function h, either ‖f − h‖_∞ ≥ ε or ‖g − ∂h/∂x‖_∞ ≥ ε.
However, when we move to the regime of finite training data, we can encode any arbitrary function in
the derivatives (as well as higher order signals if the resulting Sobolev spaces are not degenerate), as
shown in the following Proposition.
Proposition 1. Given any two functions f : S → ℝ and g : S → ℝᵈ on S ⊂ ℝᵈ and a finite set Ω ⊂ S, there exists a neural network h with a ReLU (or a leaky ReLU) activation such that ∀x ∈ Ω : f(x) = h(x) and g(x) = ∂h/∂x(x) (it has 0 training loss).
Having shown that it is possible to train neural networks to encode both the values and derivatives of
a target function, we now formalise one possible way of showing that Sobolev Training has lower
sample complexity than regular training.
Let F denote the family of functions parametrised by θ. We define Kreg = Kreg(F) to be a measure of the amount of data needed to learn some target function f. That is, Kreg is the smallest number for which the following holds: for every f̂ ∈ F and every set of distinct Kreg points (x₁, ..., x_{Kreg}), if f(x_i) = f̂(x_i) for all i = 1, ..., Kreg, then f = f̂. Ksob is defined analogously, but the final implication is of the form: if f(x_i) = f̂(x_i) and ∂f/∂x(x_i) = ∂f̂/∂x(x_i) for all i, then f = f̂. Straight from the definition there follows:
Proposition 2. For any F, there holds Ksob(F) ≤ Kreg(F).
For many families, the above inequality becomes sharp. For example, to determine the coefficients of a polynomial of degree n one needs to compute its values at at least n + 1 distinct points. If we know both values and derivatives at each point, it is a well-known fact that only ⌈(n+1)/2⌉ points suffice to determine all the coefficients. We present two more examples in a slightly more formal way. Let F_G denote the family of Gaussian PDFs (parametrised by μ, σ). Let D = D₁ × ... × Dₙ ⊂ ℝᵈ, and let F_PL be the family of functions from D₁ × ... × Dₙ (a Cartesian product of sets D_i) to ℝⁿ of the form f(x) = [A₁x₁ + b₁, ..., Aₙxₙ + bₙ] (element-wise linear; Figure 2, left).
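The polynomial claim above is easy to verify directly: the small Hermite-style linear system below recovers all four coefficients of a cubic from just two points once derivative information is included, whereas two value-only observations would leave the system underdetermined. The specific coefficients and interpolation nodes are arbitrary.

```python
import numpy as np

# Sample-complexity sketch: a degree-3 polynomial has 4 unknown coefficients.
# Two points with values *and* derivatives give 4 constraints and pin it down,
# matching the ceil((n+1)/2) count from the text.
true = np.array([1.0, -2.0, 0.5, 3.0])        # c0 + c1 x + c2 x^2 + c3 x^3
p  = lambda c, x: c[0] + c[1]*x + c[2]*x**2 + c[3]*x**3
dp = lambda c, x: c[1] + 2*c[2]*x + 3*c[3]*x**2

xs = np.array([-1.0, 2.0])                    # two distinct interpolation nodes
rows, rhs = [], []
for x in xs:
    rows.append([1, x, x**2, x**3]);   rhs.append(p(true, x))    # value row
    rows.append([0, 1, 2*x, 3*x**2]);  rhs.append(dp(true, x))   # derivative row
recovered = np.linalg.solve(np.array(rows), np.array(rhs))
```

The 4x4 Hermite system is nonsingular for distinct nodes, so `recovered` equals `true` exactly (up to floating point).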
Figure 3: Styblinski-Tang function (left) and its models obtained with regular neural network training and with Sobolev Training, for 20 and 100 training samples. We also plot the vector field of the gradients of each predictor underneath the function plot.
Proposition 3. There holds Ksob (FG ) < Kreg (FG ) and Ksob (FPL ) < Kreg (FPL ).
This result relates to Deep ReLU networks as they build a hyperplanes-based model of the target
function. If those were parametrised independently one could expect a reduction of sample complexity
by d+1 times, where d is the dimension of the function domain. In practice parameters of hyperplanes
in such networks are not independent, furthermore the hinges positions change so the Proposition
cannot be directly applied, but it can be seen as an intuitive way to see why the sample complexity
drops significantly for Deep ReLU networks too.
4  Experimental Results
We consider three domains where information about derivatives is available during training².
4.1  Artificial Data
First, we consider the task of regression on a set of well known low-dimensional functions used for
benchmarking optimisation methods.
We train two hidden layer neural networks with 256 hidden units per layer with ReLU activations to
regress towards function values, and verify generalisation capabilities by evaluating the mean squared
error on a hold-out test set. Since the task is standard regression, we choose all the losses of Sobolev
Training to be L2 errors, and use a first order Sobolev method (second order derivatives of ReLU
networks with a linear output layer are constant, zero). The optimisation is therefore:
    min_θ (1/N) Σ_{i=1}^N [ ‖f(x_i) − m(x_i|θ)‖²₂ + ‖∇_x f(x_i) − ∇_x m(x_i|θ)‖²₂ ].
Figure 2 (right) shows the results for the optimisation benchmarks. As expected, Sobolev trained networks perform extremely well: for six out of seven benchmark problems they significantly reduce the testing error, with the obtained errors orders of magnitude smaller than the corresponding errors of the regularly trained networks. The stark difference in approximation error is highlighted in Figure 3, where we show the Styblinski-Tang function and its approximations with both regular and Sobolev Training. It is clear that even in very low data regimes, the Sobolev trained networks can capture the functional shape.
Looking at the results, we make two important observations. First, the effect of Sobolev Training is stronger in low-data regimes; however, it does not disappear even in the high data regime, when one has 10,000 training examples for training a two-dimensional function. Second, the only case where regular regression performed better is the regression towards Ackley's function. This particular
² All experiments were performed using TensorFlow [2] and the Sonnet neural network library [1].
Figure 4: Test results of distillation of RL agents on three Atari games, comparing regular and Sobolev distillation. Reported test action prediction error (left) is the error of the most probable action predicted between the distilled policy and target policy, and test D_KL (right) is the Kullback-Leibler divergence between policies. Numbers in the column title represent the percentage of the 100K recorded states used for training (the remaining are used for testing). In all scenarios the Sobolev distilled networks are significantly more similar to the target policy.
example was chosen to show that one possible weak point of our approach might be approximating functions with a very high frequency signal component in the relatively low data regime. Ackley's function is composed of exponents of high frequency cosine waves, creating an extremely bumpy surface; consequently, a method that tries to match the derivatives can behave badly during testing if one does not have enough data to capture this complexity. However, once we have enough training data points, Sobolev trained networks are able to approximate this function better.
4.2  Distillation
Another possible application of Sobolev Training is to perform model distillation. This technique has
many applications, such as network compression [21], ensemble merging [9], or more recently policy
distillation in reinforcement learning [20].
We focus here on the task of distilling a policy. We aim to distill a target policy π*(s), a trained neural network which outputs a probability distribution over actions, into a smaller neural network π(s|θ), such that the two policies π* and π have the same behaviour. In practice this is often done by minimising an expected divergence measure between π* and π, for example the Kullback-Leibler divergence D_KL(π(s) ‖ π*(s)), over states gathered while following π*. Since policies are multivariate functions, direct application of Sobolev Training would mean producing full Jacobian matrices with respect to s, which for large action spaces is computationally expensive. To avoid this issue we employ the stochastic approximation described in Section 2, resulting in the objective

    min_θ D_KL( π(s|θ) ‖ π*(s) ) + α E_v [ ‖ ∇_s⟨log π*(s), v⟩ − ∇_s⟨log π(s|θ), v⟩ ‖ ],

where the expectation is taken with respect to v drawn uniformly from the unit sphere, and Monte Carlo sampling is used to approximate it.
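This objective can be sketched concretely with linear-softmax policies, for which ∇_s log π(a|s) is analytic; the sizes, names, and policies below are illustrative stand-ins, not the Atari networks used in the paper.

```python
import numpy as np

# Hypothetical sketch of the distillation objective: KL(student || teacher) plus
# Monte Carlo matching of random projections of grad_s log pi.
rng = np.random.default_rng(2)
d, n_actions = 5, 3
W_teacher = rng.normal(size=(n_actions, d))
W_student = W_teacher + 0.1 * rng.normal(size=(n_actions, d))

def policy(W, s):
    z = W @ s
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_policy(W, s):
    # Row a is grad_s log pi(a|s) = W_a - sum_b pi(b|s) W_b.
    return W - policy(W, s) @ W

def sobolev_distill_loss(Wt, Ws, s, alpha=1.0, n_v=64):
    pt, ps = policy(Wt, s), policy(Ws, s)
    kl = np.sum(ps * (np.log(ps) - np.log(pt)))          # KL(student || teacher)
    G = grad_log_policy(Wt, s) - grad_log_policy(Ws, s)  # Jacobian mismatch
    V = rng.normal(size=(n_v, n_actions))
    V /= np.linalg.norm(V, axis=1, keepdims=True)        # v uniform on unit sphere
    proj = np.linalg.norm(V @ G, axis=1).mean()          # E_v over projections
    return kl + alpha * proj

s = rng.normal(size=d)
identical = sobolev_distill_loss(W_teacher, W_teacher, s)  # vanishes for equal policies
mismatch = sobolev_distill_loss(W_teacher, W_student, s)   # positive otherwise
```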
As target policies π*, we use agents playing Atari games [17] that have been trained with A3C [16] on three well known games: Pong, Breakout and Space Invaders. The agent's policy is a neural
network consisting of 3 layers of convolutions followed by two fully-connected layers, which we
distill to a smaller network with 2 convolutional layers and a single smaller fully-connected layer
(see SM for details). Distillation is treated here as a purely supervised learning problem, as our aim is
not to re-evaluate known distillation techniques, but rather to show that if the aim is to minimise a
given divergence measure, we can improve distillation using Sobolev Training. Figure 4 shows test error during training with and without Sobolev Training³. The introduction of Sobolev Training leads to similar effects as in the previous section: the network generalises much more effectively, and this
³ Testing is performed on a held-out set of episodes, thus there are no temporal or causal relations between training and testing.
[Figure residue: compute graphs (a)/(b) for the five synthetic-gradient variants (Noprop; Direct SG [12]; VFBN [25]; Critic; Sobolev), showing which parts of each module, p(h|θ) and f(h, y|φ), receive supervision from the main network's loss L̂ or its gradient ∂L̂/∂h.]

Table 1: Various techniques for producing synthetic gradients. Green shaded nodes denote nodes that get supervision from the corresponding object from the main network (gradient or loss value). We report accuracy on the test set ± standard deviation. Backpropagation results are given in parentheses.

                 Noprop         Direct SG [12]   VFBN [25]      Critic         Sobolev
CIFAR-10 with 3 synthetic gradient modules
Top 1 (94.3%)    54.5% ±1.15    79.2% ±0.01      88.5% ±2.70    93.2% ±0.02    93.5% ±0.01
ImageNet with 1 synthetic gradient module
Top 1 (75.0%)    54.0% ±0.29    -                57.9% ±2.03    71.7% ±0.23    72.0% ±0.05
Top 5 (92.3%)    77.3% ±0.06    -                81.5% ±1.20    90.5% ±0.15    90.8% ±0.01
ImageNet with 3 synthetic gradient modules
Top 1 (75.0%)    18.7% ±0.18    -                28.3% ±5.24    65.7% ±0.56    66.5% ±0.22
Top 5 (92.3%)    38.0% ±0.34    -                52.9% ±6.62    86.9% ±0.33    87.4% ±0.11
is especially true in low data regimes. Note that the performance gap on Pong is small due to the fact that the optimal policy is quite degenerate for this game⁴. In all remaining games one can see a significant performance increase from using our proposed method, as well as minor to no overfitting.
Despite looking like a regularisation effect, we stress that Sobolev Training is not trying to find the
simplest models for data or suppress the expressivity of the model. This training method aims at
matching the original function?s smoothness/complexity and so reduces overfitting by effectively
extending the information content of the training set, rather than by imposing a data-independent
prior as with regularisation.
4.3  Synthetic Gradients
The previous experiments have shown how information about the derivatives can boost approximating
function values. However, the core idea of Sobolev Training is broader than that, and can be employed
in both directions. Namely, if one ultimately cares about approximating derivatives, then additionally
approximating values can help this process too. One recent technique, which requires a model of
gradients is Synthetic Gradients (SG) [12], a method for training complex neural networks in a decoupled, asynchronous fashion. In this section we show how we can use Sobolev Training for SG.
The principle behind SG is that instead of doing full backpropagation using the chain-rule, one splits
a network into two (or more) parts, and approximates partial derivatives of the loss L with respect
to some hidden layer activations h with a trainable function SG(h, y|?). In other words, given that
network parameters up to h are denoted by Θ, we have

    ∂L/∂Θ = (∂L/∂h) (∂h/∂Θ) ≈ SG(h, y|θ) (∂h/∂Θ).
In the original SG paper, this module is trained to minimise

    L_SG(θ) = ‖ SG(h, y|θ) − ∂L(p_h, y)/∂h ‖²₂,

where p_h is the final prediction of the main network for hidden activations h. For the case of learning a classifier, in order to apply Sobolev Training in this context we construct a loss predictor, composed of a class predictor p(·|θ) followed by the log loss, which gets supervision from the true loss, and the gradient of the prediction gets supervision from the true gradient:

    m(h, y|θ) := L(p(h|θ), y),        SG(h, y|θ) := ∂m(h, y|θ)/∂h,

    L^sob_SG(θ) = ℓ( m(h, y|θ), L(p_h, y) ) + ℓ₁( ∂m(h, y|θ)/∂h, ∂L(p_h, y)/∂h ).

⁴ For the majority of the time the policy in Pong is uniform, since actions taken when the ball is far away from the player do not matter at all. Only in crucial situations does it peak so the ball hits the paddle.
In the Sobolev Training framework, the target function is the loss of the main network L(p_h, y), for which we train a model m(h, y|θ) to approximate, and in addition ensure that the model's derivatives ∂m(h, y|θ)/∂h are matched to the true derivatives ∂L(p_h, y)/∂h. The model's derivatives ∂m(h, y|θ)/∂h are used as the synthetic gradient to decouple the main network.
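A loss-critic SG module of this form can be sketched concretely: a linear-softmax class predictor plus log loss gives m(h, y), and its analytic ∂m/∂h, which serves as the synthetic gradient, can be verified against finite differences. The shapes and the predictor below are illustrative, not the paper's architecture.

```python
import numpy as np

# Hypothetical sketch of a loss-critic synthetic-gradient module: a small class
# predictor p(h|theta) followed by the log loss defines m(h, y); the synthetic
# gradient dm/dh is then a valid gradient field by construction.
rng = np.random.default_rng(3)
dim_h, n_classes = 6, 4
V = rng.normal(size=(n_classes, dim_h))    # parameters of the class predictor

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def m(h, y):                               # m(h, y) = log loss of p(h) at label y
    return -np.log(softmax(V @ h)[y])

def synthetic_gradient(h, y):              # dm/dh = V^T (p - onehot(y))
    p = softmax(V @ h)
    p[y] -= 1.0
    return V.T @ p

h, y = rng.normal(size=dim_h), 2
sg = synthetic_gradient(h, y)
# Central finite differences as an independent check of the analytic gradient.
eps = 1e-6
fd = np.array([(m(h + eps * np.eye(dim_h)[i], y) - m(h - eps * np.eye(dim_h)[i], y)) / (2 * eps)
               for i in range(dim_h)])
```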
This setting closely resembles what is known in reinforcement learning as critic methods [13]. In
particular, if we do not provide supervision on the gradient part, we end up with a loss critic. Similarly
if we do not provide supervision at the loss level, but only on the gradient component, we end up in a
method that resembles VFBN [25]. In light of these connections, our approach in this application
setting can be seen as a generalisation and unification of several existing ones (see Table 1 for
illustrations of these approaches).
One could ask why we need these additional constraints, and what is gained over using a neural
network based approximator directly [12]. The answer lies in the fact that gradient vector fields are a
tiny subset of all vector fields, and while each neural network produces a valid vector field, almost no
(standard) neural network produces valid gradient vector fields. Using non-gradient vector fields as update directions for learning can have catastrophic consequences: learning divergence, oscillations, chaotic behaviour, etc. The following proposition makes this observation more formal:
Proposition 4. If an approximator SG(h, y|θ) produces a valid gradient vector field of some scalar function L, then the approximator's Jacobian matrix must be symmetric.
It is worth noting that having a symmetric Jacobian is an extremely rare property for a neural network
model. For example, a linear model has a symmetric Jacobian if and only if its weight matrix is
symmetric. If we sample weights iid from typical distribution (like Gaussian or uniform on an
interval), the probability of sampling such a matrix is 0, but it could be easy to learn with strong,
symmetric-enforcing updates. On the other hand, for highly non-linear neural networks, it is not only
improbable to randomly find such a model, but enforcing this constraint during learning becomes
much harder too. This might be one of the reasons why linear SG modules work well in Jaderberg et
al. [12], but non-linear convolutional SG struggled to achieve state-of-the-art performance.
When using Sobolev-like approach SG always produces a valid gradient vector field by construction,
thus avoiding the problem described.
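Proposition 4 can be checked numerically: the Jacobian of a generic nonlinear map is asymmetric almost surely, while the gradient field of a scalar function has a symmetric Jacobian (its Hessian) by construction. The particular maps below are illustrative stand-ins.

```python
import numpy as np

# Sketch of Proposition 4: a generic "direct SG" style vector field has an
# asymmetric Jacobian, while a gradient field's Jacobian is a Hessian and
# therefore symmetric by construction.
rng = np.random.default_rng(4)
d = 5
W = rng.normal(size=(d, d))
h = rng.normal(size=d)
t = np.tanh(W @ h)

# Generic vector field g(h) = tanh(W h): Jacobian = diag(1 - t^2) W, asymmetric.
J_direct = (1 - t**2)[:, None] * W
asym_direct = np.linalg.norm(J_direct - J_direct.T)

# Gradient field of the scalar L(h) = sum(log cosh(W h)): grad = W^T tanh(W h),
# whose Jacobian is the Hessian W^T diag(1 - t^2) W, symmetric by construction.
J_grad = W.T @ ((1 - t**2)[:, None] * W)
```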
We perform experiments on decoupling deep convolutional neural network image classifiers using
synthetic gradients produced by loss critics that are trained with Sobolev Training, and compare to
regular loss critic training, and regular synthetic gradient training. We report results on CIFAR-10 for
three network splits (and therefore three synthetic gradient modules) and on ImageNet with one and three network splits⁵.
The results are shown in Table 1. With a naive SG model, we obtain 79.2% test accuracy on CIFAR-10.
Using an SG architecture which resembles a small version of the rest of the model makes learning
much easier and led to 88.5% accuracy, while Sobolev Training achieves 93.5% final performance.
The regular critic also trains well, achieving 93.2%, as the critic forces the lower part of the network
to provide a representation which it can use to reduce the classification (and not just prediction) error.
Consequently it provides a learning signal which is well aligned with the main optimisation. However,
this can lead to building representations which are suboptimal for the rest of the network. Adding
additional gradient supervision by constructing our Sobolev SG module avoids this issue by making
sure that synthetic gradients are truly aligned and gives an additional boost to the final accuracy.
For ImageNet [3] experiments based on ResNet50 [8], we obtain qualitatively similar results. Due
to the complexity of the model and an almost 40% gap between no backpropagation and full
backpropagation results, the difference between methods with vs without loss supervision grows
significantly. This suggests that at least for ResNet-like architectures, loss supervision is a crucial
⁵ N.b. the experiments presented use learning rates, annealing schedule, etc. optimised to maximise the backpropagation baseline, rather than the synthetic gradient decoupled result (details in the SM).
component of a SG module. After splitting ResNet50 into four parts the Sobolev SG achieves 87.4%
top 5 accuracy, while the regular critic SG achieves 86.9%, confirming our claim about suboptimal
representation being enforced by gradients from a regular critic. Sobolev Training results were also
much more reliable in all experiments (significantly smaller standard deviation of the results).
5  Discussion and Conclusion
In this paper we have introduced Sobolev Training for neural networks, a simple and effective way of incorporating knowledge about derivatives of a target function into the training of a neural network function approximator. We provided theoretical justification that encoding both a target function's
value as well as its derivatives within a ReLU neural network is possible, and that this results in
more data efficient learning. Additionally, we show that our proposal can be efficiently trained using
stochastic approximations if computationally expensive Jacobians or Hessians are encountered.
In addition to toy experiments which validate our theoretical claims, we performed experiments to
highlight two very promising areas of application for such models: one being distillation/compression of models; the other being the application to various meta-optimisation techniques that build models of other models' dynamics (such as synthetic gradients, learning-to-learn, etc.). In both cases we obtain
significant improvement over classical techniques, and we believe there are many other application
domains in which our proposal should give a solid performance boost.
In this work we focused on encoding true derivatives in the corresponding ones of the neural network.
Another possibility for future work is to encode information which one believes to be highly correlated
with derivatives. For example curvature [18] is believed to be connected to uncertainty. Therefore,
given a problem with known uncertainty at training points, one could use Sobolev Training to match
the second order signal to the provided uncertainty signal. Finite differences can also be used to
approximate gradients for black box target functions, which could help when, for example, learning a
generative temporal model. Another unexplored path would be to apply Sobolev Training to internal
derivatives rather than just derivatives with respect to the inputs.
References
[1] Sonnet. https://github.com/deepmind/sonnet. 2017.
[2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248-255. IEEE, 2009.
[4] Michael Fairbank and Eduardo Alonso. Value-gradient learning. In Neural Networks (IJCNN), The 2012 International Joint Conference on, pages 1-8. IEEE, 2012.
[5] Michael Fairbank, Eduardo Alonso, and Danil Prokhorov. Simple and fast calculation of the second-order gradients for globalized dual heuristic dynamic programming in neural networks. IEEE Transactions on Neural Networks and Learning Systems, 23(10):1671-1676, 2012.
[6] A Ronald Gallant and Halbert White. On learning the derivatives of an unknown mapping with multilayer feedforward networks. Neural Networks, 5(1):129-138, 1992.
[7] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[9] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[10] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.
[11] Aapo Hyvärinen. Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, pages 695-709, 2005.
[12] Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016.
[13] Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In NIPS, volume 13, pages 1008-1014, 1999.
[14] Steven G Krantz. Handbook of Complex Variables. Springer Science & Business Media, 2012.
[15] W Thomas Miller, Paul J Werbos, and Richard S Sutton. Neural Networks for Control. MIT Press, 1995.
[16] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.
[17] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[18] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
[19] Salah Rifai, Grégoire Mesnil, Pascal Vincent, Xavier Muller, Yoshua Bengio, Yann Dauphin, and Xavier Glorot. Higher order contractive auto-encoder. Machine Learning and Knowledge Discovery in Databases, pages 645-660, 2011.
[20] Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
[21] Bharat Bhusan Sau and Vineeth N Balasubramanian. Deep model compression: Distilling knowledge from noisy teachers. arXiv preprint arXiv:1610.09650, 2016.
[22] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
[23] Patrice Simard, Bernard Victorri, Yann LeCun, and John S Denker. Tangent prop: a formalism for specifying selected invariances in an adaptive network. In NIPS, volume 91, pages 895-903, 1991.
[24] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[25] Takeru Miyato, Daisuke Okanohara, Shin-ichi Maeda, and Masanori Koyama. Synthetic gradient methods with virtual forward-backward networks. ICLR workshop proceedings, 2017.
[26] Yuval Tassa and Tom Erez. Least squares solutions of the HJB equation with neural network value-function approximators. IEEE Transactions on Neural Networks, 18(4):1031-1041, 2007.
[27] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR abs/1609.03499, 2016.
[28] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661-1674, 2011.
[29] Paul J Werbos. Approximate dynamic programming for real-time control and neural modeling. Handbook of Intelligent Control, 1992.
[30] Anqi Wu, Mikio C Aoi, and Jonathan W Pillow. Exploiting gradients and hessians in bayesian optimization and bayesian quadrature. arXiv preprint arXiv:1704.00060, 2017.
[31] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928, 2016.
kalchbrenner:1 quite:1 heuristic:1 supplementary:2 posed:1 kai:1 cvpr:1 okanohara:1 encoder:1 simonyan:2 gp:1 highlighted:1 itself:2 noisy:1 final:4 patrice:1 advantage:1 differentiable:2 propose:2 product:1 coming:1 aligned:2 degenerate:2 achieve:2 intuitive:1 breakout:1 validate:1 exploiting:1 extending:2 produce:6 silver:3 object:2 help:2 resnet:1 tim:1 andrew:2 minor:1 paying:1 strong:1 predicted:1 indicate:2 distilling:4 direction:2 emulation:1 closely:1 stochastic:7 human:1 material:2 virtual:1 dnns:1 behaviour:2 generalization:1 d_:3 really:1 proposition:7 probable:1 hold:4 ground:9 mapping:1 dieleman:1 claim:2 desjardins:1 achieves:3 continuum:1 smallest:1 estimation:2 title:1 create:1 tool:1 mit:1 gaussian:3 always:1 aim:5 rather:4 avoid:3 shelf:1 rusu:1 broader:1 encode:8 focus:1 improvement:1 consistently:1 masanori:1 hk:1 underneath:1 baseline:1 typically:1 hidden:5 relation:2 issue:2 classification:1 dual:1 pascal:2 denoted:1 exponent:1 dauphin:1 art:1 field:8 construct:1 distilled:2 once:1 having:2 beach:1 sampling:3 optimising:2 represents:1 koray:5 look:1 future:1 report:3 mirza:1 yoshua:2 richard:2 employ:1 intelligent:1 modern:4 randomly:1 composed:2 divergence:5 replaced:1 antecedent:1 consisting:2 jeffrey:1 n1:1 william:1 harley:1 ab:1 attempt:1 highly:2 possibility:1 mnih:3 introduces:1 truly:1 kirkpatrick:1 light:1 behind:1 parametrised:3 held:1 razp:1 chain:1 implication:2 daisuke:1 andy:1 nowadays:1 encourage:1 capable:1 partial:1 unification:1 improbable:1 arthur:1 decoupled:3 tree:1 puigdomenech:1 ancient:1 logarithm:1 desired:1 a3c:1 re:1 causal:1 halbert:1 theoretical:3 formalise:1 instance:3 column:1 modeling:2 formalism:1 measuring:1 kreg:8 cost:1 introducing:1 distill:2 deviation:2 subset:1 rare:1 predictor:6 uniform:3 aoi:1 successful:1 osindero:3 too:4 paddle:1 gr:1 reported:1 answer:1 teacher:1 synthetic:23 st:2 peak:1 international:2 oord:1 off:1 receiving:1 dong:1 michael:2 synthesis:1 analogously:1 ashish:1 squared:1 recorded:1 
bumpy:1 zen:1 choose:1 huang:1 creating:1 derivative:57 danil:1 wojciech:2 stark:1 jacobians:1 toy:1 li:4 potential:2 volodymyr:3 simard:1 coding:1 ioannis:2 coefficient:2 matter:1 vi:1 piece:1 performed:4 try:2 dally:1 doing:1 wave:1 hf:2 capability:3 simon:2 jia:2 contribution:1 square:1 yxy:2 greg:1 accuracy:6 variance:1 convolutional:5 efficiently:2 ensemble:1 gathered:1 identify:1 miller:1 weak:1 raw:1 bayesian:3 vincent:2 kavukcuoglu:5 produced:1 iid:1 craig:1 carlo:1 ren:1 worth:1 straight:1 definition:1 frequency:2 regress:1 obvious:1 james:1 proof:1 di:1 sampled:3 dataset:2 proved:1 ask:1 knowledge:6 dimensionality:1 schedule:1 higher:4 supervised:1 tom:1 zisserman:1 improved:1 wei:1 huizi:1 done:1 though:1 box:2 furthermore:1 parameterised:2 just:3 autoencoders:1 hand:2 mehdi:1 google:1 quality:1 grows:1 believe:1 building:1 usa:1 lillicrap:1 normalized:1 true:7 dxk:1 verify:1 k22:2 effect:3 xavier:2 symmetric:5 leibler:2 dx2:2 white:1 komodakis:1 game:6 during:6 please:1 davis:1 cosine:1 trying:1 pdf:1 stress:1 l1:3 interface:1 image:5 wise:2 recently:2 common:2 sigmoid:3 functional:2 physical:1 rl:3 volume:2 tassa:1 he:1 approximates:2 salah:1 distillation:15 significant:2 imposing:1 smoothness:1 rd:3 similarly:1 erez:1 access:5 han:1 supervision:10 surface:1 actor:1 etc:3 badia:1 sergio:1 curvature:1 multivariate:3 own:1 showed:1 recent:1 scenario:2 inequality:1 meta:1 arbitrarily:3 approximators:5 exploited:1 muller:1 seen:2 additional:8 care:2 nikos:1 somewhat:1 george:1 employed:1 deng:1 determine:2 paradigm:1 xiangyu:1 corrado:1 signal:7 relates:1 full:5 reduces:1 takeru:1 generalises:1 match:7 calculation:1 minimising:2 long:1 sphere:3 cifar:3 believed:1 dkl:4 a1:1 parenthesis:1 raia:1 prediction:6 regression:6 aapo:1 heterogeneous:1 optimisation:10 metric:1 expectation:1 vision:2 arxiv:22 multilayer:2 sometimes:1 sergey:1 agarwal:1 receive:1 addition:4 proposal:3 huffman:1 interval:1 annealing:1 victorri:1 jian:1 crucial:2 rest:2 zagoruyko:1 
sure:1 regularly:1 leveraging:1 dxj:4 noting:1 feedforward:2 split:3 enough:2 easy:1 bengio:2 variety:1 sander:1 fit:1 relu:14 architecture:3 suboptimal:2 reduce:5 idea:2 barham:1 rifai:1 vik:1 minimise:3 six:1 veda:1 penalty:1 song:1 karen:2 hessian:4 cause:1 shaoqing:1 action:6 deep:16 ignored:1 generally:1 useful:1 clear:1 amount:1 backpropagated:1 ph:4 simplest:1 struggled:1 http:1 percentage:1 estimated:1 per:1 threefold:1 ichi:1 four:1 demonstrating:1 hdx:2 achieving:1 neither:1 nal:1 backward:1 v1:5 graph:1 enforced:1 powerful:1 uncertainty:3 family:4 almost:2 wu:2 yann:2 sobolev:56 oscillation:1 lanctot:1 layer:9 followed:2 gomez:1 encountered:1 badly:1 ijcnn:1 constraint:2 fei:2 alex:4 hy:7 min:2 extremely:3 relatively:1 martin:1 according:1 ball:2 beneficial:1 slightly:1 smaller:5 mastering:1 making:2 s1:1 den:2 heart:1 pipeline:1 computationally:2 taken:2 equation:1 needed:2 know:2 antonoglou:2 end:3 gulcehre:1 available:1 brevdo:1 panneershelvam:1 apply:2 denker:1 hierarchical:1 v2:3 away:1 original:2 thomas:1 top:7 remaining:2 ensure:1 miyato:1 hinge:1 pushing:1 konda:1 exploit:1 especially:2 build:2 approximating:5 classical:3 disappear:1 objective:4 move:1 question:1 quantity:1 strategy:1 gradient:47 iclr:1 distance:1 majority:1 koyama:1 alonso:2 chris:1 seven:1 maddison:1 fy:1 reason:1 enforcing:2 illustration:1 julian:1 schrittwieser:1 hlog:2 relate:1 suppress:1 policy:17 unknown:3 diamond:1 perform:3 gallant:1 observation:3 convolution:1 datasets:1 sm:3 benchmark:3 finite:4 daan:1 caglar:1 behave:1 situation:2 hinton:1 incorporated:3 looking:2 rn:1 varied:1 arbitrary:3 sharp:1 grzegorz:1 introduced:1 david:3 pair:4 namely:1 connection:3 imagenet:6 learned:1 tensorflow:2 expressivity:1 boost:3 swirszcz:2 nip:3 heiga:1 able:4 usually:2 dynamical:1 beating:1 ev:1 pattern:2 regime:8 maeda:1 max:2 green:2 reliable:1 belief:1 power:1 fairbank:2 natural:2 treated:1 force:1 business:1 residual:1 improve:3 github:1 library:2 hm:2 naive:1 fpl:3 auto:1 
resnet50:2 prior:1 sg:38 l2:4 eugene:1 kf:2 discovery:1 tangent:1 regularisation:2 graf:4 loss:19 expect:1 fully:2 highlight:1 generation:1 proven:1 approximator:4 geoffrey:1 agent:5 degree:1 consistent:2 principle:1 playing:3 critic:12 tiny:1 asynchronous:2 drastically:1 formal:2 tsitsiklis:1 senior:1 generalise:1 wide:1 leaky:3 fg:3 distributed:1 van:2 dimension:1 xn:1 evaluating:1 valid:4 avoids:1 computes:1 pillow:1 forward:1 qualitatively:1 reinforcement:5 adaptive:1 sifre:1 employing:1 far:1 styblinski:2 transaction:2 approximate:10 observable:1 compact:2 jaderberg:4 kullback:1 pruning:1 dealing:2 overfitting:2 handbook:2 b1:1 tuples:1 xi:29 training3:1 alternatively:1 continuous:6 search:1 why:3 table:3 additionally:3 promising:1 learn:3 nature:1 transfer:1 ca:1 decoupling:1 hornik:4 improving:1 mse:1 complex:3 constructing:1 domain:8 vj:1 marc:1 main:6 motivation:1 paul:3 n2:1 quadrature:1 x1:2 enriched:1 crafted:1 mikio:1 benchmarking:1 fashion:1 andrei:1 position:1 mao:1 lie:1 jacobian:8 yh:1 zhifeng:1 tang:2 theorem:4 specific:1 showing:5 dx1:1 symbol:1 explored:1 glorot:1 evidence:2 workshop:1 concern:1 incorporating:3 exists:3 socher:1 adding:2 merging:1 effectively:2 gained:1 quantization:1 magnitude:1 corr:1 illustrates:1 cartesian:1 gap:2 easier:1 chen:1 suited:1 vijay:1 led:1 timothy:1 simply:2 explore:1 vinyals:3 kaiming:1 scalar:1 driessche:1 springer:1 truth:9 mart:1 bhusan:1 prop:1 adria:1 consequently:2 towards:2 jeff:1 feasible:1 change:1 content:1 generalisation:4 determined:1 typical:2 specifically:1 uniformly:2 yuval:1 decouple:1 denoising:1 bernard:1 invariance:2 experimental:1 catastrophic:1 player:1 citro:1 colmenarejo:1 formally:1 guillaume:1 internal:1 jonathan:1 oriol:3 goire:1 evaluate:1 audio:2 trainable:1 avoiding:1 correlated:1 |
Multi-Information Source Optimization
Matthias Poloczek
Department of Systems and Industrial Engineering
University of Arizona
Tucson, AZ 85721
[email protected]
Jialei Wang
Chief Analytics Office
IBM
Armonk, NY 10504
[email protected]
Peter I. Frazier
School of Operations Research and Information Engineering
Cornell University
Ithaca, NY 14853
[email protected]
Abstract
We consider Bayesian methods for multi-information source optimization (MISO),
in which we seek to optimize an expensive-to-evaluate black-box objective function
while also accessing cheaper but biased and noisy approximations ("information sources"). We present a novel algorithm that outperforms the state of the art for this
problem by using a Gaussian process covariance kernel better suited to MISO than
those used by previous approaches, and an acquisition function based on a one-step
optimality analysis supported by efficient parallelization. We also provide a novel
technique to guarantee the asymptotic quality of the solution provided by this
algorithm. Experimental evaluations demonstrate that this algorithm consistently
finds designs of higher value at less cost than previous approaches.
1 Introduction
We consider Bayesian multi-information source optimization (MISO), in which we optimize an
expensive-to-evaluate black-box objective function while optionally accessing cheaper biased noisy approximations, often referred to as "information sources" (IS). This arises when tuning hyperparameters of machine learning algorithms: one may evaluate hyperparameters on a smaller related dataset
or subsets of the validation set [34, 15, 17]. We also face this problem in robotics: we can evaluate
a parameterized robot control policy in simulation, in a laboratory, or in a field test [15]. Cheap
approximations promise a route to tractability, but bias and noise complicate their use. An unknown
bias arises whenever a computational model incompletely models a real-world phenomenon, and is
pervasive in applications.
We present a novel algorithm for this problem, misoKG, that is tolerant to both noise and bias and
improves substantially over the state of the art. Specifically, our contributions are:
• The algorithm uses a novel acquisition function that maximizes the incremental gain per unit cost. This acquisition function generalizes and parallelizes previously proposed knowledge-gradient methods for single-IS Bayesian optimization [7, 8, 28, 26, 37] to MISO.
• We prove that this algorithm provides an asymptotically near-optimal solution. If the search domain is finite, this result establishes the consistency of misoKG. We present a novel proof technique that yields an elegant, short argument and is thus of independent interest.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Related Work: To our knowledge, MISO was first considered by Swersky, Snoek, and Adams
[34], under the name multi-task Bayesian optimization. This name was used to suggest problems
in which the auxiliary tasks could meaningfully be solved on their own, while we use the term MISO
to indicate that the IS may be useful only in support of the primary task. Swersky et al. [34] showed
that hyperparameter tuning in classification can be accelerated through evaluation on subsets of the
validation data. They proposed a GP model to jointly model such "auxiliary tasks" and the primary
task, building on previous work on GP regression for multiple tasks in [3, 10, 35]. They choose
points to sample via cost-sensitive entropy search [11, 39], sampling in each iteration a point that
maximally reduces uncertainty in the optimum's location, normalized by the query cost.
We demonstrate in experiments that our approach improves over the method of Swersky et al. [34],
and we believe this improvement results from two factors: first, our statistical model is more flexible
in its ability to model bias that varies across the domain; second, our acquisition function directly
and maximally reduces simple regret in one step, unlike predictive entropy search which maximally
reduces the maximizer's entropy in one step and hence only indirectly reduces regret.
Lam, Allaire, and Willcox [18] also consider MISO, under the name non-hierarchical multi-fidelity
optimization. They propose a statistical model that maintains a separate GP for each IS, and fuse
them via the method of Winkler [40]. They apply a modified expected improvement acquisition
function on these surrogates to first decide what design x* to evaluate and then select the IS to
query; the latter is decided by a heuristic that aims to balance information gain and query cost. We
demonstrate in experiments that our approach improves over the method of Lam et al. [18], and
we believe this improvement results from two factors: first, their statistical approach assumes an
independent prior on each IS, despite their being linked through modeling a common objective; and
second their acquisition function selects the point to sample and the IS to query separately via a
heuristic rather than jointly using an optimality analysis.
Beyond these two works, the most closely related work is in the related problem of multi-fidelity
optimization. In this problem, IS are supposed to form a strict hierarchy [16, 14, 6, 24, 20, 19, 15].
In addition, most of these models limit the information that can be obtained from sources of lower
fidelity [16, 14, 6, 20, 19]: Given the observation of x at some IS ℓ, one cannot learn more about the value of x at an IS with higher fidelity by querying IS ℓ anywhere else (see Sect. C for details and a
proof). Picheny et al. [24] propose a quantile-based criterion for optimization of stochastic simulators,
supposing that all simulators provide unbiased approximations of the true objective. From this body
of work, we compare against Kandasamy et al. [15], who present an approach for minimizing both
simple and cumulative regret, under the assumption that the maximum bias of an information source
strictly decreases with its fidelity.
Outside of both the MISO and multi-fidelity settings, Klein et al. [17] considered hyperparameter
optimization of machine learning algorithms over a large dataset D. Supposing access to subsets
of D of arbitrary sizes, they show how to exploit regularity of performance across dataset sizes to
significantly speed up the optimization process for support vector machines and neural networks.
Our acquisition function is a generalization of the knowledge-gradient policy of Frazier, Powell, and
Dayanik [8] to the MISO setting. This generalization requires extending the one-step optimality
analysis used to derive the KG policy in the single-IS setting to account for the impact of sampling
a cheap approximation on the marginal GP posterior on the primary task. From this literature, we
leverage an idea for computing the expectation of the maximum of a collection of linear functions
of a normal random variable, and propose a parallel algorithm to identify and compute the required
components.
The class of GP covariance kernels we propose is a subset of the class of linear models of coregionalization kernels [10, 2], with a restricted form derived from a generative model particular to MISO.
Focusing on a restricted class of kernels designed for our application supports accurate inference with
less data, which is important when optimizing expensive-to-evaluate functions.
Our work also extends the knowledge-gradient acquisition function to the variable cost setting.
Similar extensions of expected improvement to the variable cost setting can be found in Snoek et al.
[32] (the EI per second criterion) and in Le Gratiet and Cannamela [19].
We now formalize the problem we consider in Sect. 2, describe our statistical analysis in Sect. 3.1,
specify our acquisition function and parallel computation method in Sects. 3.2 and 3.3, provide a
theoretical guarantee in Sect. 3.4, present numerical experiments in Sect. 4, and conclude in Sect. 5.
2 Problem Formulation
Given a continuous objective function g : D → R on a compact set D ⊆ R^d of feasible designs, our goal is to find a design with objective value close to max_{x∈D} g(x). We have access to M possibly biased and/or noisy IS indexed by ℓ ∈ [M]_0. (Here, for any a ∈ Z_+ we use [a] as a shorthand for the set {1, 2, …, a}, and further define [a]_0 = {0, 1, 2, …, a}.) Observing IS ℓ at design x provides independent, conditional on f(ℓ, x), and normally distributed observations with mean f(ℓ, x) and finite variance λ_ℓ(x). In [34], the IS ℓ are called "auxiliary tasks" and g the primary task. These sources are thought of as approximating g, with variable bias. We suppose that g can be observed directly without bias (but possibly with noise) and set f(0, x) = g(x). The bias f(ℓ, x) − g(x) is also referred to as "model discrepancy" in the engineering and simulation literature [1, 4]. Each IS ℓ is also associated with a query cost function c_ℓ(x) : D → R_+. We assume that the cost function c_ℓ(x) and the variance function λ_ℓ(x) are both known and continuously differentiable (over D). In practice, these functions may either be provided by domain experts or may be estimated along with other model parameters from data (see Sect. 4 and B.2, and [27]).
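To make the setup concrete, the sketch below shows how one might represent such information sources in code. The class and field names are our own illustration, not the paper's implementation; the toy objective and the specific noise/bias terms are assumptions chosen for exposition.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import math, random

@dataclass
class InformationSource:
    """One IS: possibly biased, noisy observations of the objective g."""
    query: Callable[[Sequence[float]], float]     # draws one observation of f(l, x)
    variance: Callable[[Sequence[float]], float]  # known variance lambda_l(x)
    cost: Callable[[Sequence[float]], float]      # known query cost c_l(x)

def g(x):
    """Toy objective, maximized at the origin."""
    return -sum(xi ** 2 for xi in x)

sources = [
    # IS 0: the truth, observed with noise, expensive to query
    InformationSource(query=lambda x: g(x) + random.gauss(0.0, 1.0),
                      variance=lambda x: 1.0,
                      cost=lambda x: 1000.0),
    # IS 1: cheap and noise-free, but biased
    InformationSource(query=lambda x: g(x) + 0.5 * math.sin(10.0 * x[0]),
                      variance=lambda x: 0.0,
                      cost=lambda x: 1.0),
]
```

The 1000:1 cost ratio mirrors the Rosenbrock benchmark in Sect. 4.1; any MISO method must decide when the cheap, biased source is worth querying instead of the truth.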
3 The misoKG Algorithm
We now present the misoKG algorithm and describe its two components: a MISO-focused statistical
model in Sect. 3.1; and its acquisition function and parallel computation in Sect. 3.2. Sect. 3.3
summarizes the algorithm and Sect. 3.4 provides a theoretical performance guarantee. Extensions of
the algorithm are discussed in Sect. D.
3.1 Statistical Model
We now describe a generative model for f that results in a Gaussian process prior on f with a parameterized class of mean functions µ : [M]_0 × D → R and covariance kernels Σ : ([M]_0 × D)^2 → R. This allows us to use standard tools for Gaussian process inference (first finding the MLE estimate of the parameters indexing this class, and then performing Gaussian process regression using the selected mean function and covariance kernel) while also providing better estimates for MISO than would a generic multi-output GP regression kernel that does not consider the MISO application.

We construct our generative model as follows. For each ℓ > 0, suppose that a function δ_ℓ : D → R was drawn from a separate independent GP, δ_ℓ ∼ GP(µ_ℓ, Σ_ℓ), and let δ_0 be identically 0. In our generative model δ_ℓ will be the bias f(ℓ, x) − g(x) for IS ℓ. We additionally set µ_ℓ(x) = 0 to encode a lack of a strong belief on the direction of the bias. (If one had a strong belief that an IS is consistently biased in one direction, one may instead set µ_ℓ to a constant estimated using maximum a posteriori estimation.) Next, within our generative model, we suppose that g : D → R was drawn from its own independent GP, g ∼ GP(µ_0, Σ_0), for some given µ_0 and Σ_0, and suppose f(ℓ, x) = f(0, x) + δ_ℓ(x) for each ℓ. We assume that µ_0 and Σ_ℓ with ℓ ≥ 0 belong to one of the standard parameterized classes of mean functions and covariance kernels, e.g., constant µ_0 and Matérn Σ_ℓ.

With this construction, f is a GP: given any finite collection of points ℓ_i ∈ [M]_0, x_i ∈ D with i = 1, …, I, the vector (f(ℓ_i, x_i) : i = 1, …, I) is a sum of independent multivariate normal random vectors, and thus is itself multivariate normal. Moreover, we compute the mean function and covariance kernel of f: for ℓ, m ∈ [M]_0 and x, x' ∈ D,

µ(ℓ, x) = E[f(ℓ, x)] = E[g(x)] + E[δ_ℓ(x)] = µ_0(x),
Σ((ℓ, x), (m, x')) = Cov(g(x) + δ_ℓ(x), g(x') + δ_m(x')) = Σ_0(x, x') + 1_{ℓ,m} · Σ_ℓ(x, x'),

where 1_{ℓ,m} denotes Kronecker's delta, and where we have used the independence of δ_ℓ, δ_m, and g. We refer the reader to https://github.com/misoKG/ for an illustration of the model.

This generative model draws model discrepancies δ_ℓ independently across IS. This is appropriate when IS are different in kind and share no relationship except that they model a common objective. In the supplement (Sect. B) we generalize this generative model to model correlation between model discrepancies, which is appropriate when IS can be partitioned into groups, such that IS within one group tend to agree more amongst themselves than they do with IS in other groups. Sect. B also discusses the estimation of the hyperparameters in Σ_0 and Σ_ℓ.
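As an illustration of this compound kernel, the sketch below implements Σ((ℓ,x),(m,x')) = Σ_0(x,x') + 1_{ℓ,m} · Σ_ℓ(x,x'), with squared-exponential kernels standing in for Σ_0 and the bias kernels Σ_ℓ. All names and hyperparameter values are ours, chosen for exposition; the paper's actual choices (e.g., Matérn) may differ.

```python
import math

def sq_exp(x, y, length_scale=1.0, signal_var=1.0):
    """Squared-exponential base kernel on R^d."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return signal_var * math.exp(-0.5 * d2 / length_scale ** 2)

def miso_kernel(l, x, m, y, bias_kernels):
    """Sigma((l,x),(m,y)) = Sigma_0(x,y) + 1{l=m} * Sigma_l(x,y).

    bias_kernels[l] is the kernel of the bias GP delta_l; entry 0 is unused
    because delta_0 is identically zero.
    """
    k = sq_exp(x, y)              # Sigma_0: covariance of the objective g
    if l == m and l > 0:          # Kronecker-delta term for a shared IS
        k += bias_kernels[l](x, y)
    return k

# One auxiliary IS whose bias varies on a shorter length scale than g.
bias = [None, lambda x, y: sq_exp(x, y, length_scale=0.3, signal_var=0.5)]
k00 = miso_kernel(0, [0.0], 0, [0.0], bias)  # truth vs truth: Sigma_0 only
k11 = miso_kernel(1, [0.0], 1, [0.0], bias)  # IS 1 adds its bias variance
k01 = miso_kernel(0, [0.0], 1, [0.0], bias)  # cross-IS: no bias term
```

Note how observations of IS 1 remain informative about g everywhere (through the shared Σ_0 term), while the extra Σ_1 term lets the model attribute part of what it sees at IS 1 to bias.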
3.2 Acquisition Function
Our optimization algorithm proceeds in rounds, selecting a design x ∈ D and an information source ℓ ∈ [M]_0 in each. The value of the information obtained by sampling IS ℓ at x is the expected gain in the quality of the best design that can be selected using the available information. That is, this value is the difference in the expected quality of the estimated optimum before and after the sample. We then normalize this expected gain by the cost c_ℓ(x) associated with the respective query, and sample the IS and design with the largest normalized gain. Without normalization we would always query the true objective, since no other IS provides more information about g than g itself.
We formalize this idea. Suppose that we have already sampled n points X_n and made the observations Y_n. Denote by E_n the expected value according to the posterior distribution given X_n, Y_n, and let µ^(n)(ℓ, x) := E_n[f(ℓ, x)]. The best expected objective value across the designs, as estimated by our statistical model, is max_{x'∈D} µ^(n)(0, x'). Similarly, if we take an additional sample of IS ℓ^(n+1) at design x^(n+1) and compute our new posterior mean, the new best expected objective value across the designs is max_{x'∈D} µ^(n+1)(0, x'), whose distribution depends on what IS we sample, and where we sample it. Thus, the expected value of sampling at (ℓ, x), normalized by the cost, is

MKG^n(ℓ, x) = E_n[ (max_{x'∈D} µ^(n+1)(0, x') − max_{x'∈D} µ^(n)(0, x')) / c_ℓ(x) | ℓ^(n+1) = ℓ, x^(n+1) = x ],   (1)

which we refer to as the misoKG factor of the pair (ℓ, x). The misoKG policy then samples at the pair (ℓ, x) that maximizes MKG^n(ℓ, x), i.e., (ℓ^(n+1), x^(n+1)) ∈ argmax_{ℓ∈[M]_0, x∈D} MKG^n(ℓ, x), which is a nested optimization problem.

To make this nested optimization problem tractable, we first replace the search domain D in Eq. (1) by a discrete set A ⊂ D of points, for example selected by a Latin Hypercube design. We may then compute MKG^n(ℓ, x) exactly. Toward that end, note that

E_n[ max_{x'∈A} µ^(n+1)(0, x') | ℓ^(n+1) = ℓ, x^(n+1) = x ]
  = E_n[ max_{x'∈A} { µ^(n)(0, x') + σ~^n_{x'}(ℓ, x) · Z } | ℓ^(n+1) = ℓ, x^(n+1) = x ],   (2)

where Z ∼ N(0, 1) and σ~^n_{x'}(ℓ, x) = Σ^n((0, x'), (ℓ, x)) / [λ_ℓ(x) + Σ^n((ℓ, x), (ℓ, x))]^(1/2). Here Σ^n is the posterior covariance matrix of f given X_n, Y_n.
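For intuition, the misoKG factor of Eq. (1) can be approximated by Monte Carlo using the representation in Eq. (2); the sketch below is our own illustration (the paper computes this expectation exactly, as described next, rather than by sampling).

```python
import random, statistics

def miso_kg_factor(mu_n, sigma_tilde, cost, n_mc=100_000, seed=0):
    """Monte Carlo estimate of the misoKG factor in Eq. (1).

    By Eq. (2), after one sample at the candidate pair (l, x), the posterior
    means over the discretization A move jointly as mu_n[i] + sigma_tilde[i]*Z
    for a single standard normal Z.

    mu_n[i]        : current posterior mean mu^(n)(0, x_i) for x_i in A
    sigma_tilde[i] : sigma~^n_{x_i}(l, x) for the candidate pair (l, x)
    cost           : query cost c_l(x)
    """
    rng = random.Random(seed)
    best_now = max(mu_n)
    gains = []
    for _ in range(n_mc):
        z = rng.gauss(0.0, 1.0)
        best_next = max(m + s * z for m, s in zip(mu_n, sigma_tilde))
        gains.append(best_next - best_now)
    return statistics.fmean(gains) / cost

# Two designs: the sample is informative about the currently-inferior design,
# so there is positive value in observing it.
factor = miso_kg_factor([0.0, 0.1], [1.0, 0.0], cost=2.0)
```

The factor is always nonnegative (the maximum of random perturbations can only help in expectation), and dividing by the cost is what lets a cheap, mildly informative IS beat an expensive, highly informative one.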
We parallelize the computation of MKG^n(ℓ, x) for fixed ℓ, x, enabling it to utilize multiple cores. Then (ℓ^(n+1), x^(n+1)) is obtained by computing MKG^n(ℓ, x) for all (ℓ, x) ∈ [M]_0 × A, a task that can be parallelized over multiple machines in a cluster. We begin by sorting the points in A in parallel by increasing value of σ~^n_{x'}(ℓ, x) (for fixed ℓ, x). Thereby we remove some points easily identified as dominated. A point x_j is dominated if max_i µ^(n)(0, x_i) + σ~^n_{x_i}(ℓ, x) · Z is unchanged for all Z when the maximum is taken excluding x_j. Note that a point x_j is dominated by x_k if σ~^n_{x_j}(ℓ, x) = σ~^n_{x_k}(ℓ, x) and µ^(n)(0, x_j) ≤ µ^(n)(0, x_k), since x_k has a higher expected value than x_j for any realization of Z. Let S be the sorted sequence without such dominated points. We will remove more dominated points later.

Since c_ℓ(x) is a constant for fixed ℓ, x, we may express the conditional expectation in Eq. (1) as E_n[max_i {a_i + b_i Z} − max_i a_i] / c_ℓ(x), where a_i = µ^(n)(0, x_i) and b_i = σ~^n_{x_i}(ℓ, x) for x_i ∈ S. We split S into consecutive sequences S_1, S_2, …, S_C, where C is the number of cores used for computing MKG^n(ℓ, x) and S_i, S_{i+1} overlap in one element: that is, for S_j = {x_{j1}, …, x_{jk}}, x_{(j−1)k} = x_{j1} and x_{jk} = x_{(j+1)1} hold. Each x_{ji} ∈ S_j specifies a linear function a_{ji} + b_{ji} Z (ordered by increasing slopes in S). We are interested in the realizations of Z for which a_{ji} + b_{ji} Z ≥ a_{i'} + b_{i'} Z for any i', and hence compute the intersections of these functions. The functions for x_{ji} and x_{j,i+1} intersect in d_{ji} = (a_{ji} − a_{j,i+1}) / (b_{j,i+1} − b_{ji}). Observe that if d_{ji} ≤ d_{j,i−1}, then a_{ji} + b_{ji} Z ≤ max{a_{j,i−1} + b_{j,i−1} Z, a_{j,i+1} + b_{j,i+1} Z} for all Z: x_{ji} is dominated and hence dropped from S_j. In this case we compute the intersection of the affine functions associated with x_{j,i−1} and x_{j,i+1} and iterate the process.
Points in S_j may be dominated by the rightmost (non-dominated) point in S_{j−1}. Thus, we compute the intersection of the rightmost point of S_{j−1} and the leftmost point of S_j, iteratively dropping all dominated points of S_j. If all points of S_j are dominated, we continue the scan with S_{j+1} and so on. Observe that we may stop this scan once there is a point that is not dominated, since the points in any sequence S_j have non-decreasing d-values. If some of the remaining points in S_j are dominated by a point in S_{j'} with j' < j − 1, then this will be determined when the scan initiated by S_{j'} reaches S_j. Subsequently, we check the other direction, i.e., whether x_{j1} dominates elements of S_{j−1}, starting with the rightmost element of S_{j−1}. These checks for dominance are performed in parallel for neighboring sequences.

[8] showed how to compute sequentially the expected maximum of a collection of affine functions. In particular, their Eq. (14) [8, p. 605] gives

E_n[max_i {a_i + b_i Z} − max_i a_i] = Σ_{j=1}^{C} Σ_{h=1}^{k−1} (b_{j,h+1} − b_{j,h}) · u(−|d_{j,h}|),

where u is defined as u(z) = zΦ(z) + φ(z) for the CDF Φ and PDF φ of the standard normal distribution. We compute the inner sums simultaneously; the computation of the outer sum could be parallelized by recursively adding pairs of inner sums, although we do not do so to avoid communication overhead. We summarize the parallel algorithm below.
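A single-core sketch of this computation (dominated-line removal followed by the closed-form sum) might look as follows. The helper names are ours, and this is illustrative rather than the paper's implementation.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def u(z):
    # u(z) = z * Phi(z) + phi(z)
    return z * norm_cdf(z) + norm_pdf(z)

def expected_max_gain(a, b):
    """E[max_i (a_i + b_i Z)] - max_i a_i for Z ~ N(0, 1).

    Lines are sorted by slope b; dominated lines are removed by keeping the
    breakpoints d increasing, then the closed-form sum above is applied.
    """
    lines = sorted(zip(b, a))                 # sort by slope
    kept, d = [], []                          # upper-envelope lines, breakpoints
    for bi, ai in lines:
        while kept:
            bk, ak = kept[-1]
            if bi == bk:                      # equal slopes: keep higher intercept
                if ai <= ak:
                    break
                kept.pop()
                if d:
                    d.pop()
                continue
            di = (ak - ai) / (bi - bk)        # intersection with last kept line
            if d and di <= d[-1]:             # last kept line is dominated
                kept.pop()
                d.pop()
                continue
            kept.append((bi, ai))
            d.append(di)
            break
        else:
            kept.append((bi, ai))             # first (or only surviving) line
    return sum((kept[h + 1][0] - kept[h][0]) * u(-abs(d[h]))
               for h in range(len(kept) - 1))

val = expected_max_gain(a=[0.1, 0.0], b=[0.0, 1.0])  # equals E[max(0.1, Z)] - 0.1
```

For the two-line example, E[max(0.1, Z)] − 0.1 = u(−0.1) ≈ 0.3509, which the function reproduces.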
The Parallel Algorithm to compute (ℓ^(n+1), x^(n+1)):
1. Scatter the pairs (ℓ, x) ∈ [M]_0 × A among the machines.
2. Each machine computes MKG^n(ℓ, x) for its pairs. To compute MKG^n(ℓ, x) in parallel:
a. Sort the points in A by ascending σ~^n_{x'}(ℓ, x) in parallel, thereby removing dominated points. Let S be the sorted sequence.
b. Split S into sequences S_1, …, S_C, where C is the number of cores used to compute MKG^n(ℓ, x). Each core computes Σ_{x_i∈S_c} (b_{i+1} − b_i) · u(−|d_i|) in parallel, then the partial sums are added to obtain E_n[max_i {a_i + b_i Z} − max_i a_i].
3. Determine (ℓ^(n+1), x^(n+1)) ∈ argmax_{ℓ∈[M]_0, x∈D} MKG^n(ℓ, x) in parallel.
3.3 Summary of the misoKG Algorithm.
1. Using samples from all information sources, estimate hyperparameters of the Gaussian process prior as described in Sect. B.2. Then calculate the posterior on f based on the prior and samples.
2. Until the budget for samples is exhausted do: determine the information source ℓ ∈ [M]_0 and the design x ∈ D that maximize the misoKG factor proposed in Eq. (1), and observe IS ℓ at x. Update the posterior distribution with the new observation.
3. Return argmax_{x'∈A} µ^(n)(0, x').
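The loop above can be sketched schematically; fit_gp, update, and mkg_factor below are hypothetical placeholders for the GP machinery of Sects. 3.1 and 3.2, so this is a structural sketch rather than a working optimizer.

```python
def miso_kg_loop(sources, A, initial_data, budget, fit_gp, update, mkg_factor):
    """Schematic misoKG loop over a discretized design set A.

    sources[l] must expose query(x) and cost(x); fit_gp builds the posterior
    from the initial data, update folds in one observation, and mkg_factor
    returns the cost-normalized expected gain of Eq. (1).
    """
    posterior = fit_gp(initial_data)          # step 1: MLE + posterior
    spent = 0.0
    while spent < budget:                     # step 2
        # pick the (IS, design) pair with the largest cost-normalized gain
        l, x = max(((l, x) for l in range(len(sources)) for x in A),
                   key=lambda pair: mkg_factor(posterior, *pair))
        y = sources[l].query(x)
        spent += sources[l].cost(x)
        posterior = update(posterior, (l, x, y))
    # step 3: recommend the design with the best posterior mean on IS 0
    return max(A, key=lambda x: posterior.mean(0, x))
```

Note that the recommendation in step 3 uses the posterior mean of the truth IS 0, regardless of which sources were actually queried.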
3.4 Provable Performance Guarantees.
The misoKG algorithm chooses an IS and an x such that the expected gain normalized by the query cost is maximized. Thus, misoKG is one-step Bayes optimal in this respect, by construction.

We establish an additive bound on the difference between misoKG's solution and the unknown optimum, as the number of queries N → ∞. For this argument we suppose that µ(ℓ, x) = 0 for all ℓ, x, and that Σ_0 is either the squared exponential kernel or a four-times differentiable Matérn kernel. Moreover, let x_OPT ∈ argmax_{x'∈D} f(0, x'), and d = max_{x'∈D} min_{x''∈A} dist(x', x'').

Theorem 1. Let x*_N ∈ A be the point that misoKG recommends in iteration N. For each p ∈ [0, 1) there is a constant K_p such that with probability p,

lim_{N→∞} f(0, x*_N) ≥ f(0, x_OPT) − K_p · d.

We point out that Frazier, Powell, and Dayanik [8] showed in their seminal work an analogous result for the case of a single information source with uniform query cost (Theorem 4 in [8]).
In Sect. A we prove the statement for the MISO setting that allows multiple information sources that
each have query costs c` (x) varying over the search domain D. This proof is simple and short. Also
note that Theorem 3 establishes consistency of misoKG for the special case that D is finite, since
then d = 0. Interestingly, we can compute K_p given Σ and p. Therefore, we can control the additive error K_p · d by increasing the density of A, leveraging the scalability of our parallel algorithm.
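For intuition about d (the fill distance of the discretization A inside D), it can be estimated numerically by checking a dense reference sample of D; this snippet is our own illustration.

```python
import math

def fill_distance(A, D_samples):
    """max over x in D_samples of the Euclidean distance to the nearest point of A."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return max(min(dist(x, a) for a in A) for x in D_samples)

# A regular 5x5 grid on [0,1]^2 versus a dense reference sample of the square.
grid = [(i / 4, j / 4) for i in range(5) for j in range(5)]
dense = [(i / 40, j / 40) for i in range(41) for j in range(41)]
d = fill_distance(grid, dense)  # worst case: the center of a grid cell
```

Refining the grid halves d (and hence the additive error bound K_p · d) at the price of a larger set A, which is where the parallel computation of the acquisition function pays off.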
4 Numerical Experiments
We now compare misoKG to other state-of-the-art MISO algorithms. We implemented misoKG's statistical model and acquisition function in Python 2.7 and C++, leveraging functionality from the Metrics Optimization Engine [23]. We used a gradient-based optimizer [28] that first finds an optimizer via multiple restarts for each IS ℓ separately and then picks the pair (ℓ^(n+1), x^(n+1)) with maximum misoKG factor among these. An implementation of our method is available at https://github.com/misoKG/.

We compare to misoEI of Lam et al. [18] and to MTBO+, an improved version of Multi-Task Bayesian Optimization proposed by Swersky et al. [34]. Following a recommendation of Snoek (2016), our implementation of MTBO+ uses an improved formulation of the acquisition function given by Hernández-Lobato et al. [12] and Snoek et al. [31], but otherwise is identical to MTBO; in particular, it uses the statistical model of [34]. Sect. E provides detailed descriptions of these algorithms.
Experimental Setup. We conduct experiments on the following test problems: (1) the 2-dimensional Rosenbrock function modified to fit the MISO setting by Lam et al. [18]; (2) a MISO
benchmark proposed by Swersky et al. [34] in which we optimize the 4 hyperparameters of a machine
learning algorithm, using a small, related set of smaller images as cheap IS; (3) an assemble-to-order
problem from Hong and Nelson [13] in which we optimize an 8-dimensional target stock vector to
maximize the expected daily profit of a company as estimated by a simulator.
In MISO settings the amount of initial data that one can use to inform the methods about each
information source is typically dictated by the application, in particular by resource constraints and
the availability of the respective source. In our experiments all methods were given identical initial
datasets for each information source in every replication; these sets were drawn randomly via Latin
Hypercube designs. For the sake of simplicity, we provided the same number of points for each
IS, set to 2.5 points per dimension of the design space D. Regarding the kernel and mean function,
MTBO+ uses the settings provided in [31]. The other algorithms used the squared exponential kernel
and a constant mean function set to the average of a random sample.
We report the ?gain? over the best initial solution, that is the true objective value of the respective
design that a method would return at each iteration minus the best value in the initial data set. If
the true objective value is not known for a given design, we report the value obtained from the
information source of highest fidelity. This gain is plotted as a function of the total cost, that is the
cumulative cost for invoking the information sources plus the fixed cost for the initial data; this metric
naturally generalizes the number of function evaluations prevalent in Bayesian optimization. Note
that the computational overhead of choosing the next information source and sample is omitted, as
it is negligible compared to invoking an information source in real-world applications. Error bars
are shown at the mean ± 2 standard errors, averaged over at least 100 runs of each algorithm. For deterministic sources a jitter of 10^{-6} is added to avoid numerical issues during matrix inversion.
4.1 The Rosenbrock Benchmarks
We consider the design space D = [-2, 2]^2, and M = 2 information sources. IS 0 is the Rosenbrock
function g(x) = (1 - x_1)^2 + 100 (x_2 - x_1^2)^2 plus optional Gaussian noise u · ε. IS 1 returns
g(x) + v · sin(10 x_1 + 5 x_2), where the additional oscillatory component serves as model discrepancy.
We assume a cost of 1000 for each query to IS 0 and a cost of 1 for IS 1.
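A minimal sketch of this benchmark, assuming the noise term is u · ε with ε drawn from a standard normal distribution (the exact noise model beyond "Gaussian" is not restated here):

```python
import math
import random

def rosenbrock(x1, x2):
    """The truth objective g(x) on the design space D = [-2, 2]^2."""
    return (1 - x1) ** 2 + 100 * (x2 - x1 ** 2) ** 2

def is0(x1, x2, u=0.0):
    """IS 0: the truth plus optional Gaussian noise; query cost 1000."""
    return rosenbrock(x1, x2) + u * random.gauss(0.0, 1.0)

def is1(x1, x2, v=1.0):
    """IS 1: cheap biased source with an oscillatory discrepancy; query cost 1."""
    return rosenbrock(x1, x2) + v * math.sin(10 * x1 + 5 * x2)
```

With u = 0 and v = 0 both sources coincide with the truth; the settings in the text (e.g. u = 1, v = 2 in the second setup) control the noise and the model discrepancy.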
Since all methods converged to good solutions within a few queries, we investigate the ratio of gain to
cost: Fig. 1 (l) displays the gain of each method over the best initial solution as a function of the total
cost inflicted by querying information sources. The new method misoKG offers a significantly better
gain per unit cost and finds an almost optimal solution typically within 5–10 samples. Interestingly,
misoKG relies only on cheap samples, proving its ability to successfully handle uncertainty. MTBO+,
[Figure 1 appears here: two panels plotting Gain against Total Cost for misoKG, MTBO+, and misoEI.]
Figure 1: (l) The Rosenbrock benchmark with the parameter setting of [18]: misoKG offers an
excellent gain-to-cost ratio and outperforms its competitors substantially. (r) The Rosenbrock
benchmark with the alternative setup.
on the other hand, struggles initially but then eventually obtains a near-optimal solution, too. To this
end, it usually makes one or two queries of the expensive truth source after about 40 steps. misoEI
shows an odd behavior: it takes several queries, one of them to IS 0, before it improves over the best
initial design for the first time. Then it jumps to a very good solution and subsequently samples only
the cheap IS.
For the second setup, we set u = 1, v = 2, and suppose for IS 0 uniform noise of λ0(x) = 1
and query cost c0(x) = 50. Now the difference in costs is much smaller, while the variance is
considerably bigger. The results are displayed in Fig. 1 (r): as for the first configuration, misoKG
outperforms the other methods from the start. Interestingly, misoEI's performance is drastically
worse than in the first setup, since it only queries the expensive truth. Looking closer, we
see that misoKG initially queries only the cheap information source IS 1 until it comes close to an
optimal value after about five samples. It starts to query IS 0 occasionally later.
4.2 The Image Classification Benchmark
This classification problem was introduced by Swersky et al. [34] to demonstrate that MTBO can reduce
the cost of hyperparameter optimization by leveraging a small dataset as information source. The
goal is to optimize four hyperparameters of the logistic regression algorithm [36] using a stochastic
gradient method with mini-batches (the learning rate, the L2-regularization parameter, the batch size,
and the number of epochs) to minimize the classification error on the MNIST dataset [21]. This
dataset contains 70,000 images of handwritten digits: each image has 784 pixels. IS 1 uses the USPS
dataset [38] of about 9000 images with 256 pixels each. The query costs are 4.5 for IS 1 and 43.69
for IS 0. A closer examination shows that IS 1 is subject to considerable bias with respect to IS 0,
making it a challenge for MISO algorithms.
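The setup can be summarized as a small configuration; the key and hyperparameter names below are chosen for illustration, since the text fixes only the datasets, the per-query costs, and the four tuned hyperparameters.

```python
# Illustrative encoding of the benchmark; names are placeholders.
SOURCES = {
    0: {"dataset": "MNIST", "images": 70_000, "pixels_per_image": 784, "cost": 43.69},
    1: {"dataset": "USPS", "images": 9_000, "pixels_per_image": 256, "cost": 4.5},
}

HYPERPARAMETERS = ["learning_rate", "l2_regularization", "batch_size", "num_epochs"]

def query_cost(source):
    """Cost charged for one evaluation of the given information source."""
    return SOURCES[source]["cost"]
```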
Fig. 2 (l) summarizes performance: initially, misoKG and MTBO+ are on par. Both clearly outperform
misoEI, which was therefore stopped after 50 iterations. misoKG and MTBO+ continued for 150
steps (with a lower number of replications). misoKG usually achieves an optimal test error of
about 7.1% on the MNIST test set after about 80 queries, matching the classification performance
of the best setting reported by Swersky et al. [34]. Moreover, misoKG achieves better solutions than
MTBO+ at the same costs. Note that the results in [34] show that MTBO+ will also converge to the
optimum eventually.
4.3 The Assemble-To-Order Benchmark
The assemble-to-order (ATO) benchmark is a reinforcement learning problem from a business
application where the goal is to optimize an 8-dimensional target level vector over [0, 20]8 (see
Sect. G for details). We set up three information sources: IS 0 and 2 use the discrete event simulator
of Xie et al. [42], whereas the cheapest source IS 1 invokes the implementation of Hong and Nelson.
IS 0 models the truth.
[Figure 2 appears here: (l) Test error against log(Total Cost); (r) Gain against log(Total Cost); both panels compare misoKG, MTBO+, and misoEI.]
Figure 2: (l) The performance on the image classification benchmark of [34]. misoKG achieves better
test errors after about 80 steps and converges to the global optimum. (r) misoKG outperforms the
other algorithms on the assemble-to-order benchmark that has significant model discrepancy.
The two simulators differ subtly in the model of the inventory system. However, the effect in estimated
objective value is significant: on average the outputs of both simulators at the same target vector differ
by about 5% of the score of the global optimum, which is about 120, whereas the largest observed
bias out of 1000 random samples was 31.8. Thus, we are witnessing a significant model discrepancy.
Fig. 2 (r) summarizes the performances. misoKG outperforms the other algorithms from the start:
misoKG averages at a gain of 26.1, but inflicts only an average query cost of 54.6 to the information
sources. This is only 6.3% of the query cost that misoEI requires to achieve a comparable score.
Interestingly, misoKG and MTBO+ utilize mostly the cheap biased IS, and therefore are able to
obtain significantly better gain to cost ratios than misoEI. misoKG typically first calls IS 2 after
about 60–80 steps. In total, misoKG queries IS 2 about ten times within the first 150 steps; in some
replications misoKG makes one late call to IS 0 when it has already converged. Our interpretation is
that misoKG exploits the cheap, biased IS 1 to zoom in on the global optimum and switches to the
unbiased but noisy IS 2 to identify the optimal solution exactly. This is the expected (and desired)
behavior for misoKG when the uncertainty of f(0, x*) is not expected to be reduced sufficiently by
queries to IS 1. MTBO+ trades off the gain versus cost differently: it queries IS 0 once or twice after 100
steps and directs all other queries to IS 1, which might explain the observed lower performance.
misoEI, which employs a two-step heuristic for trading off predicted gain and query cost, almost
always chose to evaluate the most expensive IS.
5 Conclusion
We have presented a novel algorithm for MISO that uses a novel mean function and covariance matrix
motivated by a MISO-specific generative model. We have also proposed a novel acquisition function
that extends the knowledge gradient to the MISO setting and comes with a fast parallel method for
computing it. Moreover, we have provided a theoretical guarantee on the solution quality delivered
by this algorithm, and demonstrated through numerical experiments that it improves significantly
over the state of the art.
Acknowledgments
This work was partially supported by NSF CAREER CMMI-1254298, NSF CMMI-1536895, NSF
IIS-1247696, AFOSR FA9550-12-1-0200, AFOSR FA9550-15-1-0038, and AFOSR FA9550-16-1-0046.
References
[1] D. Allaire and K. Willcox. A mathematical and computational framework for multifidelity
design and analysis with computer models. International Journal for Uncertainty Quantification,
4(1), 2014.
[2] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for vector-valued functions: A review.
Foundations and Trends in Machine Learning, 4(3):195–266, 2012.
[3] E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian process prediction. In Advances
in Neural Information Processing Systems, pages 153–160, 2007.
[4] J. Brynjarsdottir and A. O'Hagan. Learning about physical parameters: the importance of model
discrepancy. Inverse Problems, 30(11), 2014.
[5] E. Çınlar. Probability and Stochastics, volume 261 of Graduate Texts in Mathematics. Springer,
2011.
[6] A. I. Forrester, A. Sóbester, and A. J. Keane. Multi-fidelity optimization via surrogate modelling.
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering
Sciences, 463(2088):3251–3269, 2007.
[7] P. I. Frazier, W. B. Powell, and S. Dayanik. A knowledge-gradient policy for sequential
information collection. SIAM Journal on Control and Optimization, 47(5):2410–2439, 2008.
[8] P. I. Frazier, W. B. Powell, and S. Dayanik. The Knowledge Gradient Policy for Correlated
Normal Beliefs. INFORMS Journal on Computing, 21(4):599–613, 2009.
[9] S. Ghosal and A. Roy. Posterior consistency of Gaussian process prior for nonparametric binary
regression. The Annals of Statistics, 34(5):2413–2429, 2006.
[10] P. Goovaerts. Geostatistics for Natural Resources Evaluation. Oxford University, 1997.
[11] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. The
Journal of Machine Learning Research, 13(1):1809–1837, 2012.
[12] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search
for efficient global optimization of black-box functions. In Advances in Neural Information
Processing Systems, pages 918–926, 2014.
[13] L. J. Hong and B. L. Nelson. Discrete optimization via simulation using COMPASS. Operations
Research, 54(1):115–129, 2006.
[14] D. Huang, T. Allen, W. Notz, and R. Miller. Sequential kriging optimization using multiple-fidelity evaluations. Structural and Multidisciplinary Optimization, 32(5):369–382, 2006.
[15] K. Kandasamy, G. Dasarathy, J. B. Oliva, J. Schneider, and B. Poczos. Gaussian process bandit
optimisation with multi-fidelity evaluations. In Advances in Neural Information Processing
Systems, 2016. The code is available at https://github.com/kirthevasank/mf-gp-ucb.
Last Accessed on 04/22/2017.
[16] M. C. Kennedy and A. O'Hagan. Predicting the output from a complex computer code when
fast approximations are available. Biometrika, 87(1):1–13, 2000.
[17] A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine
learning hyperparameters on large datasets. CoRR, abs/1605.07079, 2016.
[18] R. Lam, D. Allaire, and K. Willcox. Multifidelity optimization using statistical surrogate
modeling for non-hierarchical information sources. In 56th AIAA/ASCE/AHS/ASC Structures,
Structural Dynamics, and Materials Conference, 2015.
[19] L. Le Gratiet and C. Cannamela. Cokriging-based sequential design strategies using fast
cross-validation techniques for multi-fidelity computer codes. Technometrics, 57(3):418–427,
2015.
[20] L. Le Gratiet and J. Garnier. Recursive co-kriging model for design of computer experiments
with multiple levels of fidelity. International Journal for Uncertainty Quantification, 4(5), 2014.
[21] Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database of handwritten digits, 2017.
http://yann.lecun.com/exdb/mnist/. Last Accessed on 05/15/2017.
[22] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):
583–601, 2002.
[23] MOE. Metrics optimization engine. http://yelp.github.io/MOE/, 2016. Last Accessed
on 05/15/2017.
[24] V. Picheny, D. Ginsbourger, Y. Richet, and G. Caplin. Quantile-based optimization of noisy
computer experiments with tunable precision. Technometrics, 55(1):2–13, 2013.
[25] M. Poloczek, J. Wang, and P. I. Frazier. Warm starting bayesian optimization. In Winter
Simulation Conference (WSC), pages 770–781. IEEE, 2016. Also available on arXiv
https://arxiv.org/abs/1608.03585.
[26] H. Qu, I. O. Ryzhov, M. C. Fu, and Z. Ding. Sequential selection with unknown correlation
structures. Operations Research, 63(4):931–948, 2015.
[27] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press,
2006. ISBN 0-262-18253-X.
[28] W. R. Scott, P. I. Frazier, and W. B. Powell. The correlated knowledge gradient for simulation
optimization of continuous parameters using gaussian process regression. SIAM Journal on
Optimization, 21(3):996–1026, 2011.
[29] A. Shah and Z. Ghahramani. Parallel predictive entropy search for batch global optimization of
expensive objective functions. In Advances in Neural Information Processing Systems, pages
3330–3338, 2015.
[30] J. Snoek. Personal communication, 2016.
[31] J. Snoek and et al. Spearmint. http://github.com/HIPS/Spearmint, 2017. Last Accessed
on 05/15/2017.
[32] J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning
algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
[33] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the
bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
[34] K. Swersky, J. Snoek, and R. P. Adams. Multi-task bayesian optimization. In Advances in
Neural Information Processing Systems, pages 2004–2012, 2013.
[35] Y.-W. Teh, M. Seeger, and M. Jordan. Semiparametric latent factor models. In Artificial
Intelligence and Statistics 10, 2005.
[36] Theano. Theano: Logistic regression, 2017. http://deeplearning.net/tutorial/code/
logistic_sgd.py. Last Accessed on 05/16/2017.
[37] S. Toscano-Palmerin and P. I. Frazier. Stratified bayesian optimization. In Proceedings of the
12th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific
Computing, 2016. Accepted for Publication. Also available at https://arxiv.org/abs/1602.02338.
[38] USPS. USPS dataset, 2017. http://mldata.org/repository/data/viewslug/usps/.
Last Accessed on 05/16/2017.
[39] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization
of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009.
[40] R. L. Winkler. Combining probability distributions from dependent information sources.
Management Science, 27(4):479–488, 1981.
[41] J. Wu, M. Poloczek, A. G. Wilson, and P. I. Frazier. Bayesian optimization with gradients. In
Advances in Neural Information Processing Systems, 2017. Accepted for Publication. Also
available at https://arxiv.org/abs/1703.04389.
[42] J. Xie, P. I. Frazier, and S. Chick. Assemble to order simulator. http://simopt.org/wiki/
index.php?title=Assemble_to_Order&oldid=447, 2012. Last Accessed on 05/16/2017.
Deep Reinforcement Learning
from Human Preferences
Paul F Christiano
OpenAI
[email protected]
Miljan Martic
DeepMind
[email protected]
Jan Leike
DeepMind
[email protected]
Shane Legg
DeepMind
[email protected]
Tom B Brown
Google Brain*
[email protected]
Dario Amodei
OpenAI
[email protected]
Abstract
For sophisticated reinforcement learning (RL) systems to interact usefully with
real-world environments, we need to communicate complex goals to these systems.
In this work, we explore goals defined in terms of (non-expert) human preferences
between pairs of trajectory segments. We show that this approach can effectively
solve complex RL tasks without access to the reward function, including Atari
games and simulated robot locomotion, while providing feedback on less than
1% of our agent's interactions with the environment. This reduces the cost of
human oversight far enough that it can be practically applied to state-of-the-art
RL systems. To demonstrate the flexibility of our approach, we show that we can
successfully train complex novel behaviors with about an hour of human time.
These behaviors and environments are considerably more complex than any which
have been previously learned from human feedback.
1 Introduction
Recent success in scaling reinforcement learning (RL) to large problems has been driven in domains
that have a well-specified reward function (Mnih et al., 2015, 2016; Silver et al., 2016). Unfortunately,
many tasks involve goals that are complex, poorly-defined, or hard to specify. Overcoming this
limitation would greatly expand the possible impact of deep RL and could increase the reach of
machine learning more broadly.
For example, suppose that we wanted to use reinforcement learning to train a robot to clean a table or
scramble an egg. It's not clear how to construct a suitable reward function, which will need to be a
function of the robot?s sensors. We could try to design a simple reward function that approximately
captures the intended behavior, but this will often result in behavior that optimizes our reward
function without actually satisfying our preferences. This difficulty underlies recent concerns about
misalignment between our values and the objectives of our RL systems (Bostrom, 2014; Russell,
2016; Amodei et al., 2016). If we could successfully communicate our actual objectives to our agents,
it would be a significant step towards addressing these concerns.
If we have demonstrations of the desired task, we can use inverse reinforcement learning (Ng and
Russell, 2000) or imitation learning to copy the demonstrated behavior. But these approaches are not
directly applicable to behaviors that are difficult for humans to demonstrate (such as controlling a
robot with many degrees of freedom but non-human morphology).
* Work done while at OpenAI.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
An alternative approach is to allow a human to provide feedback on our system?s current behavior
and to use this feedback to define the task. In principle this fits within the paradigm of reinforcement
learning, but using human feedback directly as a reward function is prohibitively expensive for RL
systems that require hundreds or thousands of hours of experience. In order to practically train deep
RL systems with human feedback, we need to decrease the amount of feedback required by several
orders of magnitude.
We overcome this difficulty by asking humans to compare possible trajectories of the agent, using
that data to learn a reward function, and optimizing the learned reward function with RL.
This basic approach has been explored in the past, but we confront the challenges involved in scaling
it up to modern deep RL and demonstrate by far the most complex behaviors yet learned from human
feedback.
Our experiments take place in two domains: Atari games in the Arcade Learning Environment (Bellemare et al., 2013), and robotics tasks in the physics simulator MuJoCo (Todorov et al., 2012). We
show that a small amount of feedback from a non-expert human, ranging from fifteen minutes to five
hours, suffices to learn both standard RL tasks and novel hard-to-specify behaviors such as performing
a backflip or driving with the flow of traffic.
1.1 Related Work
A long line of work studies reinforcement learning from human ratings or rankings, including Akrour
et al. (2011), Pilarski et al. (2011), Akrour et al. (2012), Wilson et al. (2012), Sugiyama et al. (2012),
Wirth and Fürnkranz (2013), Daniel et al. (2015), El Asri et al. (2016), Wang et al. (2016), and
Wirth et al. (2016). Other lines of research consider the general problem of reinforcement learning
from preferences rather than absolute reward values (Fürnkranz et al., 2012; Akrour et al., 2014;
Wirth et al., 2016), and optimizing using human preferences in settings other than reinforcement
learning (Machwe and Parmee, 2006; Secretan et al., 2008; Brochu et al., 2010; Sørensen et al.,
2016).
Our algorithm follows the same basic approach as Akrour et al. (2012) and Akrour et al. (2014), but
considers much more complex domains and behaviors. The complexity of our environments forces us
to use different RL algorithms, reward models, and training strategies. One notable difference is that
Akrour et al. (2012) and Akrour et al. (2014) elicit preferences over whole trajectories rather than
short clips, and so would require about an order of magnitude more human time per data point. Our
approach to feedback elicitation closely follows Wilson et al. (2012). However, Wilson et al. (2012)
assumes that the reward function is the distance to some unknown (linear) "target" policy, and is
never tested with real human feedback.
TAMER (Knox, 2012; Knox and Stone, 2013) also learns a reward function from human feedback,
but learns from ratings rather than comparisons, has the human observe the agent as it behaves,
and has been applied to settings where the desired policy can be learned orders of magnitude more
quickly.
Compared to all prior work, our key contribution is to scale human feedback up to deep reinforcement
learning and to learn much more complex behaviors. This fits into a recent trend of scaling reward
learning methods to large deep learning systems, for example inverse RL (Finn et al., 2016), imitation
learning (Ho and Ermon, 2016; Stadie et al., 2017), semi-supervised skill generalization (Finn et al.,
2017), and bootstrapping RL from demonstrations (Silver et al., 2016; Hester et al., 2017).
2 Preliminaries and Method
2.1 Setting and Goal
We consider an agent interacting with an environment over a sequence of steps; at each time t the
agent receives an observation o_t ∈ O from the environment and then sends an action a_t ∈ A to the
environment.
In traditional reinforcement learning, the environment would also supply a reward r_t ∈ R and the
agent's goal would be to maximize the discounted sum of rewards. Instead of assuming that the
environment produces a reward signal, we assume that there is a human overseer who can express
preferences between trajectory segments. A trajectory segment is a sequence of observations and
actions, σ = ((o_0, a_0), (o_1, a_1), . . . , (o_{k-1}, a_{k-1})) ∈ (O × A)^k. Write σ^1 ≻ σ^2 to indicate that the
human preferred trajectory segment σ^1 to trajectory segment σ^2. Informally, the goal of the agent is
to produce trajectories which are preferred by the human, while making as few queries as possible to
the human.
More precisely, we will evaluate our algorithms' behavior in two ways:
Quantitative: We say that preferences ≻ are generated by a reward function^2 r : O × A → R if
    (o^1_0, a^1_0, . . . , o^1_{k-1}, a^1_{k-1}) ≻ (o^2_0, a^2_0, . . . , o^2_{k-1}, a^2_{k-1})
whenever
    r(o^1_0, a^1_0) + · · · + r(o^1_{k-1}, a^1_{k-1}) > r(o^2_0, a^2_0) + · · · + r(o^2_{k-1}, a^2_{k-1}).
If the human's preferences are generated by a reward function r, then our agent ought to
receive a high total reward according to r. So if we know the reward function r, we can
evaluate the agent quantitatively. Ideally the agent will achieve reward nearly as high as if it
had been using RL to optimize r.
Qualitative: Sometimes we have no reward function by which we can quantitatively evaluate
behavior (this is the situation where our approach would be practically useful). In these
cases, all we can do is qualitatively evaluate how well the agent satisfies the human's
preferences. In this paper, we will start from a goal expressed in natural language, ask a
human to evaluate the agent?s behavior based on how well it fulfills that goal, and then
present videos of agents attempting to fulfill that goal.
Our model based on trajectory segment comparisons is very similar to the trajectory preference
queries used in Wilson et al. (2012), except that we don't assume that we can reset the system to an arbitrary state$^3$ and so our segments generally begin from different states. This complicates the
interpretation of human comparisons, but we show that our algorithm overcomes this difficulty even
when the human raters have no understanding of our algorithm.
2.2 Our Method
At each point in time our method maintains a policy $\pi : \mathcal{O} \to \mathcal{A}$ and a reward function estimate $\hat{r} : \mathcal{O} \times \mathcal{A} \to \mathbb{R}$, each parametrized by deep neural networks.
These networks are updated by three processes:
1. The policy $\pi$ interacts with the environment to produce a set of trajectories $\{\tau^1, \ldots, \tau^i\}$. The parameters of $\pi$ are updated by a traditional reinforcement learning algorithm, in order to maximize the sum of the predicted rewards $r_t = \hat{r}(o_t, a_t)$.
2. We select pairs of segments $\sigma^1, \sigma^2$ from the trajectories $\{\tau^1, \ldots, \tau^i\}$ produced in step 1,
and send them to a human for comparison.
3. The parameters of the mapping $\hat{r}$ are optimized via supervised learning to fit the comparisons
collected from the human so far.
These processes run asynchronously, with trajectories flowing from process (1) to process (2), human comparisons flowing from process (2) to process (3), and parameters for $\hat{r}$ flowing from process (3) to process (1). The following subsections provide details on each of these processes.
$^2$ Here we assume that the reward is a function of the observation and action. In our experiments in Atari environments, we instead assume the reward is a function of the preceding 4 observations. In a general partially observable environment, we could instead consider reward functions that depend on the whole sequence of observations, and model this reward function with a recurrent neural network.
$^3$ Wilson et al. (2012) also assumes the ability to sample reasonable initial states. But we work with high dimensional state spaces for which random states will not be reachable and the intended policy inhabits a low-dimensional manifold.
2.2.1 Optimizing the Policy
After using $\hat{r}$ to compute rewards, we are left with a traditional reinforcement learning problem. We can solve this problem using any RL algorithm that is appropriate for the domain. One subtlety is that the reward function $\hat{r}$ may be non-stationary, which leads us to prefer methods which are robust
to changes in the reward function. This led us to focus on policy gradient methods, which have been
applied successfully for such problems (Ho and Ermon, 2016).
In this paper, we use advantage actor-critic (A2C; Mnih et al., 2016) to play Atari games, and trust
region policy optimization (TRPO; Schulman et al., 2015) to perform simulated robotics tasks. In
each case, we used parameter settings which have been found to work well for traditional RL tasks.
The only hyperparameter which we adjusted was the entropy bonus for TRPO. This is because TRPO
relies on the trust region to ensure adequate exploration, which can lead to inadequate exploration if
the reward function is changing.
We normalized the rewards produced by $\hat{r}$ to have zero mean and constant standard deviation. This is
a typical preprocessing step which is particularly appropriate here since the position of the rewards is
underdetermined by our learning problem.
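A minimal version of this normalization step (a NumPy sketch; the target standard deviation and the epsilon guard are free choices here, not values from the paper):

```python
import numpy as np

def normalize_rewards(rewards, target_std=1.0, eps=1e-8):
    """Shift predicted rewards to zero mean and rescale them to a fixed
    standard deviation; their absolute offset and scale carry no information
    in this learning problem."""
    rewards = np.asarray(rewards, dtype=float)
    centered = rewards - rewards.mean()
    return target_std * centered / (centered.std() + eps)

normalized = normalize_rewards([1.0, 2.0, 3.0, 4.0])
print(normalized.mean(), normalized.std())  # ~0.0 and ~1.0
```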
2.2.2 Preference Elicitation
The human overseer is given a visualization of two trajectory segments, in the form of short movie
clips. In all of our experiments, these clips are between 1 and 2 seconds long.
The human then indicates which segment they prefer, that the two segments are equally good, or that
they are unable to compare the two segments.
The human judgments are recorded in a database $\mathcal{D}$ of triples $\left(\sigma^1, \sigma^2, \mu\right)$, where $\sigma^1$ and $\sigma^2$ are the two segments and $\mu$ is a distribution over $\{1, 2\}$ indicating which segment the user preferred. If the human selects one segment as preferable, then $\mu$ puts all of its mass on that choice. If the human marks the segments as equally preferable, then $\mu$ is uniform. Finally, if the human marks the segments
as incomparable, then the comparison is not included in the database.
2.2.3 Fitting the Reward Function
We can interpret a reward function estimate $\hat{r}$ as a preference-predictor if we view $\hat{r}$ as a latent factor explaining the human's judgments and assume that the human's probability of preferring a segment $\sigma^i$ depends exponentially on the value of the latent reward summed over the length of the clip:$^4$

$$\hat{P}\left[\sigma^1 \succ \sigma^2\right] = \frac{\exp \sum_t \hat{r}\left(o^1_t, a^1_t\right)}{\exp \sum_t \hat{r}\left(o^1_t, a^1_t\right) + \exp \sum_t \hat{r}\left(o^2_t, a^2_t\right)}. \qquad (1)$$
We choose $\hat{r}$ to minimize the cross-entropy loss between these predictions and the actual human labels:

$$\mathrm{loss}(\hat{r}) = -\sum_{\left(\sigma^1, \sigma^2, \mu\right) \in \mathcal{D}} \left[\mu(1) \log \hat{P}\left[\sigma^1 \succ \sigma^2\right] + \mu(2) \log \hat{P}\left[\sigma^2 \succ \sigma^1\right]\right].$$
This follows the Bradley-Terry model (Bradley and Terry, 1952) for estimating score functions from
pairwise preferences, and is the specialization of the Luce-Shephard choice rule (Luce, 2005; Shepard,
1957) to preferences over trajectory segments.
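To make Equation 1 and the cross-entropy loss concrete, here is a NumPy sketch over precomputed per-segment reward sums (all function names are illustrative; `(mu1, mu2)` is the label distribution from the database of comparisons):

```python
import numpy as np

def preference_prob(rhat_sum_1, rhat_sum_2):
    """P_hat[segment 1 preferred] under Equation 1, computed stably by
    subtracting the larger exponent before exponentiating."""
    m = max(rhat_sum_1, rhat_sum_2)
    e1 = np.exp(rhat_sum_1 - m)
    e2 = np.exp(rhat_sum_2 - m)
    return e1 / (e1 + e2)

def comparison_loss(database):
    """Cross-entropy loss over triples (rhat_sum_1, rhat_sum_2, (mu1, mu2)),
    where (mu1, mu2) is the human's label distribution over {1, 2}."""
    total = 0.0
    for s1, s2, (mu1, mu2) in database:
        p1 = preference_prob(s1, s2)
        total -= mu1 * np.log(p1) + mu2 * np.log(1.0 - p1)
    return total

# A rater who prefers the segment with the higher summed predicted reward:
db = [(2.0, 0.0, (1.0, 0.0))]
print(comparison_loss(db))  # small, since the model agrees with the label
```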
Our actual algorithm incorporates a number of modifications to this basic approach, which early
experiments discovered to be helpful and which are analyzed in Section 3.3:
- We fit an ensemble of predictors, each trained on $|\mathcal{D}|$ triples sampled from $\mathcal{D}$ with replacement. The estimate $\hat{r}$ is defined by independently normalizing each of these predictors and then averaging the results.
- A fraction of 1/e of the data is held out to be used as a validation set for each predictor. We use $\ell_2$ regularization and adjust the regularization coefficient to keep the validation loss between 1.1 and 1.5 times the training loss. In some domains we also apply dropout for regularization.
$^4$ Equation 1 does not use discounting, which could be interpreted as modeling the human to be indifferent about when things happen in the trajectory segment. Using explicit discounting or inferring the human's discount function would also be reasonable choices.
- Rather than applying a softmax directly as described in Equation 1, we assume there is a 10% chance that the human responds uniformly at random. Conceptually this adjustment is needed because human raters have a constant probability of making an error, which doesn't decay to 0 as the difference in reward becomes extreme.
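This adjustment amounts to mixing the model's prediction with a uniform response (a one-line sketch; the 10% error rate is the value stated above):

```python
def adjusted_prob(p_model, error_rate=0.1):
    """Probability the rater prefers segment 1, assuming that with probability
    `error_rate` the human answers uniformly at random and otherwise follows
    the model's prediction p_model."""
    return error_rate * 0.5 + (1.0 - error_rate) * p_model

print(adjusted_prob(1.0))  # stays capped near 0.95 even when the model is certain
```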
2.2.4 Selecting Queries
We decide how to query preferences based on an approximation to the uncertainty in the reward
function estimator, similar to Daniel et al. (2014): we sample a large number of pairs of trajectory
segments of length k from the latest agent-environment interactions, use each reward predictor
in our ensemble to predict which segment will be preferred from each pair, and then select those trajectories for which the predictions have the highest variance across ensemble members.$^5$ This is a
crude approximation and the ablation experiments in Section 3 show that in some tasks it actually
impairs performance. Ideally, we would want to query based on the expected value of information of
the query (Akrour et al., 2012; Krueger et al., 2016), but we leave it to future work to explore this
direction further.
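A sketch of this selection rule (illustrative names; each row of `ensemble_probs` holds one ensemble member's predicted probability, per candidate pair, that the pair's first segment is preferred):

```python
import numpy as np

def select_queries(candidate_pairs, ensemble_probs, num_queries):
    """Return the `num_queries` pairs whose preference predictions vary most
    across ensemble members.  `ensemble_probs` has shape
    (num_members, num_pairs): entry (m, i) is member m's predicted
    probability that the first segment of pair i is preferred."""
    variance = np.asarray(ensemble_probs).var(axis=0)  # disagreement per pair
    ranked = np.argsort(variance)[::-1]                # most disputed first
    return [candidate_pairs[i] for i in ranked[:num_queries]]

pairs = ["pair_a", "pair_b", "pair_c"]
probs = [[0.9, 0.5, 0.1],   # member 1
         [0.9, 0.2, 0.1],   # member 2
         [0.9, 0.8, 0.2]]   # member 3
print(select_queries(pairs, probs, 1))  # ['pair_b']: the members disagree most
```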
3 Experimental Results
We implemented our algorithm in TensorFlow (Abadi et al., 2016). We interface with MuJoCo (Todorov et al., 2012) and the Arcade Learning Environment (Bellemare et al., 2013) through
the OpenAI Gym (Brockman et al., 2016).
3.1 Reinforcement Learning Tasks with Unobserved Rewards
In our first set of experiments, we attempt to solve a range of benchmark tasks for deep RL without
observing the true reward. Instead, the agent learns about the goal of the task only by asking a human
which of two trajectory segments is better. Our goal is to solve the task in a reasonable amount of
time using as few queries as possible.
In our experiments, feedback is provided by contractors who are given a 1-2 sentence description
of each task before being asked to compare several hundred to several thousand pairs of trajectory
segments for that task (see Appendix B for the exact instructions given to contractors). Each trajectory
segment is between 1 and 2 seconds long. Contractors responded to the average query in 3-5 seconds,
and so the experiments involving real human feedback required between 30 minutes and 5 hours of
human time.
For comparison, we also run experiments using a synthetic oracle whose preferences are generated
(in the sense of Section 2.1) by the real reward.$^6$ We also compare to the baseline of RL training
using the real reward. Our aim here is not to outperform but rather to do nearly as well as RL without
access to reward information and instead relying on much scarcer feedback. Nevertheless, note that
feedback from real humans does have the potential to outperform RL (and as shown below it actually
does so on some tasks), because the human feedback might provide a better-shaped reward.
We describe the details of our experiments in Appendix A, including model architectures, modifications to the environment, and the RL algorithms used to optimize the policy.
3.1.1 Simulated Robotics
The first tasks we consider are eight simulated robotics tasks, implemented in MuJoCo (Todorov
et al., 2012), and included in OpenAI Gym (Brockman et al., 2016). We made small modifications
to these tasks in order to avoid encoding information about the task in the environment itself (the
modifications are described in detail in Appendix A). The reward functions in these tasks are quadratic
functions of distances, positions and velocities, and most are linear. We included a simple cartpole
$^5$ Note that trajectory segments almost never start from the same state.
$^6$ In the case of Atari games with sparse rewards, it is relatively common for two clips to both have zero reward in which case the oracle outputs indifference. Because we considered clips rather than individual states, such ties never made up a large majority of our data. Moreover, ties still provide significant information to the reward predictor as long as they are not too common.
Figure 1: Results on MuJoCo simulated robotics as measured on the tasks' true reward. We compare
our method using real human feedback (purple), our method using synthetic feedback provided by
an oracle (shades of blue), and reinforcement learning using the true reward function (orange). All
curves are the average of 5 runs, except for the real human feedback, which is a single run, and
each point is the average reward over five consecutive batches. For Reacher and Cheetah feedback
was provided by an author due to time constraints. For all other tasks, feedback was provided by
contractors unfamiliar with the environments and with our algorithm. The irregular progress on
Hopper is due to one contractor deviating from the typical labeling schedule.
task ("pendulum") for comparison, since this is representative of the complexity of tasks studied in
prior work.
Figure 1 shows the results of training our agent with 700 queries to a human rater, compared to
learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward.
With 700 labels we are able to nearly match reinforcement learning on all of these tasks. Training
with learned reward functions tends to be less stable and higher variance, while having a comparable
mean performance.
Surprisingly, by 1400 labels our algorithm performs slightly better than if it had simply been given
the true reward, perhaps because the learned reward function is slightly better shaped: the reward
learning procedure assigns positive rewards to all behaviors that are typically followed by high reward.
The difference may also be due to subtle changes in the relative scale of rewards or our use of entropy
regularization.
Real human feedback is typically only slightly less effective than the synthetic feedback; depending
on the task human feedback ranged from being half as efficient as ground truth feedback to being
equally efficient. On the Ant task the human feedback significantly outperformed the synthetic
feedback, apparently because we asked humans to prefer trajectories where the robot was "standing upright," which proved to be useful reward shaping. (There was a similar bonus in the RL reward
function to encourage the robot to remain upright, but the simple hand-crafted bonus was not as
useful.)
3.1.2 Atari
The second set of tasks we consider is a set of seven Atari games in the Arcade Learning Environment (Bellemare et al., 2013), the same games presented in Mnih et al., 2013.
Figure 2 shows the results of training our agent with 5,500 queries to a human rater, compared to
learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward.
Our method has more difficulty matching RL in these challenging environments, but nevertheless it
displays substantial learning on most of them and matches or even exceeds RL on some. Specifically,
Figure 2: Results on Atari games as measured on the tasks' true reward. We compare our method using
real human feedback (purple), our method using synthetic feedback provided by an oracle (shades of
blue), and reinforcement learning using the true reward function (orange). All curves are the average
of 3 runs, except for the real human feedback which is a single run, and each point is the average
reward over about 150,000 consecutive frames.
on BeamRider and Pong, synthetic labels match or come close to RL even with only 3,300 such
labels. On Seaquest and Qbert synthetic feedback eventually performs near the level of RL but learns
more slowly. On SpaceInvaders and Breakout synthetic feedback never matches RL, but nevertheless
the agent improves substantially, often passing the first level in SpaceInvaders and reaching a score of
20 on Breakout, or 50 with enough labels.
On most of the games real human feedback performs similar to or slightly worse than synthetic
feedback with the same number of labels, and often comparably to synthetic feedback that has 40%
fewer labels. On Qbert, our method fails to learn to beat the first level with real human feedback;
this may be because short clips in Qbert can be confusing and difficult to evaluate. Finally, Enduro
is difficult for A3C to learn due to the difficulty of successfully passing other cars through random
exploration, and is correspondingly difficult to learn with synthetic labels, but human labelers tend to
reward any progress towards passing cars, essentially shaping the reward and thus outperforming
A3C in this game (the results are comparable to those achieved with DQN).
3.2 Novel behaviors
Experiments with traditional RL tasks help us understand whether our method is effective, but the
ultimate purpose of human interaction is to solve tasks for which no reward function is available.
Using the same parameters as in the previous experiments, we show that our algorithm can learn
novel complex behaviors. We demonstrate:
1. The Hopper robot performing a sequence of backflips (see Figure 4). This behavior was
trained using 900 queries in less than an hour. The agent learns to consistently perform a
backflip, land upright, and repeat.
2. The Half-Cheetah robot moving forward while standing on one leg. This behavior was
trained using 800 queries in under an hour.
3. Keeping alongside other cars in Enduro. This was trained with roughly 1,300 queries
and 4 million frames of interaction with the environment; the agent learns to stay almost
exactly even with other moving cars for a substantial fraction of the episode, although it gets
confused by changes in background.
Figure 3: Performance of our algorithm on MuJoCo tasks after removing various components, as described in Section 3.3. All graphs are averaged over 5 runs, using 700 synthetic labels
each.
Videos of these behaviors can be found at https://goo.gl/MhgvIU. These behaviors were trained
using feedback from the authors.
3.3 Ablation Studies
In order to better understand the performance of our algorithm, we consider a range of modifications:
1. We pick queries uniformly at random rather than prioritizing queries for which there is
disagreement (random queries).
2. We train only one predictor rather than an ensemble (no ensemble). In this setting, we also
choose queries at random, since there is no longer an ensemble that we could use to estimate
disagreement.
3. We train on queries only gathered at the beginning of training, rather than gathered throughout training (no online queries).
4. We remove the $\ell_2$ regularization and use only dropout (no regularization).
5. On the robotics tasks only, we use trajectory segments of length 1 (no segments).
6. Rather than fitting $\hat{r}$ using comparisons, we consider an oracle which provides the true total reward over a trajectory segment, and fit $\hat{r}$ to these total rewards using mean squared error (target).
The results are presented in Figure 3 for MuJoCo and Figure 4 for Atari.
Training the reward predictor offline can lead to bizarre behavior that is undesirable as measured by
the true reward (Amodei et al., 2016). For instance, on Pong offline training sometimes leads our
agent to avoid losing points but not to score points; this can result in extremely long volleys (videos
at https://goo.gl/L5eAbk). This type of behavior demonstrates that in general human feedback
needs to be intertwined with RL rather than provided statically.
Our main motivation for eliciting comparisons rather than absolute scores was that we found it much
easier for humans to provide consistent comparisons than consistent absolute scores, especially on the
continuous control tasks and on the qualitative tasks in Section 3.2; nevertheless it seems important
to understand how using comparisons affects performance. For continuous control tasks we found
that predicting comparisons worked much better than predicting scores. This is likely because the
scale of rewards varies substantially and this complicates the regression problem, which is smoothed
significantly when we only need to predict comparisons. In the Atari tasks we clipped rewards
Figure 4: Performance of our algorithm on Atari tasks after removing various components, as
described in Section 3.3. All curves are an average of 3 runs using 5,500 synthetic labels (see minor
exceptions in Section A.2).
and effectively only predicted the sign, avoiding these difficulties (this is not a suitable solution
for the continuous control tasks because the magnitude of the reward is important to learning). In
these tasks comparisons and targets had significantly different performance, but neither consistently
outperformed the other.
We also observed large performance differences when using single frames rather than clips.$^7$ In order
to obtain the same results using single frames we would need to have collected significantly more
comparisons. In general we discovered that asking humans to compare longer clips was significantly
more helpful per clip, and significantly less helpful per frame. Shrinking the clip length below 1-2
seconds did not significantly decrease the human time required to label each clip in early experiments,
and so seems less efficient per second of human time. In the Atari environments we also found that it
was often easier to compare longer clips because they provide more context than single frames.
4 Discussion and Conclusions
Agent-environment interactions are often radically cheaper than human interaction. We show that by
learning a separate reward model using supervised learning, it is possible to reduce the interaction
complexity by roughly 3 orders of magnitude.
Although there is a large literature on preference elicitation and reinforcement learning from unknown
reward functions, we provide the first evidence that these techniques can be economically scaled up to
state-of-the-art reinforcement learning systems. This represents a step towards practical applications
of deep RL to complex real-world tasks.
In the long run it would be desirable to make learning a task from human preferences no more difficult
than learning it from a programmatic reward signal, ensuring that powerful RL systems can be applied
in the service of complex human values rather than low-complexity goals.
Acknowledgments
We thank Olivier Pietquin, Bilal Piot, Laurent Orseau, Pedro Ortega, Victoria Krakovna, Owain
Evans, Andrej Karpathy, Igor Mordatch, and Jack Clark for reading drafts of the paper. We thank
Tyler Adkisson, Mandy Beri, Jessica Richards, Heather Tran, and other contractors for providing the
$^7$ We only ran these tests on continuous control tasks because our Atari reward model depends on a sequence of consecutive frames rather than a single frame, as described in Section A.2.
6,653 | 7,018 | On the Fine-Grained Complexity of
Empirical Risk Minimization:
Kernel Methods and Neural Networks
Arturs Backurs
CSAIL
MIT
[email protected]
Piotr Indyk
CSAIL
MIT
[email protected]
Ludwig Schmidt
CSAIL
MIT
[email protected]
Abstract
Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there is a large body of work on
algorithms for various ERM problems, the exact computational complexity of ERM
is still not understood. We address this issue for multiple popular ERM problems
including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems
based on complexity-theoretic assumptions such as the Strong Exponential Time
Hypothesis. Under these assumptions, we show that there are no algorithms that
solve the aforementioned ERM problems to high accuracy in sub-quadratic time.
We also give similar hardness results for computing the gradient of the empirical
loss, which is the main computational burden in many non-convex learning tasks.
1 Introduction
Empirical risk minimization (ERM) has been highly influential in modern machine learning [37].
ERM underpins many core results in statistical learning theory and is one of the main computational
problems in the field. Several important methods such as support vector machines (SVM), boosting,
and neural networks follow the ERM paradigm [34]. As a consequence, the algorithmic aspects of
ERM have received a vast amount of attention over the past decades. This naturally motivates the
following basic question:
What are the computational limits for ERM algorithms?
In this work, we address this question both in convex and non-convex settings. Convex ERM problems
have been highly successful in a wide range of applications, giving rise to popular methods such as
SVMs and logistic regression. Using tools from convex optimization, the resulting problems can be
solved in polynomial time. However, the exact time complexity of many important ERM problems
such as kernel SVMs is not yet well understood. As the size of data sets in machine learning continues
to grow, this question is becoming increasingly important. For ERM problems with millions of
high-dimensional examples, even quadratic time algorithms can become painfully slow (or expensive)
to run.
Non-convex ERM problems have also attracted extensive research interest, e.g., in the context of deep
neural networks. First order methods that follow the gradient of the empirical loss are not guaranteed
to find the global minimizer in this setting. Nevertheless, variants of gradient descent are by far the
most common method for training large neural networks. Here, the computational bottleneck is to
compute a number of gradients, not necessarily to minimize the empirical loss globally. Although we
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
can compute gradients in polynomial time, the large number of parameters and examples in modern
deep learning still makes this a considerable computational challenge.
Unfortunately, there are only few existing results concerning the exact time complexity of ERM or
gradient computations. Since the problems have polynomial time algorithms, the classical machinery
from complexity theory (such as NP hardness) is too coarse to apply. Oracle lower bounds from
optimization offer useful guidance for convex ERM problems, but the results only hold for limited
classes of algorithms. Moreover, they do not account for the cost of executing the oracle calls, as
they simply lower bound their number. Overall, we do not know if common ERM problems allow
for algorithms that compute a high-accuracy solution in sub-quadratic or even nearly-linear time for
all instances.1 Furthermore, we do not know if there are more efficient techniques for computing
(mini-)batch gradients than simply treating each example in the batch independently.2
We address both questions for multiple well-studied ERM problems.
Hardness of ERM. First, we give conditional hardness results for minimizing the empirical risk in
several settings, including kernel SVMs, kernel ridge regression (KRR), and training the top layer
of a neural network. Our results give evidence that no algorithms can solve these problems to high
accuracy in strongly sub-quadratic time. Moreover, we provide similar conditional hardness results
for kernel PCA. All of these methods are popular learning algorithms due to the expressiveness of the
kernel or network embedding. Our results show that this expressiveness also leads to an expensive
computational problem.
Hardness of gradient computation in neural networks. Second, we address the complexity of
computing a gradient for the empirical risk of a neural network. In particular, we give evidence that
computing (or even approximating, up to polynomially large factors) the norm of the gradient of
the top layer in a neural network takes time that is "rectangular". The time complexity cannot be
significantly better than O(n · m), where m is the number of examples and n is the number of units
in the network. Hence, there are no algorithms that compute batch gradients faster than handling each
example individually, unless common complexity-theoretic assumptions fail.
Our hardness results for gradient computation apply to common activation functions such as ReLU
or sigmoid units. We remark that for polynomial activation functions (for instance, studied in [24]),
significantly faster algorithms do exist. Thus, our results can be seen as mapping the "efficiency
landscape" of basic machine learning sub-routines. They distinguish between what is possible and
(likely) impossible, suggesting further opportunities for improvement.
Our hardness results are based on recent advances in fine-grained complexity and build on conjectures
such as the Strong Exponential Time Hypothesis (SETH) [23, 22, 38]. SETH concerns the classic
satisfiability problem for formulas in Conjunctive Normal Form (CNF). Informally, the conjecture
states that there is no algorithm for checking satisfiability of a formula with n variables and m clauses
in time O(c^n · poly(m)) for some constant c < 2 (see Footnote 3). While our results are conditional, SETH has been
employed in many recent hardness results. Its plausibility stems from the fact that, despite 60 years
of research on satisfiability algorithms, no such improvement has been discovered.
Our results hold for a significant range of the accuracy parameter. For kernel methods, our bounds
hold for algorithms approximating the empirical risk up to a factor of 1 + ε, for log(1/ε) = Θ(log² n).
Thus, they provide conditional quadratic lower bounds for algorithms with, say, a log(1/ε) runtime
dependence on the approximation error ε. A (doubly) logarithmic dependence on 1/ε is generally
seen as the ideal rate of convergence in optimization, and algorithms with this property have been
studied extensively in the machine learning community (cf. [12].). At the same time, approximate
Footnote 1: More efficient algorithms exist if the running time is allowed to be polynomial in the accuracy parameter, e.g., [35] give such an algorithm for the kernel SVM problem that we consider as well. See also the discussion at the end of this section.
Footnote 2: Consider a network with one hidden layer containing n units and a training set with m examples, for simplicity in small dimension d = O(log n). No known results preclude an algorithm that computes a full gradient in time O((n + m) log n). This would be significantly faster than the standard O(n · m · log n) approach of computing the full gradient example by example.
Footnote 3: Note that SETH can be viewed as a significant strengthening of the P ≠ NP conjecture, which only postulates that there is no polynomial time algorithm for CNF satisfiability. The best known algorithms for CNF satisfiability have running times of the form O(2^((1−o(1))n) · poly(m)).
solutions to ERM problems can be sufficient for good generalization in learning tasks. Indeed,
stochastic gradient descent (SGD) is often advocated as an efficient learning algorithm despite its
polynomial dependence on 1/ε in the optimization error [35, 15]. Our results support this viewpoint
since SGD sidesteps the quadratic time complexity of our lower bounds.
For other problems, our assumptions about the accuracy parameter are less stringent. In particular,
for training the top layer of the neural network, we only need to assume that ε ≈ 1/n. Finally, our
lower bounds for approximating the norm of the gradient in neural networks hold even if ε = n^O(1),
i.e., for polynomial approximation factors (or alternatively, a constant additive factor for ReLU and
sigmoid activation functions).
Finally, we note that our results do not rule out algorithms that achieve a sub-quadratic running
time for well-behaved instances, e.g., instances with low-dimensional structure. Indeed, many such
approaches have been investigated in the literature, for instance the Nyström method or random
features for kernel problems [40, 30]. Our results offer an explanation for the wide variety of
techniques. The lower bounds are evidence that there is no "silver bullet" algorithm for solving the
aforementioned ERM problems in sub-quadratic time, to high accuracy, and for all instances.
2 Background
Fine-grained complexity. We obtain our conditional hardness results via reductions from two
well-studied problems: Orthogonal Vectors and Bichromatic Hamming Close Pair.
Definition 1 (Orthogonal Vectors problem (OVP)). Given two sets A = {a_1, ..., a_n} ⊆ {0, 1}^d and
B = {b_1, ..., b_n} ⊆ {0, 1}^d of n binary vectors, decide if there exists a pair a ∈ A and b ∈ B such
that a^T b = 0.
For OVP, we can assume without loss of generality that all vectors in B have the same number of 1s.
This can be achieved by appending d entries to every bi and setting the necessary number of them to
1 and the rest to 0. We then append d entries to every ai and set all of them to 0.
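This padding step can be sketched as follows (a hypothetical helper of ours; doubling the dimension equalizes the number of 1s in B while preserving all inner products):

```python
import numpy as np

def normalize_ovp(A, B):
    """Pad an OVP instance so every vector in B has the same number of 1s.

    A, B: (n, d) binary arrays. Returns (n, 2d) arrays; the columns padded
    with 1s in B are matched by 0s in A, so a^T b is unchanged for every pair.
    """
    n, d = B.shape
    ones = B.sum(axis=1)                     # number of 1s in each b_i
    pad_B = np.zeros((n, d), dtype=B.dtype)
    for i in range(n):
        pad_B[i, : d - ones[i]] = 1          # fill the deficit up to d ones
    pad_A = np.zeros((n, d), dtype=A.dtype)  # all-zero padding for A
    return np.hstack([A, pad_A]), np.hstack([B, pad_B])
```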
Definition 2 (Bichromatic Hamming Close Pair (BHCP) problem). Given two sets A =
{a_1, ..., a_n} ⊆ {0, 1}^d and B = {b_1, ..., b_n} ⊆ {0, 1}^d of n binary vectors and an integer
t ∈ {2, ..., d}, decide if there exists a pair a ∈ A and b ∈ B such that the number of coordinates in
which they differ is less than t (formally, Hamming(a, b) := ||a − b||_1 < t). If there is such a pair
(a, b), we call it a close pair.
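The quadratic-time baseline that the lower bounds below match is the naive scan over all pairs (an illustrative sketch, not an algorithm from the paper):

```python
import numpy as np

def bhcp_bruteforce(A, B, t):
    """Decide BHCP by scanning all pairs in O(|A| * |B| * d) time.

    Returns True iff some a in A and b in B satisfy Hamming(a, b) < t.
    """
    for a in A:
        for b in B:
            if np.abs(a - b).sum() < t:   # Hamming distance of binary vectors
                return True
    return False
```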
It is known that both OVP and BHCP require almost quadratic time (i.e., n^(2−o(1))) for any d =
ω(log n) assuming SETH [5] (see Footnote 4). Furthermore, if we allow the sizes |A| = n and |B| = m to be
different, both problems require (nm)^(1−o(1)) time assuming SETH, as long as m = n^α for some
constant α ∈ (0, 1) [17]. Our proofs will proceed by embedding OVP and BHCP instances into ERM
problems. Such a reduction then implies that the ERM problem requires almost quadratic time if the
SETH is true. If we could solve the ERM problem faster, we would also obtain a faster algorithm for
the satisfiability problem.
3 Our contributions

3.1 Kernel ERM problems
We provide hardness results for multiple kernel problems. In the following, let x_1, ..., x_n ∈ R^d
be the n input vectors, where d = ω(log n). We use y_1, ..., y_n ∈ R as n labels or target values.
Finally, let k(x, x′) denote a kernel function and let K ∈ R^(n×n) be the corresponding kernel
matrix, defined as K_{i,j} := k(x_i, x_j) [33]. Concretely, we focus on the Gaussian kernel k(x, x′) :=
exp(−C ||x − x′||_2^2) for some C > 0. We note that our results can be generalized to any kernel with
exponential tail.
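For concreteness, the Gaussian kernel matrix can be formed from pairwise squared distances (an illustrative NumPy sketch; the helper name is ours):

```python
import numpy as np

def gaussian_kernel_matrix(X, C):
    """K_{i,j} = exp(-C * ||x_i - x_j||_2^2) for the rows x_i of X."""
    sq = (X ** 2).sum(axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise ||x_i - x_j||^2
    return np.exp(-C * np.maximum(dist2, 0.0))         # clamp tiny negatives
```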
Footnote 4: We use ω(g(n)) to denote any function f such that lim_{n→∞} f(n)/g(n) = ∞. Similarly, we use o(g(n)) to denote any function f such that lim_{n→∞} f(n)/g(n) = 0. Consequently, we will refer to functions of the form ω(1) as super-constant and to n^ω(1) as super-polynomial.
Kernel SVM. For simplicity, we present our result for hard-margin SVMs without bias terms. This
gives the following optimization problem.
Definition 3 (Hard-margin SVM). A (primal) hard-margin SVM is an optimization problem of the
following form:

    minimize_{α_1, ..., α_n ≥ 0}   (1/2) Σ_{i,j=1}^n α_i α_j y_i y_j k(x_i, x_j)
    subject to                     y_i f(x_i) ≥ 1,   i = 1, ..., n,          (1)

where f(x) := Σ_{i=1}^n α_i y_i k(x_i, x).
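On small instances, problem (1) can be handed to a generic constrained solver. The following is an illustrative sketch of ours (using SciPy's SLSQP on a precomputed kernel matrix), not the paper's construction or a production SVM:

```python
import numpy as np
from scipy.optimize import minimize

def hard_margin_svm_value(K, y):
    """Numerically solve problem (1) given a kernel matrix K and labels y.

    With Q_{ij} = y_i y_j K_{ij}, the objective is (1/2) a^T Q a and the
    margin constraints read (Q a)_i = y_i f(x_i) >= 1, with a_i >= 0.
    """
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K
    objective = lambda a: 0.5 * a @ Q @ a
    feasibility = {"type": "ineq", "fun": lambda a: Q @ a - 1.0}
    res = minimize(objective, np.ones(n), method="SLSQP",
                   bounds=[(0.0, None)] * n, constraints=[feasibility])
    return res.fun
```

For instance, for two well-separated points under the Gaussian kernel, K is essentially the identity and the optimal value is 1.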
The following theorem is our main result for SVMs, described in more detail in Section 4. In
Sections B, C, and D of the supplementary material we provide similar hardness results for other
common SVM variants, including the soft-margin version.
Theorem 4. Let k(a, a′) be the Gaussian kernel with C = 100 log n and let ε = exp(−Θ(log² n)).
Then approximating the optimal value of Equation (1) within a multiplicative factor 1 + ε requires
almost quadratic time assuming SETH.
Kernel Ridge Regression. Next we consider Kernel Ridge Regression, which is formally defined
as follows.
Definition 5 (Kernel ridge regression). Given a real value λ ≥ 0, the goal of kernel ridge regression
is to output

    arg min_{α ∈ R^n}   (1/2) ||y − Kα||_2^2 + (λ/2) α^T K α.
This problem is equivalent to computing the vector (K + λI)^(−1) y. We focus on the special case
where λ = 0 and the vector y has all equal entries y_1 = ... = y_n = 1. In this case, the entrywise
sum of K^(−1) y is equal to the sum of the entries in K^(−1). Thus, we show hardness for computing the
latter quantity (see Section F in the supplementary material for the proof).
Theorem 6. Let k(a, a′) be the Gaussian kernel for any parameter C = ω(log n) and let ε =
exp(−Θ(log² n)). Then computing the sum of the entries in K^(−1) up to a multiplicative factor of
1 + ε requires almost quadratic time assuming SETH.
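Both quantities in this discussion are directly computable with a linear solve; a small NumPy sketch (helper names are ours):

```python
import numpy as np

def krr_solution(K, y, lam):
    """Kernel ridge regression coefficients alpha = (K + lam*I)^{-1} y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def inverse_entry_sum(K):
    """Sum of the entries of K^{-1} (the hard quantity in Theorem 6),
    computed as 1^T K^{-1} 1 with a single linear solve."""
    ones = np.ones(K.shape[0])
    return ones @ np.linalg.solve(K, ones)
```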
Kernel PCA. Finally, we turn to the Kernel PCA problem, which we define as follows [26].
Definition 7 (Kernel Principal Component Analysis (PCA)). Let 1_n be an n × n matrix where each
entry takes value 1/n, and define K′ := (I − 1_n) K (I − 1_n). The goal of the kernel PCA problem is
to output the n eigenvalues of the matrix K′.
In the above definition, the output only consists of the eigenvalues, not the eigenvectors. This is
because computing all n eigenvectors trivially takes at least quadratic time since the output itself
has quadratic size. Our hardness proof applies to the potentially simpler problem where only the
eigenvalues are desired. Specifically, we show that computing the sum of the eigenvalues (i.e., the
trace of the matrix) is hard. See Section E in the supplementary material for the proof.
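A direct (cubic-time) computation of these eigenvalues is straightforward; the point of Theorem 8 is that even their sum cannot be approximated in strongly sub-quadratic time. An illustrative NumPy sketch:

```python
import numpy as np

def kernel_pca_eigenvalues(K):
    """Eigenvalues of the centered kernel matrix K' = (I - 1_n) K (I - 1_n)."""
    n = K.shape[0]
    center = np.eye(n) - np.full((n, n), 1.0 / n)  # I - 1_n
    Kc = center @ K @ center
    return np.linalg.eigvalsh(Kc)                  # K' is symmetric

def kernel_pca_trace(K):
    """Sum of the eigenvalues: the quantity shown hard in Theorem 8."""
    return kernel_pca_eigenvalues(K).sum()
```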
Theorem 8. Let k(a, a′) be the Gaussian kernel with C = 100 log n and let ε = exp(−Θ(log² n)).
Then approximating the sum of the eigenvalues of K′ = (I − 1_n) K (I − 1_n) within a multiplicative
factor of 1 + ε requires almost quadratic time assuming SETH.
We note that the argument in the proof shows that even approximating the sum of the entries of K is
hard. This provides an evidence of hardness of the kernel density estimation problem for Gaussian
kernels, complementing recent upper bounds of [20].
3.2 Neural network ERM problems
We now consider neural networks. We focus on the problem of optimizing the top layer while keeping
lower layers unchanged. An instance of this problem is transfer learning with large networks that
would take a long time and many examples to train from scratch [31]. We consider neural networks of
depth 2, with the sigmoid or ReLU activation function. Our hardness result holds for a more general
class of "nice" activation functions S as described later (see Definition 12).
Given n weight vectors w_1, ..., w_n ∈ R^d and n weights α_1, ..., α_n ∈ R, consider the function
f : R^d → R using a non-linearity S : R → R:

    f(u) := Σ_{j=1}^n α_j · S(u^T w_j).
This function can be implemented as a neural net that has d inputs, n nonlinear activations (units),
and one linear output.
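In NumPy, this architecture amounts to one matrix product followed by a weighted sum (an illustrative sketch with ReLU; names are ours):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def top_layer_output(U, W, alpha, S=relu):
    """f(u) = sum_j alpha_j * S(u^T w_j) for every row u of U.

    U: (m, d) inputs; W: (n, d) fixed first-layer weights;
    alpha: (n,) trainable top-layer weights.
    """
    return S(U @ W.T) @ alpha  # (m, n) hidden activations, then linear output
```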
To complete the ERM problem, we also require a loss function. Our hardness results hold for a large
class of "nice" loss functions, which includes the hinge loss and the logistic loss (see Footnote 5). Given a nice
loss function and m input vectors u_1, ..., u_m ∈ R^d with corresponding labels y_i, we consider the
following problem:

    minimize_{α_1, ..., α_n ∈ R}   Σ_{i=1}^m loss(y_i, f(u_i)).          (2)
Our main result is captured by the following theorem (see Section 5 for the proof). For simplicity, we
set m = n.

Theorem 9. For any d = ω(log n), approximating the optimal value in Equation (2) up to a
multiplicative factor of 1 + 1/(4n) requires almost quadratic time assuming SETH.
3.3 Hardness of gradient computation
Finally, we consider the problem of computing the gradient of the loss function for a given set of
examples. We focus on the network architecture from the previous section. Formally, we obtain the
following result:
Theorem 10. Consider the empirical risk in Equation (2) under the following assumptions: (i) The
function f is represented by a neural network with n units, n · d parameters, and the ReLU activation
function. (ii) We have d = ω(log n). (iii) The loss function is the logistic loss or hinge loss. Then
approximating the ℓ_p-norm (for any p ≥ 1) of the gradient of the empirical risk for m examples
within a multiplicative factor of n^C for any constant C > 0 takes at least (nm)^(1−o(1)) time
assuming SETH.
See Section 6 for the proof. We also prove a similar statement for the sigmoid activation function. At
the same time, we remark that for polynomial activation functions, significantly faster algorithms
do exist, using the polynomial lifting argument. Specifically, for the polynomial activation function
of the form x^r for some integer r ≥ 2, all gradients can be computed in O((n + m) d^r) time. Note
that the running time of the standard backpropagation algorithm is O(dnm) for networks with this
architecture. Thus one can improve over backpropagation for a non-trivial range of parameters,
especially for quadratic activation function when r = 2. See Section H in the supplementary material
for more details.
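To illustrate the lifting argument for r = 2: since (u^T w)² = ⟨φ(u), φ(w)⟩ for the feature map φ(x) = vec(xx^T) ∈ R^(d²), all m outputs Σ_j α_j (u_i^T w_j)² can be computed in O((n + m)d²) time by collapsing the weight side first. A hypothetical NumPy sketch of ours (the same trick applies to the batch gradient):

```python
import numpy as np

def lift(X):
    """Feature map phi(x) = vec(x x^T), so (u^T w)^2 = <phi(u), phi(w)>."""
    return np.einsum("id,ie->ide", X, X).reshape(X.shape[0], -1)

def outputs_via_lifting(U, W, alpha):
    """All f(u_i) = sum_j alpha_j (u_i^T w_j)^2 in O((n + m) d^2) time.

    The weight side is collapsed into one d^2-dimensional vector first, so
    the (m, n) activation matrix is never formed. The batch gradient
    g_j = sum_i c_i (u_i^T w_j)^2 follows by collapsing the input side.
    """
    s = lift(W).T @ alpha  # sum_j alpha_j * phi(w_j), a d^2 vector
    return lift(U) @ s     # one pass over the inputs
```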
3.4 Related work
Recent work has demonstrated conditional quadratic hardness results for many combinatorial optimization problems over graphs and sequences. These results include computing diameter in sparse
graphs [32, 21], Local Alignment [2], Fréchet distance [16], Edit Distance [13], Longest Common
Subsequence, and Dynamic Time Warping [1, 17]. In the machine learning literature, [14] recently
showed a tight lower bound for the problem of inferring the most likely path in a Hidden Markov
Model, matching the upper bound achieved by the Viterbi algorithm [39]. As in our paper, the SETH
and related assumptions underlie these lower bounds. To the best of our knowledge, our paper is
the first application of this methodology to continuous (as opposed to combinatorial) optimization
problems.
There is a long line of work on the oracle complexity of optimization problems, going back to [28].
We refer the reader to [29] for these classical results. The oracle complexity of ERM problems is still
Footnote 5: In the binary setting we consider, the logistic loss is equivalent to the softmax loss commonly employed in deep learning.
subject of active research, e.g., see [3, 19, 41, 9, 10]. The work closest to ours is [19], which gives
quadratic time lower bounds for ERM algorithms that access the kernel matrix through an evaluation
oracle or a low-rank approximation.
The oracle results are fundamentally different from the lower bounds presented in our paper. Oracle
lower bounds are typically unconditional, but inherently apply only to a limited class of algorithms
due to their information-theoretic nature. Moreover, they do not account for the cost of executing
the oracle calls, as they merely lower bound their number. In contrast, our results are conditional
(based on the SETH and related assumptions), but apply to any algorithm and account for the total
computational cost. This significantly broadens the reach of our results. We show that the hardness is
not due to the oracle abstraction but instead inherent in the computational problem.
4 Overview of the hardness proof for kernel SVMs
Let A = {a_1, ..., a_n} ⊆ {0, 1}^d and B = {b_1, ..., b_n} ⊆ {0, 1}^d be the two sets of binary vectors
from a BHCP instance with d = ω(log n). Our goal is to determine whether there is a close pair of
vectors. We show how to solve this BHCP instance by reducing it to three computations of SVM,
defined as follows:
1. We take the first set A of binary vectors, assign label 1 to all vectors, and solve the
corresponding SVM on the n vectors:

    minimize_{α_1, ..., α_n ≥ 0}   (1/2) Σ_{i,j=1}^n α_i α_j k(a_i, a_j)
    subject to                     Σ_{j=1}^n α_j k(a_i, a_j) ≥ 1,   i = 1, ..., n.          (3)

Note that we do not have y_i in the expressions because all labels are 1.
2. We take the second set B of binary vectors, assign label −1 to all vectors, and solve the
corresponding SVM on the n vectors:

    minimize_{β_1, ..., β_n ≥ 0}   (1/2) Σ_{i,j=1}^n β_i β_j k(b_i, b_j)
    subject to                     −Σ_{j=1}^n β_j k(b_i, b_j) ≤ −1,   i = 1, ..., n.          (4)
3. We take both sets A and B of binary vectors, assign label 1 to all vectors from the first set A
and label −1 to all vectors from the second set B. We then solve the corresponding SVM
on the 2n vectors:

    minimize_{α_1, ..., α_n ≥ 0, β_1, ..., β_n ≥ 0}
        (1/2) Σ_{i,j=1}^n α_i α_j k(a_i, a_j) + (1/2) Σ_{i,j=1}^n β_i β_j k(b_i, b_j) − Σ_{i,j=1}^n α_i β_j k(a_i, b_j)
    subject to
        Σ_{j=1}^n α_j k(a_i, a_j) − Σ_{j=1}^n β_j k(a_i, b_j) ≥ 1,    i = 1, ..., n,          (5)
        Σ_{j=1}^n α_j k(b_i, a_j) − Σ_{j=1}^n β_j k(b_i, b_j) ≤ −1,   i = 1, ..., n.
Intuition behind the construction. To show a reduction from the BHCP problem to SVM computation, we have to consider two cases:
- The YES case of the BHCP problem when there are two vectors that are close in Hamming
distance. That is, there exist a_i ∈ A and b_j ∈ B such that Hamming(a_i, b_j) < t.
- The NO case of the BHCP problem when there is no close pair of vectors. That is, for all
a_i ∈ A and b_j ∈ B, we have Hamming(a_i, b_j) ≥ t.
We show that we can distinguish between these two cases by comparing the objective value of the
first two SVM instances above to the objective value of the third.
Intuition for the NO case. We have Hamming(a_i, b_j) ≥ t for all a_i ∈ A and b_j ∈ B. The
Gaussian kernel then gives the inequality

    k(a_i, b_j) = exp(−100 log n · ||a_i − b_j||_2^2) ≤ exp(−100 log n · t)

for all a_i ∈ A and b_j ∈ B. This means that the value k(a_i, b_j) is very small. For simplicity, assume
that it is equal to 0, i.e., k(a_i, b_j) = 0 for all a_i ∈ A and b_j ∈ B.
Consider the third SVM (5). It contains three terms involving k(ai , bj ): the third term in the objective
function, the second term in the inequalities of the first type, and the second term in the inequalities
of the second type. We assumed that these terms are equal to 0 and we observe that the rest of the
third SVM is equal to the sum of the first SVM (3) and the second SVM (4). Thus we expect that
the optimal value of the third SVM is approximately equal to the sum of the optimal values of the
first and the second SVMs. If we denote the optimal value of the first SVM (3) by value(A), the
optimal value of the second SVM (4) by value(B), and the optimal value of the third SVM (5) by
value(A, B), then we can express our intuition in terms of the approximate equality
    value(A, B) ≈ value(A) + value(B).
Intuition for the YES case. In this case, there is a close pair of vectors a_i ∈ A and b_j ∈ B
such that Hamming(a_i, b_j) ≤ t − 1. Since we are using the Gaussian kernel we have the following
inequality for this pair of vectors:

    k(a_i, b_j) = exp(−100 log n · ||a_i − b_j||_2^2) ≥ exp(−100 log n · (t − 1)).
We therefore have a large summand in each of the three terms from the above discussion. Thus
the three terms do not (approximately) disappear and there is no reason for us to expect that the
approximate equality holds. We can thus expect
    value(A, B) ≉ value(A) + value(B).
Thus, by computing value(A, B) and comparing it to value(A) + value(B) we can distinguish
between the YES and NO instances of BHCP. This completes the reduction. The full proofs are given
in Section B of the supplementary material.
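The decoupling in the NO case can be checked numerically on a toy instance: with C = 100 log n, every cross-kernel entry between points at Hamming distance at least t is at most n^(−100t). An illustrative sketch (not part of the proof):

```python
import numpy as np

# Toy NO instance: every pair (a_i, b_j) is at Hamming distance exactly t.
n, d, t = 4, 10, 3
A = np.zeros((n, d))
B = np.zeros((n, d))
B[:, :t] = 1.0
C = 100.0 * np.log(n)
# Cross-kernel block of SVM (5): k(a_i, b_j) = exp(-C * ||a_i - b_j||_2^2).
cross = np.exp(-C * ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))
# Every entry is n^(-100 t), i.e., utterly negligible, so the third SVM
# approximately decouples into the sum of the first two.
assert cross.max() < 1e-150
```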
5 Overview of the hardness proof for training the final layer of a neural network
We start by formally defining the class of "nice" loss functions and "nice" activation functions.
Definition 11. For a label y ∈ {−1, 1} and a prediction w ∈ R, we call the loss function loss(y, w) :
{−1, 1} × R → R_{≥0} nice if the following three properties hold:
- loss(y, w) = l(yw) for some convex function l : R → R_{≥0}.
- For some sufficiently large constant K > 0, we have that (i) l(x) ≤ o(1) for all x ≥ n^K,
(ii) l(x) ≥ Ω(n) for all x ≤ −n^K, and (iii) l(x) = l(0) ± o(1/n) for all |x| ≤ O(n^(−K)).
- l(0) > 0 is some constant strictly larger than 0.
We note that the hinge loss function loss(y, x) = max(0, 1 − y · x) and the logistic loss function
loss(y, x) = (1/ln 2) · ln(1 + e^(−y·x)) are nice loss functions according to the above definition.
Definition 12. A non-decreasing activation function S : R → R_{≥0} is "nice" if it satisfies the
following property: for all sufficiently large constants T > 0 there exist v_0 > v_1 > v_2 such that
S(v_0) = Θ(1), S(v_1) = 1/n^T, S(v_2) = 1/n^ω(1) and v_1 = (v_0 + v_2)/2.

The ReLU activation S(z) = max(0, z) satisfies these properties since we can choose v_0 = 1,
v_1 = 1/n^T, and v_2 = −1 + 2/n^T. For the sigmoid function S(z) = 1/(1 + e^(−z)), we can choose
v_1 = −log(n^T − 1), v_0 = v_1 + C, and v_2 = v_1 − C for some C = ω(log n). In the rest of the proof
we set T = 1000K, where K is the constant from Definition 11.
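These witnesses can be verified numerically for concrete n and T (an illustrative check of ours; the value 20 log n below stands in for some C = ω(log n)):

```python
import numpy as np

# ReLU witnesses from Definition 12.
n, T = 1000, 4
relu = lambda z: max(0.0, z)
v0, v1, v2 = 1.0, n ** (-T), -1.0 + 2.0 * n ** (-T)
assert np.isclose(v1, (v0 + v2) / 2.0)     # midpoint property
assert relu(v0) == 1.0                     # S(v0) = Theta(1)
assert np.isclose(relu(v1), n ** (-T))     # S(v1) = 1/n^T
assert relu(v2) == 0.0                     # S(v2) superpolynomially small

# Sigmoid witnesses: v1 = -log(n^T - 1), v0 = v1 + C, v2 = v1 - C.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
C = 20.0 * np.log(n)                       # stands in for C = omega(log n)
v1 = -np.log(n ** T - 1.0)
assert np.isclose(sigmoid(v1), n ** (-T))  # S(v1) = 1/n^T
assert sigmoid(v1 + C) > 0.99              # S(v0) = Theta(1)
assert sigmoid(v1 - C) < n ** (-T) * 1e-6  # S(v2) far below 1/n^T
```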
We now describe the proof of Theorem 9. We use the notation α := (α_1, ..., α_n)^T. Invoking the
first property from Definition 11, we observe that the optimization problem (2) is equivalent to the
following optimization problem:

    minimize_{α ∈ R^n}   Σ_{i=1}^m l(y_i · (Mα)_i),          (6)

where M ∈ R^(m×n) is the matrix defined as M_{i,j} := S(u_i^T w_j) for i = 1, ..., m and j = 1, ..., n. For
the rest of the section we will use m = Θ(n) (see Footnote 6).
Let A = {a_1, ..., a_n} ⊆ {0, 1}^d and B = {b_1, ..., b_n} ⊆ {0, 1}^d with d = ω(log n) be the input to
the Orthogonal Vectors problem. To show hardness we define a matrix M as a vertical concatenation
of 3 smaller matrices: M_1, M_2 and M_2 (repeated). Both matrices M_1, M_2 ∈ R^(n×n) are of size n × n.
Thus the number of rows of M (equivalently, the number of training examples) is m = 3n.
Reduction overview. We select the input examples and weights so that the matrices M_1 and M_2
have the following properties:
- M_1: if two vectors a_i and b_j are orthogonal, then the corresponding entry (M_1)_{i,j} =
S(v_0) = Θ(1) and otherwise (M_1)_{i,j} ≈ 0 (see Footnote 7).
- M_2: (M_2)_{i,i} = S(v_1) = 1/n^1000K and (M_2)_{i,j} ≈ 0 for all i ≠ j.

To complete the description of the optimization problem (6), we assign labels to the inputs corresponding to the rows of the matrix M. We assign label 1 to all inputs corresponding to rows of the
matrix M_1 and the first copy of the matrix M_2. We assign label −1 to all remaining rows of the
matrix M corresponding to the second copy of matrix M_2.
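For intuition, the matrix M and its labels can be materialized directly from an OV instance (a hypothetical construction of ours; as noted in Footnote 6, the actual reduction induces these entries via input examples and weights rather than building M explicitly):

```python
import numpy as np

def build_hard_instance_matrix(A, B, v0, v1, v2, S):
    """Stack [M1; M2; M2] from an OV instance, following the sketch above.

    (M1)_{ij} = S(v0) if a_i and b_j are orthogonal and S(v2) ~ 0 otherwise;
    M2 is S(v1) on the diagonal and S(v2) ~ 0 off it. Labels: +1 for the
    rows of M1 and the first copy of M2, -1 for the second copy of M2.
    """
    n = A.shape[0]
    orthogonal = (A @ B.T == 0)
    M1 = np.where(orthogonal, S(v0), S(v2))
    M2 = np.full((n, n), S(v2))
    np.fill_diagonal(M2, S(v1))
    M = np.vstack([M1, M2, M2])
    labels = np.concatenate([np.ones(2 * n), -np.ones(n)])
    return M, labels
```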
The proof of the theorem is completed by the following two lemmas. See Section G in the supplementary material for the proofs.
Lemma 13. If there is a pair of orthogonal vectors, then the optimal value of (6) is upper bounded
by (3n − 1) · l(0) + o(1).
Lemma 14. If there is no pair of orthogonal vectors, then the optimal value of (6) is lower bounded
by 3n · l(0) − o(1).
6 Hardness proof for gradient computation
Finally, we consider the problem of computing the gradient of the loss function for a given set
of examples. We focus on the network architecture as in the previous section. Specifically, let
F_{α,B}(a) := Σ_{j=1}^n α_j S(a, b_j) be the output of a neural net with activation function S, where: (1) a
is an input vector from the set A := {a_1, ..., a_m} ⊆ {0, 1}^d; (2) B := {b_1, ..., b_n} ⊆ {0, 1}^d is a
set of binary vectors; (3) α = (α_1, ..., α_n)^T ∈ R^n is an n-dimensional real-valued vector. We first
prove the following lemma.
Lemma 15. For some loss function l : R → R, let l(F_{α,B}(a)) be the loss for input a when the
label of the input a is +1. Consider the gradient of the total loss l_{α,A,B} := Σ_{a∈A} l(F_{α,B}(a)) at
α_1 = ... = α_n = 0 with respect to α_1, ..., α_n. The sum of the entries of the gradient is equal to
l′(0) · Σ_{a∈A, b∈B} S(a, b), where l′(0) is the derivative of the loss function l at 0.
For the hinge loss function, we have that the loss function is l(x) = max(0, 1 − x) if the label
is +1. Thus, l′(0) = −1. For the logistic loss function, we have that the loss function is l(x) =
(1/ln 2) · ln(1 + e^(−x)) if the label is +1. Thus, l′(0) = −1/(2 ln 2) in this case.
Footnote 6: Note that our reduction does not explicitly construct M. Instead, the values of the matrix are induced by the input examples and weights.
Footnote 7: We write x ≈ y if x = y up to an inversely superpolynomial additive factor, i.e., |x − y| ≤ n^(−ω(1)).
Proof of Theorem 10. Since all ℓ_p-norms are within a polynomial factor of each other, it suffices to show the
statement for the ℓ_1-norm.
We set S(a, b) := max(0, 1 − 2a^T b). Using Lemma 15, we get that the ℓ_1-norm of the gradient of
the total loss function is equal to |l′(0)| · Σ_{a∈A, b∈B} 1_{a^T b = 0}. Since l′(0) ≠ 0, this reduces OVP to the
gradient computation problem. Note that if there is no orthogonal pair, then the ℓ_1-norm is 0 and
otherwise it is a constant strictly greater than 0. Thus approximating the ℓ_1-norm within any finite
factor allows us to distinguish the cases.
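The identity of Lemma 15 and the counting step in this proof can be checked numerically for the hinge loss (an illustrative sketch on a tiny instance):

```python
import numpy as np

# Tiny instance: 3 inputs, 2 weight vectors; 4 of the 6 pairs are orthogonal.
A = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
B = np.array([[0, 0, 1], [1, 0, 1]])
# S(a, b) = max(0, 1 - 2 a^T b) is 1 on orthogonal pairs and 0 otherwise.
S = np.maximum(0.0, 1.0 - 2.0 * (A @ B.T))
# Hinge loss l(x) = max(0, 1 - x) has l'(0) = -1, so at alpha = 0 the
# gradient entry for alpha_j is l'(0) * sum_a S(a, b_j) (Lemma 15).
grad_at_zero = -S.sum(axis=0)
# The l1-norm of the gradient counts the orthogonal pairs (Theorem 10).
assert np.abs(grad_at_zero).sum() == (A @ B.T == 0).sum()
```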
See Section H in the supplementary material for other results.
7 Conclusions
We have shown that a range of kernel problems require quadratic time for obtaining a high accuracy
solution unless the Strong Exponential Time Hypothesis is false. These problems include variants
of kernel SVM, kernel ridge regression, and kernel PCA. We also gave a similar hardness result for
training the final layer of a depth-2 neural network. This result is general and applies to multiple
loss and activation functions. Finally, we proved that computing the empirical loss gradient for such
networks takes time that is essentially "rectangular", i.e., proportional to the product of the network
size and the number of examples.
We note that our quadratic (rectangular) hardness results hold for general inputs. There is a long line of
research on algorithms for kernel problems with running times depending on various input parameters,
such as its statistical dimension [42], degrees of freedom [11] or effective dimensionality [27]. It
would be interesting to establish lower bounds on the complexity of kernel problems as a function of
the aforementioned input parameters.
Our quadratic hardness results for kernel problems apply to kernels with exponential tails. A natural
question is whether similar results can be obtained for ?heavy-tailed? kernels, e.g., the Cauchy kernel.
We note that similar results for the linear kernel do not seem achievable using our techniques (see Footnote 8).
Several of our results are obtained by a reduction from the (exact) Bichromatic Hamming Close Pair
problem or the Orthogonal Vectors problem. This demonstrates a strong connection between kernel
methods and similarity search, and suggests that perhaps a reverse reduction is also possible. Such a
reduction could potentially lead to faster approximate algorithms for kernel methods: although the
exact closest pair problem has no known sub-quadratic solution, efficient and practical sub-quadratic
time algorithms for the approximate version of the problem exist (see e.g., [6, 36, 8, 7, 4]).
Acknowledgements
Ludwig Schmidt is supported by a Google PhD fellowship. Arturs Backurs is supported by an IBM
Research fellowship. This research was supported by grants from NSF and Simons Foundation.
References
[1] A. Abboud, A. Backurs, and V. V. Williams. Tight hardness results for LCS and other sequence
similarity measures. In Symposium on Foundations of Computer Science (FOCS), 2015.
[2] A. Abboud, V. V. Williams, and O. Weimann. Consequences of faster alignment of sequences.
In International Colloquium on Automata, Languages, and Programming (ICALP), 2014.
[3] A. Agarwal and L. Bottou. A lower bound for the optimization of finite sums. In International
Conference on Machine Learning (ICML), 2015.
Footnote 8: In particular, assuming a certain strengthening of SETH, known as the "non-deterministic SETH" [18], it is provably impossible to prove SETH hardness for any of the linear variants of the studied ERM problems, at least via deterministic reductions. This is due to the fact that these problems have short certificates of optimality via duality arguments. Also, it should be noted that linear analogs of some of the problems considered in this paper (e.g., linear ridge regression) can be solved in O(nd²) time using SVD methods.
[4] J. Alman, T. M. Chan, and R. Williams. Polynomial Representations of Threshold Functions
and Algorithmic Applications. 2016.
[5] J. Alman and R. Williams. Probabilistic polynomials and hamming nearest neighbors. In
Symposium on Foundations of Computer Science (FOCS), 2015.
[6] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in
high dimensions. In Symposium on Foundations of Computer Science (FOCS), 2006.
[7] A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt. Practical and optimal LSH
for angular distance. In Advances in Neural Information Processing Systems (NIPS). 2015.
[8] A. Andoni and I. Razenshteyn. Optimal data-dependent hashing for approximate near neighbors.
In Symposium on Theory of Computing (STOC), 2015.
[9] Y. Arjevani and O. Shamir. Dimension-free iteration complexity of finite sum optimization
problems. In Advances in Neural Information Processing Systems (NIPS). 2016.
[10] Y. Arjevani and O. Shamir. Oracle complexity of second-order methods for finite-sum problems.
CoRR, abs/1611.04982, 2016.
[11] F. Bach. Sharp analysis of low-rank kernel matrix approximations. In Conference on Learning
Theory (COLT), 2013.
[12] F. Bach and S. Sra. Stochastic optimization: Beyond stochastic gradients and convexity. NIPS
Tutorial, 2016. http://suvrit.de/talks/vr_nips16_bach.pdf.
[13] A. Backurs and P. Indyk. Edit distance cannot be computed in strongly subquadratic time
(unless SETH is false). In Symposium on Theory of Computing (STOC), 2015.
[14] A. Backurs and C. Tzamos. Improving viterbi is hard: Better runtimes imply faster clique
algorithms. International Conference on Machine Learning (ICML), 2017.
[15] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural
Information Processing Systems (NIPS), 2007.
[16] K. Bringmann. Why walking the dog takes time: Fréchet distance has no strongly subquadratic
algorithms unless SETH fails. In Symposium on Foundations of Computer Science (FOCS),
2014.
[17] K. Bringmann and M. Künnemann. Quadratic conditional lower bounds for string problems
and dynamic time warping. In Symposium on Foundations of Computer Science (FOCS), 2015.
[18] M. L. Carmosino, J. Gao, R. Impagliazzo, I. Mihajlin, R. Paturi, and S. Schneider. Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility.
In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science,
pages 261–270. ACM, 2016.
[19] N. Cesa-Bianchi, Y. Mansour, and O. Shamir. On the complexity of learning with kernels. In
Conference On Learning Theory (COLT), 2015.
[20] M. Charikar and P. Siminelakis. Hashing-based-estimators for kernel density in high dimensions.
FOCS, 2017.
[21] S. Chechik, D. H. Larkin, L. Roditty, G. Schoenebeck, R. E. Tarjan, and V. V. Williams. Better
approximation algorithms for the graph diameter. In Symposium on Discrete Algorithms (SODA),
2014.
[22] R. Impagliazzo and R. Paturi. On the complexity of k-sat. Journal of Computer and System
Sciences, 62(2):367?375, 2001.
[23] R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity?
Journal of Computer and System Sciences, 63:512?530, 2001.
10
[24] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural
networks. In Advances in Neural Information Processing Systems, pages 855?863, 2014.
[25] K.-R. M?ller, S. Mika, G. R?tsch, K. Tsuda, and B. Sch?lkopf. An introduction to kernel-based
learning algorithms. IEEE transactions on neural networks, 12(2):181?201, 2001.
[26] K. P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
[27] C. Musco and C. Musco. Recursive sampling for the Nystr?m method. Advances in Neural
Information Processing Systems (NIPS), 2016.
[28] A. S. Nemirovski and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization.
Wiley Interscience, 1983.
[29] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic
Publishers, 2004.
[30] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in
Neural Information Processing Systems (NIPS). 2008.
[31] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: An
astounding baseline for recognition. In Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), 2014.
[32] L. Roditty and V. Vassilevska Williams. Fast approximation algorithms for the diameter and
radius of sparse graphs. In Symposium on Theory of Computing (STOC), 2013.
[33] B. Sch?lkopf and A. J. Smola. Learning with kernels: support vector machines, regularization,
optimization, and beyond. MIT press, 2001.
[34] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to
Algorithms. Cambridge University Press, 2014.
[35] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for
SVM. In International Conference on Machine Learning (ICML), 2007.
[36] G. Valiant. Finding correlations in subquadratic time, with applications to learning parities and
juntas. In Symposium on Foundations of Computer Science (FOCS), 2012.
[37] V. Vapnik. Statistical learning theory. Wiley, 1998.
[38] V. Vassilevska Williams. Hardness of easy problems: Basing hardness on popular conjectures
such as the Strong Exponential Time Hypothesis (invited talk). In LIPIcs-Leibniz International
Proceedings in Informatics, volume 43. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,
2015.
[39] A. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding
algorithm. IEEE transactions on Information Theory, 13(2):260?269, 1967.
[40] C. K. Williams and M. Seeger. Using the nystr?m method to speed up kernel machines. In
Advances in Neural Information Processing Systems (NIPS). 2001.
[41] B. E. Woodworth and N. Srebro. Tight complexity bounds for optimizing composite objectives.
In Advances in Neural Information Processing Systems (NIPS), 2016.
[42] Y. Yang, M. Pilanci, M. J. Wainwright, et al. Randomized sketches for kernels: Fast and optimal
nonparametric regression. The Annals of Statistics, 45(3):991?1023, 2017.
11
For Collective Multiagent Planning
Duc Thien Nguyen Akshat Kumar Hoong Chuin Lau
School of Information Systems
Singapore Management University
80 Stamford Road, Singapore 178902
{dtnguyen.2014,akshatkumar,hclau}@smu.edu.sg
Abstract
Decentralized (PO)MDPs provide an expressive framework for sequential decision making in a multiagent system. Given their computational complexity, recent research has focused on tractable yet practical subclasses of Dec-POMDPs.
We address such a subclass called CDec-POMDP where the collective behavior
of a population of agents affects the joint-reward and environment dynamics. Our
main contribution is an actor-critic (AC) reinforcement learning method for optimizing CDec-POMDP policies. Vanilla AC has slow convergence for larger problems. To address this, we show how a particular decomposition of the approximate
action-value function over agents leads to effective updates, and also derive a new
way to train the critic based on local reward signals. Comparisons on a synthetic
benchmark and a real world taxi fleet optimization problem show that our new AC
approach provides better quality solutions than previous best approaches.
1 Introduction
Decentralized partially observable MDPs (Dec-POMDPs) have emerged in recent years as a promising framework for multiagent collaborative sequential decision making (Bernstein et al., 2002).
Dec-POMDPs model settings where agents act based on different partial observations about the
environment and each other to maximize a global objective. Applications of Dec-POMDPs include
coordinating planetary rovers (Becker et al., 2004b), multi-robot coordination (Amato et al., 2015)
and throughput optimization in wireless network (Winstein and Balakrishnan, 2013; Pajarinen et al.,
2014). However, solving Dec-POMDPs is computationally challenging, being NEXP-Hard even for
2-agent problems (Bernstein et al., 2002).
To increase scalability and application to practical problems, past research has explored restricted
interactions among agents such as state transition and observation independence (Nair et al., 2005;
Kumar et al., 2011, 2015), event driven interactions (Becker et al., 2004a) and weak coupling among
agents (Witwicki and Durfee, 2010). Recently, a number of works have focused on settings where
agent identities do not affect interactions among agents. Instead, environment dynamics are primarily driven by the collective influence of agents (Varakantham et al., 2014; Sonu et al., 2015;
Robbel et al., 2016; Nguyen et al., 2017), similar to well known congestion games (Meyers and
Schulz, 2012). Several problems in urban transportation such as taxi supply-demand matching can
be modeled using such collective planning models (Varakantham et al., 2012; Nguyen et al., 2017).
In this work, we focus on the collective Dec-POMDP framework (CDec-POMDP) that formalizes
such a collective multiagent sequential decision making problem under uncertainty (Nguyen et al.,
2017). Nguyen et al. present a sampling based approach to optimize policies in the CDec-POMDP
model. A key drawback of this previous approach is that policies are represented in a tabular form
which scales poorly with the size of the observation space of agents. Motivated by the recent success of reinforcement learning (RL) approaches (Mnih et al., 2015; Schulman et al., 2015; Mnih et al., 2016; Foerster et al., 2016; Leibo et al., 2017), our main contribution is an actor-critic (AC) reinforcement learning method (Konda and Tsitsiklis, 2003) for optimizing CDec-POMDP policies.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Policies are represented using a function approximator such as a neural network, thereby avoiding the scalability issues of a tabular policy. We derive the policy gradient and develop a factored action-value approximator based on collective agent interactions in CDec-POMDPs. Vanilla AC is slow to converge on large problems due to known issues of learning with global reward in large multiagent systems (Bagnell and Ng, 2005). To address this, we also develop a new way to train the critic, our action-value approximator, that effectively utilizes local value functions of agents.
Figure 1: T-step DBN for a CDec-POMDP
We test our approach on a synthetic multirobot grid navigation domain from (Nguyen et al., 2017),
and a real world supply-demand taxi matching problem in a large Asian city with up to 8000 taxis (or
agents) showing the scalability of our approach to large multiagent systems. Empirically, our new
factored actor-critic approach works better than previous best approaches providing much higher
solution quality. The factored AC algorithm empirically converges much faster than the vanilla AC
validating the effectiveness of our new training approach for the critic.
Related work: Our work is based on the framework of policy gradient with approximate value
function similar to Sutton et al. (1999). However, as we empirically show, directly applying the
original policy gradient from Sutton et al. (1999) into the multi-agent setting and specifically for
the CDec-POMDP model results in a high variance solution. In this work, we show a suitable
form of compatible value function approximation for CDec-POMDPs that results in an efficient and
low variance policy gradient update. Reinforcement learning for decentralized policies has been
studied earlier in Peshkin et al. (2000), Aberdeen (2006). Guestrin et al. (2002) also proposed using
REINFORCE to train a softmax policy of a factored value function from the coordination graph.
However in such previous works, policy gradient is estimated from the global empirical returns
instead of a decomposed critic. We show in section 4 that having a decomposed critic along with an
individual value function based training of this critic is important for sample-efficient learning. Our
empirical results show that our proposed critic training has faster convergence than training with
global empirical returns.
2 Collective Decentralized POMDP Model
We first describe the CDec-POMDP model introduced in (Nguyen et al., 2017). A T-step Dynamic Bayesian Network (DBN) for this model is shown using the plate notation in figure 1. It consists of
the following:
- A finite planning horizon H.
- The number of agents M. An agent m can be in one of the states in the state space S. The joint state space is $\times_{m=1}^{M} S$. We denote a single state as $i \in S$.
- A set of actions A for each agent m. We denote an individual action as $j \in A$.
- Let $(s_{1:H}, a_{1:H})^m = (s_1^m, a_1^m, s_2^m, \ldots, s_H^m, a_H^m)$ denote the complete state-action trajectory of an agent m. We denote the state and action of agent m at time t using random variables $s_t^m, a_t^m$.
Different indicator functions $I_t^m(\cdot)$ are defined in table 1. We define the following count given the trajectory of each agent $m \in M$:
$$n_t(i, j, i') = \sum_{m=1}^{M} I_t^m(i, j, i') \quad \forall i, i' \in S,\ j \in A$$
As noted in table 1, the count $n_t(i, j, i')$ denotes the number of agents in state i taking action j at time t and transitioning to next state i'; other counts, $n_t(i)$ and $n_t(i, j)$, are defined analogously. Using these counts, we can define the count tables $\mathbf{n}_{s_t}$ and $\mathbf{n}_{s_t a_t}$ for the time step t as shown in table 1.
$I_t^m(i) \in \{0, 1\}$: 1 if agent m is at state i at time t, i.e., $s_t^m = i$
$I_t^m(i, j) \in \{0, 1\}$: 1 if agent m takes action j in state i at time t, i.e., $(s_t^m, a_t^m) = (i, j)$
$I_t^m(i, j, i') \in \{0, 1\}$: 1 if agent m takes action j in state i at time t and transitions to state i', i.e., $(s_t^m, a_t^m, s_{t+1}^m) = (i, j, i')$
$n_t(i) \in [0; M]$: number of agents at state i at time t
$n_t(i, j) \in [0; M]$: number of agents at state i taking action j at time t
$n_t(i, j, i') \in [0; M]$: number of agents at state i taking action j at time t and transitioning to state i' at time t+1
$\mathbf{n}_{s_t}$: count table $(n_t(i)\ \forall i \in S)$
$\mathbf{n}_{s_t a_t}$: count table $(n_t(i, j)\ \forall i \in S, j \in A)$
$\mathbf{n}_{s_t a_t s_{t+1}}$: count table $(n_t(i, j, i')\ \forall i, i' \in S, j \in A)$
Table 1: Summary of notations given the state-action trajectories $(s_{1:H}, a_{1:H})^m\ \forall m$ for all the agents
- We assume a general partially observable setting wherein agents can have different observations based on the collective influence of other agents. An agent observes its local state $s_t^m$. In addition, it also observes $o_t^m$ at time t based on its local state $s_t^m$ and the count table $\mathbf{n}_{s_t}$. E.g., an agent m in state i at time t can observe the count of other agents also in state i ($= n_t(i)$) or of other agents in some neighborhood of the state i ($= \{n_t(j)\ \forall j \in \mathrm{Nb}(i)\}$).
- The transition function is $\phi_t\big(s_{t+1}^m = i' \mid s_t^m = i, a_t^m = j, \mathbf{n}_{s_t}\big)$. The transition function is the same for all the agents. Notice that it is affected by $\mathbf{n}_{s_t}$, which depends on the collective behavior of the agent population.
- Each agent m has a non-stationary policy $\pi_t^m(j \mid i, o_t^m(i, \mathbf{n}_{s_t}))$ denoting the probability of agent m taking action j given its observation $(i, o_t^m(i, \mathbf{n}_{s_t}))$ at time t. We denote the policy over the planning horizon of an agent m by $\pi^m = (\pi_1^m, \ldots, \pi_H^m)$.
- An agent m receives the reward $r_t^m = r_t(i, j, \mathbf{n}_{s_t})$ dependent on its local state and action, and the counts $\mathbf{n}_{s_t}$.
- The initial state distribution, $b_o = (P(i)\ \forall i \in S)$, is the same for all agents.
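As a concrete illustration, the count tables of Table 1 can be assembled directly from sampled joint trajectories. Below is a minimal numpy sketch; the array layout and the helper name `count_tables` are our own choices, not from the paper:

```python
import numpy as np

def count_tables(states, actions, S, A):
    """Build n_t(i), n_t(i,j), n_t(i,j,i') from joint trajectories.

    states:  (M, H) int array, states[m, t] = state of agent m at time t
    actions: (M, H-1) int array, actions[m, t] = action taken at time t
    Returns the per-step count tables of Table 1.
    """
    M, H = states.shape
    n_s = np.zeros((H, S), dtype=int)            # n_t(i)
    n_sa = np.zeros((H - 1, S, A), dtype=int)    # n_t(i, j)
    n_sas = np.zeros((H - 1, S, A, S), dtype=int)  # n_t(i, j, i')
    for t in range(H):
        for m in range(M):
            n_s[t, states[m, t]] += 1
    for t in range(H - 1):
        for m in range(M):
            i, j, i2 = states[m, t], actions[m, t], states[m, t + 1]
            n_sa[t, i, j] += 1
            n_sas[t, i, j, i2] += 1
    return n_s, n_sa, n_sas
```

The consistency constraints of the set $\Omega_{1:H}$ (counts sum to M, marginals agree) hold by construction for tables built this way.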
We present here the simplest version where all the agents are of the same type having similar state
transition, observation and reward models. The model can handle multiple agent types where agents
have different dynamics based on their type. We can also incorporate an external state that is unaffected by agents? actions (such as taxi demand in transportation domain). Our results are extendible
to address such settings also.
Models such as CDec-POMDPs are useful in settings where agent population is large, and agent
identity does not affect the reward or the transition function. A motivating application of this model
is for the taxi-fleet optimization where the problem is to compute policies for taxis such that the total
profit of the fleet is maximized (Varakantham et al., 2012; Nguyen et al., 2017). The decision making
for a taxi is as follows. At time t, each taxi observes its current city zone z (different zones constitute
the state-space S), and also the count of other taxis in the current zone and its neighboring zones
as well as an estimate of the current local demand. This constitutes the count-based observation
o(?) for the taxi. Based on this observation, the taxi must decide whether to stay in the current
zone z to look for passengers or move to another zone. These decision choices depend on several
factors such as the ratio of demand and the count of other taxis in the current zone. Similarly, the
environment is stochastic with variable taxi demand at different times. Such historical demand data
is often available using GPS traces of the taxi fleet (Varakantham et al., 2012).
Count-Based statistic for planning: A key property in the CDec-POMDP model is that the model
dynamics depend on the collective interaction among agents rather than agent identities. In settings
such as taxi fleet optimization, the agent population size can be quite large (? 8000 for our real
world experiments). Given such a large population, it is not possible to compute unique policy for
each agent. Therefore, similar to previous work (Varakantham et al., 2012; Nguyen et al., 2017),
our goal is to compute a homogenous policy ? for all the agents. As the policy ? is dependent on
counts, it represents an expressive class of policies.
For a fixed population M, let $\{(s_{1:T}, a_{1:T})^m\ \forall m\}$ denote the state-action trajectories of different agents sampled from the DBN in figure 1. Let $\mathbf{n}_{1:T} = \{(\mathbf{n}_{s_t}, \mathbf{n}_{s_t a_t}, \mathbf{n}_{s_t a_t s_{t+1}})\ \forall t = 1{:}T\}$ be the combined vector of the resulting count tables for each time step t. Nguyen et al. show that the counts $\mathbf{n}$ are a sufficient statistic for planning. That is, the joint value function of a policy $\pi$ over horizon
H can be computed by the expectation over counts as (Nguyen et al., 2017):
$$V(\pi) = \sum_{\mathbf{n} \in \Omega_{1:H}} P(\mathbf{n}; \pi) \sum_{m=1}^{M} \sum_{T=1}^{H} E[r_T^m] = \sum_{T=1}^{H} \sum_{i \in S, j \in A} n_T(i, j)\, r_T\big(i, j, \mathbf{n}_T\big) \quad (1)$$
The set $\Omega_{1:H}$ is the set of all allowed consistent count tables:
$$\sum_{i \in S} n_T(i) = M\ \ \forall T; \quad \sum_{j \in A} n_T(i, j) = n_T(i)\ \ \forall i \in S, \forall T; \quad \sum_{i' \in S} n_T(i, j, i') = n_T(i, j)\ \ \forall i \in S, j \in A, \forall T$$
$P(\mathbf{n}; \pi)$ is the distribution over counts (detailed expression in the appendix). A key benefit of this result is that we can evaluate the policy $\pi$ by sampling counts $\mathbf{n}$ directly from $P(\mathbf{n})$ without sampling individual agent trajectories $(s_{1:H}, a_{1:H})^m$ for different agents, resulting in significant computational savings. Our goal is to compute the optimal policy $\pi$ that maximizes $V(\pi)$. We assume an RL setting with centralized learning and decentralized execution. We assume a simulator is available that can provide count samples from $P(\mathbf{n}; \pi)$.
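Policy evaluation via eq. (1) then needs only count samples, never per-agent trajectories. A minimal sketch, assuming the sampled count tables $\mathbf{n}_{s_t a_t}$ are stored as (H, S, A) arrays and the reward is supplied as a callable (both representation choices are ours):

```python
import numpy as np

def evaluate_policy(count_samples, reward_fn):
    """Monte Carlo estimate of V(pi) via eq. (1): average over count
    samples n ~ P(n; pi) of sum_T sum_{i,j} n_T(i,j) * r_T(i, j, n_T).

    count_samples: list of (H, S, A) arrays of counts n_t(i, j)
    reward_fn: callable r(t, i, j, n_t) -> float, where n_t is the
               (S, A) count slice at time t
    """
    total = 0.0
    for n_sa in count_samples:
        H, S, A = n_sa.shape
        for t in range(H):
            for i in range(S):
                for j in range(A):
                    if n_sa[t, i, j]:
                        total += n_sa[t, i, j] * reward_fn(t, i, j, n_sa[t])
    return total / len(count_samples)
```

With a constant per-agent reward of 1, the estimate reduces to M times the horizon, which is a quick sanity check on the count bookkeeping.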
3 Policy Gradient for CDec-POMDPs
Previous work proposed an expectation-maximization (EM) (Dempster et al., 1977) based sampling approach to optimize the policy $\pi$ (Nguyen et al., 2017). The policy is represented as a piecewise linear tabular policy over the space of counts $\mathbf{n}$, where each linear piece specifies a distribution over next actions. However, this tabular representation is limited in its expressive power as the number of pieces is fixed a priori, and the range of each piece has to be defined manually, which can adversely affect performance. Furthermore, exponentially many pieces are required when the observation o is multidimensional (i.e., an agent observes counts from some local neighborhood of its location). To address such issues, our goal is to optimize policies in a functional form such as a neural network.
We first extend the policy gradient theorem of (Sutton et al., 1999) to CDec-POMDPs. Let $\theta$ denote the vector of policy parameters. We next show how to compute $\nabla_\theta V(\pi)$. Let $s_t, a_t$ denote the joint state and joint action of all the agents at time t. The value function of a given policy $\pi$ in an expanded form is given as:
$$V_t(\pi) = \sum_{s_t, a_t} P^\pi(s_t, a_t \mid b_o, \pi)\, Q_t^\pi(s_t, a_t) \quad (2)$$
where $P^\pi(s_t, a_t \mid b_o) = \sum_{s_{1:t-1}, a_{1:t-1}} P^\pi(s_{1:t}, a_{1:t} \mid b_o)$ is the distribution of the joint state-action $s_t, a_t$ under the policy $\pi$. The value function $Q_t^\pi(s_t, a_t)$ is computed as:
$$Q_t^\pi(s_t, a_t) = r_t(s_t, a_t) + \sum_{s_{t+1}, a_{t+1}} P^\pi(s_{t+1}, a_{t+1} \mid s_t, a_t)\, Q_{t+1}^\pi(s_{t+1}, a_{t+1}) \quad (3)$$
We next state the policy gradient theorem for CDec-POMDPs:
Theorem 1. For any CDec-POMDP, the policy gradient is given as:
$$\nabla_\theta V_1(\pi) = \sum_{t=1}^{H} E_{s_t, a_t \mid b_o, \pi}\Big[ Q_t^\pi(s_t, a_t) \sum_{i \in S, j \in A} n_t(i, j)\, \nabla_\theta \log \pi_t\big(j \mid i, o(i, \mathbf{n}_{s_t})\big) \Big] \quad (4)$$
The proofs of this theorem and other subsequent results are provided in the appendix.
Notice that computing the policy gradient using the above result is not practical for multiple reasons. The space of joint state-actions $(s_t, a_t)$ is combinatorial. Given that the agent population size can be large, sampling each agent's trajectory is not computationally tractable. To remedy this, we later show how to compute the gradient by directly sampling counts $\mathbf{n} \sim P(\mathbf{n}; \pi)$, similar to policy evaluation in (1). Similarly, one can estimate the action-value function $Q_t^\pi(s_t, a_t)$ using empirical returns as an approximation. This would be the analogue of the standard REINFORCE algorithm (Williams, 1992) for CDec-POMDPs. It is well known that REINFORCE may learn more slowly than other methods that use a learned action-value function (Sutton et al., 1999). Therefore, we next present a function approximator for $Q_t^\pi$, and show the computation of the policy gradient by directly sampling counts $\mathbf{n}$.
3.1 Policy Gradient with Action-Value Approximation
One can approximate the action-value function $Q_t^\pi(s_t, a_t)$ in several different ways. We consider the following special form of the approximate value function $f_w$:
$$Q_t^\pi(s_t, a_t) \approx f_w(s_t, a_t) = \sum_{m=1}^{M} f_w^m\big(s_t^m, o(s_t^m, \mathbf{n}_{s_t}), a_t^m\big) \quad (5)$$
where each $f_w^m$ is defined for each agent m and takes as input the agent's local state, action and observation. Notice that the different components $f_w^m$ are correlated as they depend on the common count table $\mathbf{n}_{s_t}$. Such a decomposable form is useful as it leads to efficient policy gradient computation. Furthermore, an important class of approximate value functions having this form for CDec-POMDPs is the compatible value function (Sutton et al., 1999), which results in an unbiased policy gradient (details in the appendix).
Proposition 1. The compatible value function for CDec-POMDPs can be factorized as:
$$f_w(s_t, a_t) = \sum_{m} f_w^m\big(s_t^m, o(s_t^m, \mathbf{n}_{s_t}), a_t^m\big)$$
We can directly replace $Q^\pi(\cdot)$ in the policy gradient (4) by the approximate action-value function $f_w$. Empirically, we found that variance using this estimator was high. We exploit the structure of $f_w$ and show a further factorization of the policy gradient next which works much better empirically.
Theorem 2. For any value function having the decomposition
$$f_w(s_t, a_t) = \sum_{m} f_w^m\big(s_t^m, o(s_t^m, \mathbf{n}_{s_t}), a_t^m\big), \quad (6)$$
the policy gradient can be computed as
$$\nabla_\theta V_1(\pi) = \sum_{t=1}^{H} E_{s_t, a_t}\Big[ \sum_{m} \nabla_\theta \log \pi\big(a_t^m \mid s_t^m, o(s_t^m, \mathbf{n}_{s_t})\big)\, f_w^m\big(s_t^m, o(s_t^m, \mathbf{n}_{s_t}), a_t^m\big) \Big] \quad (7)$$
The above result shows that if the approximate value function is factored, then the resulting policy gradient also becomes factored. The above result also applies to agents with multiple types, as we assumed the function $f_w^m$ is different for each agent. In the simpler case when all the agents are of the same type, we have the same function $f_w$ for each agent, and also deduce the following:
$$f_w(s_t, a_t) = \sum_{i,j} n_t(i, j)\, f_w\big(i, j, o(i, \mathbf{n}_{s_t})\big) \quad (8)$$
Using the above result, we simplify the policy gradient as:
$$\nabla_\theta V_1(\pi) = \sum_{t} E_{s_t, a_t}\Big[ \sum_{i,j} n_t(i, j)\, \nabla_\theta \log \pi\big(j \mid i, o(i, \mathbf{n}_{s_t})\big)\, f_w\big(i, j, o(i, \mathbf{n}_{s_t})\big) \Big] \quad (9)$$
3.2 Count-based Policy Gradient Computation
Notice that in (9), the expectation is still w.r.t. the joint states and actions $(s_t, a_t)$, which is not efficient for large population sizes. To address this issue, we exploit the insight that the approximate value function in (8) and the inner expression in (9) depend only on the counts generated by the joint state and action $(s_t, a_t)$.
Theorem 3. For any value function having the form $f_w(s_t, a_t) = \sum_{i,j} n_t(i, j)\, f_w\big(i, j, o(i, \mathbf{n}_{s_t})\big)$, the policy gradient can be computed as:
$$\nabla_\theta V_1(\pi) = E_{\mathbf{n}_{1:H} \in \Omega_{1:H}}\Big[ \sum_{t=1}^{H} \sum_{i \in S, j \in A} n_t(i, j)\, \nabla_\theta \log \pi\big(j \mid i, o(i, \mathbf{n}_t)\big)\, f_w\big(i, j, o(i, \mathbf{n}_t)\big) \Big] \quad (10)$$
The above result shows that the policy gradient can be computed by sampling count table vectors $\mathbf{n}_{1:H}$ from the underlying distribution $P(\cdot)$, analogous to computing the value function of the policy in (1), which is tractable even for large population sizes.
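To make the structure of the count-based estimator in eq. (10) concrete, here is a toy numpy sketch for a tabular softmax policy $\pi_\theta(j \mid i)$. For brevity we drop the observation (count) dependence of both the policy and the critic, so this illustrates only the shape of the estimator, not the paper's full method; the helper names are ours:

```python
import numpy as np

def softmax_policy(theta):
    # theta: (S, A) logits; pi[i, j] = pi(j | i)
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def count_based_gradient(theta, count_samples, f_w):
    """Sample estimate of eq. (10) for a tabular softmax policy.

    count_samples: list of (H, S, A) count arrays n_t(i, j)
    f_w: (S, A) array of critic values f_w(i, j) (count dependence dropped)
    Returns the gradient estimate w.r.t. theta.
    """
    pi = softmax_policy(theta)
    S, A = theta.shape
    grad = np.zeros_like(theta)
    for n_sa in count_samples:
        for t in range(n_sa.shape[0]):
            for i in range(S):
                for j in range(A):
                    c = n_sa[t, i, j]
                    if c == 0:
                        continue
                    # grad of log pi(j|i) w.r.t. theta[i, :] is e_j - pi(.|i)
                    g = -pi[i]
                    g[j] += 1.0
                    grad[i] += c * g * f_w[i, j]
    return grad / len(count_samples)
```

Each count $n_t(i, j)$ weights the score term for state-action pair $(i, j)$, so no per-agent trajectory is ever enumerated.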
4 Training Action-Value Function
In our approach, after count samples $\mathbf{n}_{1:H}$ are generated to compute the policy gradient, we also need to adjust the parameters w of our critic $f_w$. Notice that as per (8), the action-value function $f_w(s_t, a_t)$ depends only on the counts generated by the joint state and action $(s_t, a_t)$. Training $f_w$ can be done by taking a gradient step to minimize the following loss function:
$$\min_w \sum_{\xi=1}^{K} \sum_{t=1}^{H} \big( f_w(\mathbf{n}_t^\xi) - R_t^\xi \big)^2 \quad (11)$$
where $\mathbf{n}_{1:H}^\xi$ is a count sample generated from the distribution $P(\mathbf{n}; \pi)$; $f_w(\mathbf{n}_t^\xi)$ is the action-value function and $R_t^\xi$ is the total empirical return for time step t computed using (1):
$$f_w(\mathbf{n}_t^\xi) = \sum_{i,j} n_t^\xi(i, j)\, f_w\big(i, j, o(i, \mathbf{n}_t^\xi)\big); \qquad R_t^\xi = \sum_{T=t}^{H} \sum_{i \in S, j \in A} n_T^\xi(i, j)\, r_T\big(i, j, \mathbf{n}_T^\xi\big) \quad (12)$$
However, we found that the loss in (11) did not work well for training the critic $f_w$ for larger problems. Several count samples were required to reliably train $f_w$, which adversely affects scalability for large problems with many agents. It is already known in multiagent RL that algorithms that solely rely on the global reward signal (e.g., $R_t^\xi$ in our case) may require many more samples than approaches that take advantage of local reward signals (Bagnell and Ng, 2005). Motivated by this observation, we next develop a local reward signal based strategy to train the critic $f_w$.
Individual Value Function: Let $\mathbf{n}_{1:H}^\xi$ be a count sample. Given the count sample $\mathbf{n}_{1:H}^\xi$, let $V_t^\xi(i, j) = E\big[\sum_{t'=t}^{H} r_{t'}^m \mid s_t^m = i, a_t^m = j, \mathbf{n}_{1:H}^\xi\big]$ denote the total expected reward obtained by an agent that is in state i and takes action j at time t. This individual value function can be computed using dynamic programming as shown in (Nguyen et al., 2017). Based on this value function, we next show an alternative reparameterization of the global empirical reward $R_t^\xi$ in (12):
Lemma 1. The empirical return $R_t^\xi$ for the time step t given the count sample $\mathbf{n}_{1:H}^\xi$ can be reparameterized as: $R_t^\xi = \sum_{i \in S, j \in A} n_t^\xi(i, j)\, V_t^\xi(i, j)$.
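The dynamic program behind $V_t^\xi(i, j)$ can be sketched as a standard backward recursion. In the sketch below we assume the count-conditioned transition probabilities and rewards have already been materialized as plain arrays for the fixed count sample, and use a stationary policy for simplicity; these simplifications and the function name are ours:

```python
import numpy as np

def individual_values(rewards, trans, pi):
    """Backward DP for V_t(i, j): expected future reward of an agent at
    (i, j) at time t, with dynamics conditioned on a fixed count sample.

    rewards: (H, S, A) array, r_t(i, j, n_t)
    trans:   (H-1, S, A, S) array, P(i' | i, j, n_t) under the count sample
    pi:      (S, A) policy, pi[i, j] = pi(j | i)
    """
    H, S, A = rewards.shape
    V = np.zeros((H, S, A))
    V[H - 1] = rewards[H - 1]
    for t in range(H - 2, -1, -1):
        # expected next value per state: sum_{j'} pi(j'|i') V_{t+1}(i', j')
        next_v = (pi * V[t + 1]).sum(axis=1)      # shape (S,)
        # V_t(i,j) = r_t(i,j) + sum_{i'} P(i'|i,j) * next_v[i']
        V[t] = rewards[t] + trans[t] @ next_v
    return V
```

Summing $n_t^\xi(i, j) V_t^\xi(i, j)$ over $(i, j)$ then recovers $R_t^\xi$ as stated in Lemma 1.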
Individual Value Function Based Loss: Given lemma 1, we next derive an upper bound on the true loss (11) which effectively utilizes individual value functions:
$$\sum_{\xi}\sum_{t}\big(f_w(\mathbf{n}_t^\xi) - R_t^\xi\big)^2 = \sum_{\xi}\sum_{t}\Big(\sum_{i,j} n_t^\xi(i, j)\,\big[f_w\big(i, j, o(i, \mathbf{n}_t^\xi)\big) - V_t^\xi(i, j)\big]\Big)^2 \quad (13)$$
$$\leq M \sum_{\xi}\sum_{t,i,j} n_t^\xi(i, j)\,\big(f_w\big(i, j, o(i, \mathbf{n}_t^\xi)\big) - V_t^\xi(i, j)\big)^2 \quad (14)$$
where the last relation is derived by the Cauchy-Schwarz inequality. We train the critic using the modified loss function in (14). Empirically, we observed that for larger problems, this new loss function in (14) resulted in much faster convergence than the original loss function in (13). Intuitively, this is because the new loss (14) tries to adjust each critic component $f_w(i,j,o(i,n^\xi_t))$ closer to its counterpart empirical return $V^\xi_t(i,j)$. However, in the original loss function (13), the focus is on minimizing the global loss, rather than adjusting each individual critic factor $f_w(\cdot)$ towards the corresponding empirical return.
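The Cauchy-Schwarz step behind (14) is easy to check numerically for a single (ξ, t) term. In this sketch M is taken as the total agent count Σ n_t(i, j), which is one valid constant for the bound (the paper's exact M may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, M = 4, 3, 30                              # states, actions, total agents

# counts n(i, j) summing to M, and critic errors d(i, j) = f_w(i,j,o) - V(i,j)
n = rng.multinomial(M, np.ones(S * A) / (S * A)).reshape(S, A).astype(float)
d = rng.normal(size=(S, A))

global_term = (n * d).sum() ** 2                # one (xi, t) term of the loss in (13)
factored_bound = M * (n * d * d).sum()          # matching term of the bound in (14)

# Cauchy-Schwarz with vectors sqrt(n) and sqrt(n)*d:
# (sum n d)^2 <= (sum n)(sum n d^2) = M * sum n d^2
assert global_term <= factored_bound + 1e-9
```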
Algorithm 1 shows the outline of our AC approach for CDec-POMDPs. Lines 7 and 8 show two different options to train the critic. Line 7 represents the critic update based on local value functions, also referred to as the factored critic update (fC). Line 8 shows the update based on the global reward, or global critic update (C). Line 10 shows the policy gradient computed using theorem 2 (fA). Line 11 shows how the gradient is computed by directly using f_w from eq. (5) in eq. (4).
Algorithm 1: Actor-Critic RL for CDec-POMDPs
1: Initialize network parameters θ for actor π and w for critic f_w
2: α ← actor learning rate
3: β ← critic learning rate
4: repeat
5:   Sample count vectors n^ξ_{1:H} ~ P(n_{1:H}; π)  ∀ξ = 1 to K
6:   Update critic as:
7:     fC: w = w − β (1/K) ∇_w [ Σ_ξ Σ_{t,i,j} n^ξ_t(i,j) ( f_w(i,j,o(i,n^ξ_t)) − V^ξ_t(i,j) )^2 ]
8:     C:  w = w − β (1/K) ∇_w [ Σ_ξ Σ_t ( Σ_{i,j} n^ξ_t(i,j) f_w(i,j,o(i,n^ξ_t)) − Σ_{i,j} n^ξ_t(i,j) V^ξ_t(i,j) )^2 ]
9:   Update actor as:
10:    fA: θ = θ + α (1/K) Σ_ξ Σ_t Σ_{i,j} n^ξ_t(i,j) ∇_θ log π(j | i, o(i,n^ξ_t)) f_w(i,j,o(n^ξ_t,i))
11:    A:  θ = θ + α (1/K) Σ_ξ Σ_t ( Σ_{i,j} n^ξ_t(i,j) f_w(i,j,o(n^ξ_t,i)) ) ∇_θ ( Σ_{i,j} n^ξ_t(i,j) log π(j | i, o(i,n^ξ_t)) )
12: until convergence
13: return θ, w

5 Experiments
This section compares the performance of our AC approach with two other approaches for solving CDec-POMDPs: Soft-Max based flow update (SMFU) (Varakantham et al., 2012), and the Expectation-Maximization (EM) approach (Nguyen et al., 2017). SMFU can only optimize policies where an agent's action depends only on its local state, π(a^m_t | s^m_t), as it approximates the effect of counts n by computing the single most likely count vector during the planning phase. The EM approach can optimize count-based piecewise linear policies where π_t(a^m_t | s^m_t, ·) is a piecewise function over the space of all possible count observations o_t.
Algorithm 1 shows two ways of updating the critic (in lines 7, 8) and two ways of updating the actor (in lines 10, 11), leading to 4 possible settings for our actor-critic approach: fAfC, AC, AfC, fAC. We also investigate the properties of these different actor-critic approaches. The neural network structure and other experimental settings are provided in the appendix.
For fair comparisons with previous approaches, we use three different models for the count-based observation o_t. In the 'o0' setting, policies depend only on the agent's local state s^m_t and not on counts. In the 'o1' setting, policies depend on the local state s^m_t and the single count observation n_t(s^m_t); that is, the agent can only observe the count of other agents in its current state s^m_t. In the 'oN' setting, the agent observes its local state s^m_t and also the count of other agents from a local neighborhood (defined later) of the state s^m_t. The 'oN' observation model provides the most information to an agent. However, it is also much more difficult to optimize as policies have more parameters. SMFU only works with the 'o0' setting; EM and our actor-critic approach work for all the settings.
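The three observation models can be written as simple functions of the count table n_t. A minimal sketch follows; the 4-zone count table and the adjacency map are assumed toy examples, not the paper's zonal decomposition:

```python
import numpy as np

def obs_o0(i, n_t, nbrs):
    return None                                 # 'o0': policy sees only the local state

def obs_o1(i, n_t, nbrs):
    return n_t[i]                               # 'o1': count of agents in the agent's own zone

def obs_oN(i, n_t, nbrs):
    return n_t[[i] + nbrs[i]]                   # 'oN': own-zone count + neighboring-zone counts

n_t = np.array([5, 0, 3, 2])                    # toy count table over 4 zones
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed adjacency, for illustration

assert obs_o1(2, n_t, nbrs) == 3
assert obs_oN(2, n_t, nbrs).tolist() == [3, 0, 2]
```

Moving from o0 to oN grows the observation (and hence the policy input) from nothing to 1 + |neighbors| counts, which matches the quality/optimization trade-off discussed above.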
Taxi Supply-Demand Matching: We test our approach on this real-world domain described in
section 2, and introduced in (Varakantham et al., 2012). In this problem, the goal is to compute taxi
policies for optimizing the total revenue of the fleet. The data contains GPS traces of taxi movement
in a large Asian city over 1 year. We use the observed demand information extracted from this
dataset. On an average, there are around 8000 taxis per day (data is not exhaustive over all taxi
operators). The city is divided into 81 zones and the plan horizon is 48 half hour intervals over 24
hours. For details about the environment dynamics, we refer to (Varakantham et al., 2012).
Figure 2(a) shows the quality comparisons among different approaches with different observation models ('o0', 'o1' and 'oN'). We test with the total number of taxis as 4000 and 8000 to see if taxi population size affects the relative performance of different approaches. The y-axis shows the average per-day profit for the entire fleet. For the 'o0' case, all approaches (fAfC-'o0', SMFU, EM-'o0') give similar quality, with fAfC-'o0' and EM-'o0' performing slightly better than SMFU for the 8000 taxis. For the 'o1' case, there is a sharp improvement in quality by fAfC-'o1' over fAfC-'o0', confirming that taking count based observations into account results in better policies. Our approach fAfC-'o1' is also significantly better than the policies optimized by EM-'o1' for both the 4000 and 8000 taxi settings.
[Figure 2: Solution quality comparisons on the taxi problem and the grid navigation. (a) Solution quality with varying taxi population (4000 and 8000 taxis; methods: fAfC-'o0', fAfC-'o1', fAfC-'oN', SMFU, EM-'o0', EM-'o1', EM-'oN'). (b) Solution quality in the grid navigation problem.]
[Figure 3: Convergence of different actor-critic variants (fAfC, AC, AfC, fAC) on the taxi problem with 8000 taxis. (a) AC convergence with 'o0'; (b) AC convergence with 'o1'; (c) AC convergence with 'oN'. Quality is plotted against training iterations.]
To further test the scalability and the ability to optimize complex policies by our approach in the 'oN' setting, we define the neighborhood of each state (which is a zone in the city) to be the set of its
geographically connected zones based on the zonal decomposition shown in (Nguyen et al., 2017). On average, there are about 8 neighboring zones for a given zone, resulting in 9 count based observations available to the agent for taking decisions. Each agent observes both the taxi count and the demand information from such neighboring zones. In figure 2(a), the fAfC-'oN' result clearly shows that taking multiple observations into account significantly increases solution quality: fAfC-'oN' provides an increase of 64% in quality over fAfC-'o0' and 20% over fAfC-'o1' for the 8000 taxi case. For EM-'oN', we used a bare minimum of 2 pieces per observation dimension (resulting in 2^9 pieces per time step). We observed that EM was unable to converge within 30K iterations and provided even worse quality than EM-'o1' at the end. These results show that despite the larger search space, our fAfC approach can effectively optimize complex policies whereas the tabular policy based EM approach was ineffective for this case.
Figures 3(a-c) show the quality vs. iterations for different variations of our actor-critic approach (fAfC, AC, AfC, fAC) for the 'o0', 'o1' and 'oN' observation models. These figures clearly show that using the factored actor and the factored critic update in fAfC is the most reliable strategy over all the other variations and for all the observation models. Variations such as AC and fAC were not able to converge at all despite having exactly the same parameters as fAfC. These results validate the different strategies that we have developed in our work to make vanilla AC converge faster for large problems.
Robot navigation in a congested environment: We also tested on a synthetic benchmark introduced in (Nguyen et al., 2017). The goal is for a population of robots (= 20) to move from a set of initial locations to a goal state in a 5x5 grid. If there is congestion on an edge, then each agent attempting to cross the edge has a higher chance of action failure. Similarly, agents also receive a negative reward if there is edge congestion. On successfully reaching the goal state, agents receive a positive reward and transition back to one of the initial states. We set the horizon to 100 steps.

Figure 2(b) shows the solution quality comparisons among different approaches. In the 'oN' observation model, the agent observes its 4 immediate neighbor nodes' count information. In this problem, SMFU performed worst; fAfC and EM both performed much better. As expected, fAfC-'oN' provides the best solution quality over all the other approaches. In this domain, EM is competitive with fAfC because, for this relatively smaller problem, the space of counts is much smaller than in the taxi domain. Therefore, EM's piecewise policy is able to provide a fine grained approximation over the count range.
6 Summary
We addressed the problem of collective multiagent planning where the collective behavior of a population of agents affects the model dynamics. We developed a new actor-critic method for solving such collective planning problems within the CDec-POMDP framework. We derived several new results for CDec-POMDPs such as the policy gradient derivation, and the structure of the compatible value function. To overcome the slow convergence of the vanilla actor-critic method, we developed multiple techniques based on value function factorization and training the critic using individual value functions of agents. Using such techniques, our approach provided significantly better quality than previous approaches, and proved scalable and effective for optimizing policies in a real-world taxi supply-demand problem and a synthetic grid navigation problem.
7 Acknowledgments
This research project is supported by the National Research Foundation Singapore under its Corp Lab @ University scheme and Fujitsu Limited. The first author is also supported by an A*STAR graduate scholarship.
References
Aberdeen, D. (2006). Policy-gradient methods for planning. In Advances in Neural Information Processing Systems, pages 9-16.
Amato, C., Konidaris, G., Cruz, G., Maynor, C. A., How, J. P., and Kaelbling, L. P. (2015). Planning for decentralized control of multiple robots under uncertainty. In IEEE International Conference on Robotics and Automation, ICRA, pages 1241-1248.
Bagnell, J. A. and Ng, A. Y. (2005). On local rewards and scaling distributed reinforcement learning. In International Conference on Neural Information Processing Systems, pages 91-98.
Becker, R., Zilberstein, S., and Lesser, V. (2004a). Decentralized Markov decision processes with event-driven interactions. In Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems, pages 302-309.
Becker, R., Zilberstein, S., Lesser, V., and Goldman, C. V. (2004b). Solving transition independent decentralized Markov decision processes. Journal of Artificial Intelligence Research, 22:423-455.
Bernstein, D. S., Givan, R., Immerman, N., and Zilberstein, S. (2002). The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27:819-840.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.
Foerster, J. N., Assael, Y. M., de Freitas, N., and Whiteson, S. (2016). Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pages 2137-2145.
Guestrin, C., Lagoudakis, M., and Parr, R. (2002). Coordinated reinforcement learning. In ICML, volume 2, pages 227-234.
Konda, V. R. and Tsitsiklis, J. N. (2003). On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166.
Kumar, A., Zilberstein, S., and Toussaint, M. (2011). Scalable multiagent planning using probabilistic inference. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2140-2146, Barcelona, Spain.
Kumar, A., Zilberstein, S., and Toussaint, M. (2015). Probabilistic inference techniques for scalable multiagent decision making. Journal of Artificial Intelligence Research, 53(1):223-270.
Leibo, J. Z., Zambaldi, V. F., Lanctot, M., Marecki, J., and Graepel, T. (2017). Multi-agent reinforcement learning in sequential social dilemmas. In International Conference on Autonomous Agents and Multiagent Systems.
Meyers, C. A. and Schulz, A. S. (2012). The complexity of congestion games. Networks, 59:252-260.
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M. A., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
Nair, R., Varakantham, P., Tambe, M., and Yokoo, M. (2005). Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In AAAI Conference on Artificial Intelligence, pages 133-139.
Nguyen, D. T., Kumar, A., and Lau, H. C. (2017). Collective multiagent sequential decision making under uncertainty. In AAAI Conference on Artificial Intelligence, pages 3036-3043.
Pajarinen, J., Hottinen, A., and Peltonen, J. (2014). Optimizing spatial and temporal reuse in wireless networks by decentralized partially observable Markov decision processes. IEEE Trans. on Mobile Computing, 13(4):866-879.
Peshkin, L., Kim, K.-E., Meuleau, N., and Kaelbling, L. P. (2000). Learning to cooperate via policy search. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 489-496. Morgan Kaufmann Publishers Inc.
Robbel, P., Oliehoek, F. A., and Kochenderfer, M. J. (2016). Exploiting anonymity in approximate linear programming: Scaling to large multiagent MDPs. In AAAI Conference on Artificial Intelligence, pages 2537-2543.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. (2015). Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897.
Sonu, E., Chen, Y., and Doshi, P. (2015). Individual planning in agent populations: Exploiting anonymity and frame-action hypergraphs. In International Conference on Automated Planning and Scheduling, pages 202-210.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (1999). Policy gradient methods for reinforcement learning with function approximation. In International Conference on Neural Information Processing Systems, pages 1057-1063.
Varakantham, P., Adulyasak, Y., and Jaillet, P. (2014). Decentralized stochastic planning with anonymity in interactions. In AAAI Conference on Artificial Intelligence, pages 2505-2511.
Varakantham, P. R., Cheng, S.-F., Gordon, G., and Ahmed, A. (2012). Decision support for agent populations in uncertain and congested environments. In AAAI Conference on Artificial Intelligence, pages 1471-1477.
Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256.
Winstein, K. and Balakrishnan, H. (2013). TCP ex machina: Computer-generated congestion control. In Proceedings of the ACM SIGCOMM 2013 Conference, SIGCOMM '13, pages 123-134.
Witwicki, S. J. and Durfee, E. H. (2010). Influence-based policy abstraction for weakly-coupled Dec-POMDPs. In International Conference on Automated Planning and Scheduling, pages 185-192.
Continuous Speech Recognition System
Yochai Konig, Nelson Morgan, Chuck Wooters
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704, USA.
Victor Abrash, Michael Cohen, Horacio Franco
SRI International
333 Ravenswood Ave.
Menlo Park, CA 94025, USA
Abstract
We would like to incorporate speaker-dependent consistencies, such as
gender, in an otherwise speaker-independent speech recognition system.
In this paper we discuss a Gender Dependent Neural Network (GDNN)
which can be tuned for each gender, while sharing most of the speaker
independent parameters. We use a classification network to help generate
gender-dependent phonetic probabilities for a statistical (HMM) recognition system. The gender classification net predicts the gender with high
accuracy, 98.3% on a Resource Management test set. However, the integration of the GDNN into our hybrid HMM-neural network recognizer
provided an improvement in the recognition score that is not statistically
significant on a Resource Management test set.
1 INTRODUCTION
Earlier work [Bourlard and Morgan, 1991] has shown the ability of Multilayer Perceptrons (MLPs) to estimate emission probabilities for Hidden Markov Models (HMM). As shown in their report, with a few assumptions, an MLP may be viewed as estimating the probability P(q|x) where q is a subword model (or a state of a subword model) and x is the input acoustic
speech data. In this hybrid HMM/MLP recognizer, it was shown that these estimates led to improved performance over standard estimation techniques when a fairly simple HMM was used. More recent results have shown improvements using hybrid HMM/MLP probability estimation over a state-of-the-art pure HMM-based system [Cohen et al., 1993; Renals et al., 1992].
Some speaker dependencies exist in common parametric representations of speech, and it is possible that making the dependencies explicit may improve performance for a given speaker (essentially enabling the recognizer to soften the influence of the speaker dependency). The basic problem with modeling and explicitly estimating speaker dependent parameters is the lack of training data. In the limit, the only available information about a new speaker is the utterance to be recognized. This limit is our starting point for this study. Even with this limited information, we can incorporate constraints on analysis that rely on the same speaker producing all the frames in an utterance, thus ensuring consistency. As has been observed for some mainstream Hidden Markov Model (HMM) systems [Murveit et al., 1990], given enough training data, separate phonetic models for male and female speakers can be used to improve performance. Our first attack on consistency, then, is to
incorporate gender consistency in the recognition process. In contrast to non-connectionist
HMM systems, our proposed architecture attempts to share the gender-independent parameters.
Our study had two steps: first we trained an MLP to estimate the probability of gender.
Then, we investigated ways to integrate the gender consistency constraint into our existing
MLP-HMM hybrid recognizer, resulting in our GDNN architecture. We will give a short
description of some related work, followed by an explanation of the two steps described
above. We conclude with some discussion and thoughts about future work.
2 RELATED AND PREVIOUS WORK
Our previous experiments with the Gender-Dependent Neural Network (GDNN) are described in [Abrash et al., 1992; Konig and Morgan, 1992]. Other researchers have worked on related problems. For example, Hampshire and Waibel presented the "Meta-Pi" architecture [Hampshire and Waibel, 1990]. The building blocks for the "Meta-Pi" architecture
are multiple TDNN's that are trained to recognize the speech of an individual speaker.
These building blocks are integrated by another multiple TDNN trained in a Bayesian
MAP scheme to maximize the phoneme recognition rate of the overall architecture. The
performance of the "Meta-Pi" architecture on a six speaker /b,d,g/ task was comparable to
a speaker dependent system on the same task.
Another example of related work is speaker normalization, which attempts to minimize
between-speaker variations by transforming the data of a new speaker to that of a reference
speaker, and then applying the speaker dependent system for the reference speaker [Huang
et al., 1991].
3 THE CLASSIFICATION NET
In order to classify the gender of a new speaker we need features that distinguish between
speakers, in contrast to the features that are used for phoneme recognition that are chosen
to suppress speaker variations. Given our constraint that the only available information
about the new speaker is the sentence to be recognized, we chose features that are a
rough estimate of the vocal tract properties and the fundamental frequency of the new
speaker. Furthermore, we tried to suppress the linguistic information in our estimate. More
specifically, the goal was to build a net that estimates the probability P(GenderIData).
After some experimentation, the first twelve LPC cepstral coefficients were calculated over
a 20 msec window every 10 msec (50% overlap) and averaged along each sentence. The
sampling rate was 16khz. These features were augmented by an estimate of the fundamental
frequency for a total of 13 features per sentence. The MLP had one hidden layer with 24
hidden units. There were two output units, one for each gender. The training set was the
109-speaker DARPA Resource Management corpus. 3500 sentences were used for the
training set and 490 in the cross validation set. The size of the test set was 600 sentences,
and it was a combination of the DARPA Resource Management speaker-independent Feb89
and Oct89 test sets. The trained MLP predicts the gender for the test set with less than 1.7%
error on the sentence level.
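The per-sentence feature vector described above (twelve averaged LPC cepstral coefficients plus a fundamental-frequency estimate) can be sketched as follows; the frame-level cepstra and the f0 value are assumed to come from an existing front end:

```python
import numpy as np

def sentence_features(cepstra, f0):
    """cepstra: (T, 12) frame-level LPC cepstral coefficients for one sentence;
    f0: scalar fundamental-frequency estimate. Returns the 13-dim gender feature."""
    return np.concatenate([cepstra.mean(axis=0), [f0]])

cepstra = np.random.default_rng(4).normal(size=(200, 12))   # ~2 s of 10 ms frames
feat = sentence_features(cepstra, f0=120.0)
assert feat.shape == (13,)
```

Averaging over the whole sentence deliberately smears out the linguistic content, leaving a rough vocal-tract summary plus pitch as the gender cues.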
4 INCORPORATING GENDER CONSISTENCY INTO OUR HYBRID HMM/MLP RECOGNIZER

4.1 DISCUSSION
Our goal is to find an architecture that shares the gender independent parameters and
models the gender dependent parameters. Given our gender consistency constraint we
estimate a probability that is explicitly conditioned on gender, as if the phonetic models
were simply doubled to permit male and female forms of each phoneme. We can express P(male, phone|data) (which is then divided by priors to get the corresponding data likelihood) by expansion to P(phone|male, data) x P(male|data). This factorization is realized by two separate MLPs: P(male|data) is estimated by the classification net described above, and P(phone|male, data) is realized by our GDNN described below.
For further description of how to factorize probabilities by neural networks see [Morgan and Bourlard, 1992]. The final likelihood for the male case can be expressed as:
P(data | phone, male) = [ P(phone | male, data) x P(male | data) x P(data) ] / [ P(phone | male) x P(male) ]    (1)
Note that during recognition, P(data) can be ignored. Similarly, a female-assumed probability can be computed for each hypothesized phone. These male and female-assumed
probabilities can then be used in separate Viterbi calculations (since we do not permit any
hypothesis to switch gender in the midst of an utterance). In other words, dynamic programming is used with the framewise network outputs to evaluate the best hypothesized
utterance assuming male gender, and then the same is done for the female case. The case
with the lowest cost (highest probability) is then chosen. Note that the output of the classification net only helps in choosing between the sentence recognized according to female
gender or male gender.
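Schematically, this decision can be sketched as below; the two full per-gender Viterbi passes are replaced by a framewise best-phone log-likelihood sum for brevity, and all probabilities are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
T, Q = 40, 10                                    # frames, phone classes

# log P(phone | gender, data) per frame (e.g., from a gender-conditioned net)
logp_phone = {"male": rng.normal(size=(T, Q)), "female": rng.normal(size=(T, Q))}
logp_gender = {"male": np.log(0.9), "female": np.log(0.1)}   # from the classification net

score = {}
for g in ("male", "female"):
    # stand-in for the gender-g Viterbi pass over the whole utterance
    score[g] = logp_phone[g].max(axis=1).sum() + logp_gender[g]

choice = max(score, key=score.get)               # keep the higher-probability hypothesis
```

Because one gender is fixed per pass, no hypothesis can switch gender mid-utterance, which is exactly the consistency constraint described above.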
The critical question is how to estimate P(phone|gender, data). A possible answer is to
have two separate nets, one trained only on males, and the other trained only on females.
This approach has the potential disadvantages of doubling the number of parameters in the
system, and of not sharing the gender-independent parameters. We have experimented with such a net [Konig and Morgan, 1992] and it improved our result over the baseline system.
[Figure 1: A Gender Dependent Neural Network (GDNN). Outputs: P(phone|gender, data) over 69 units; one hidden layer of 1000 units; inputs: 9 x (12 mel cepstral + log energy + first derivatives) = 234 acoustic features, plus two binary gender inputs (if gender == Male then {M=1, F=0}, otherwise {M=0, F=1}).]
We present here a hybrid GDNN architecture that has the flexibility to tune itself to each
gender. The idea is to have extra binary inputs that specify the gender of the speaker and
then the probabilities that the network estimates will be conditioned on the gender. The
architecture is shown in figure 1.
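A minimal forward pass for this idea (random weights and a tanh hidden layer are illustrative assumptions; dimensions follow the experimental setup): the two binary gender units are appended to the 234 acoustic inputs, so the hidden and output weights are shared across genders while the gender bits condition the phonetic estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 234, 1000, 69               # 9 frames x 26 features; 69 phone units

W1 = rng.normal(0, 0.01, size=(n_hid, n_in + 2)) # +2 binary gender inputs
b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.01, size=(n_out, n_hid))
b2 = np.zeros(n_out)

def gdnn(x, male):
    g = np.array([1.0, 0.0]) if male else np.array([0.0, 1.0])
    h = np.tanh(W1 @ np.concatenate([x, g]) + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())
    return e / e.sum()                           # softmax estimate of P(phone | gender, data)

x = rng.normal(size=n_in)
pm, pf = gdnn(x, male=True), gdnn(x, male=False)
assert np.isclose(pm.sum(), 1.0) and np.isclose(pf.sum(), 1.0)
assert not np.allclose(pm, pf)                   # gender bits change the estimates
```

The same acoustic input yields two different phonetic distributions, one per gender bit pattern, while almost all weights remain shared.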
4.2 EXPERIMENTS AND RESULTS
We have compared four different architectures. The first architecture is our baseline system,
namely, one large net that was trained on all the sentences in the training set. The second
uses two separate nets, for males and females. The third is the hybrid GDNN architecture
described in figure 1. The fourth architecture is a variant of the third architecture, the
difference being that the binary units are connected to the output units instead of to the
hidden units. All the nets have 1000 hidden units and 69 output units, including the totally
separate male and female nets. While one might think that the consequent doubling of
Konig, Morgan, Wooters, Abrash, Cohen, and Franco
Table 1: Result Summary

Architecture                                        Test Set Word Error
Baseline                                            10.6%
Two Separate Nets                                   10.9%
Hybrid Architecture - Variant (Binary to Output)    10.9%
GDNN (Binary to Hidden)                             10.2%
the number of parameters in the system might explain the observed degradation in performance
for the second architecture, we have also experimented with several sizes of
male and female separate nets, by changing the number of the hidden units and the number
of input frames. None of these experiments resulted in a significant improvement. We used
12 mel-cepstral features and the log-energy along with their first derivatives, so the number
of input features per frame was 26. The features were calculated from 20ms of speech,
computed every 10 msec (as before). The length of the temporal window (the number of
input frames) was 9, so the total number of input features was 234. The training set was
the 109-speaker DARPA Resource Management corpus. 3500 sentences were used for the
training set and 490 in the cross-validation set. The test set consisted of the 600 sentences
making up the DARPA Feb89 and Oct89 test sets. The results are summarized in Table 1,
and are achieved using the standard Resource Management wordpair grammar (perplexity
= 60) with a simple context-independent HMM recognizer. We should note here that these
results are all somewhat worse than our other results published in [Renals et al., 1992;
Cohen et al., 1993], as the latter were achieved using SRI's phonological models, and these
were done with a single-pronunciation single-state HMM (with each state repeated for a
rough duration model).
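The 234-dimensional input described above can be sketched as a 9-frame context window of 26-dimensional feature vectors (12 mel-cepstra + log-energy + their first derivatives). Repeating the edge frames at utterance boundaries is an assumption for illustration; the text does not specify the boundary handling.

```python
import numpy as np

def context_windows(frames, width=9):
    """Stack `width` consecutive frames into one input vector per time step."""
    T, d = frames.shape
    half = width // 2
    padded = np.pad(frames, ((half, half), (0, 0)), mode="edge")  # assumed boundary handling
    return np.stack([padded[t:t + width].ravel() for t in range(T)])

T = 50
frames = np.random.default_rng(2).normal(size=(T, 26))  # one 26-dim vector per 10 ms frame
X = context_windows(frames)  # shape (50, 9 * 26) = (50, 234)
```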
5
DISCUSSION AND FUTURE WORK
The best results were achieved by the GDNN hybrid architecture, which shares the gender-independent parameters while modeling the gender-dependent parameters. However, the
improvement over our baseline is not statistically significant for this test set, although it is
consistent with our experiments with other test sets, not reported here. A possible source
for further improvement is using a training set with a more balanced representation of
gender. In the DARPA Resource Management speaker independent training set there are
2830 sentences uttered by males versus only 1160 sentences uttered by females. Thus,
performance may have suffered from an insufficient number of female training sentences.
A reasonable extension to this work would be the modeling of additional speaker dependent
parameters such as speech rate, accent, etc. Another direction that might be more fruitful
is to combine the gender-dependent models in the local estimation of phonemes, and not
to do separate Viterbi recognitions for each gender. We are currently examining this latter
alternative.
Acknowledgements
Thanks to Steve Renals for his comments along the way. Computations were done on the
RAP machine, with support from software guru Phil Kohn, and hardware wiz Jim Beck.
Thanks to Hynek Hermansky for advising us about the features for the gender classification
net. Thanks to the other members of the speech group at ICSI for their helpful comments.
This work was partially funded by DARPA contract MDA904-90-C-5253.
References
[Abrash et al., 1992] V. Abrash, H. Franco, M. Cohen, N. Morgan, and Y. Konig. Connectionist gender adaptation in a hybrid neural network / hidden markov model speech
recognition system. In Proc. Int'l Conf. on Spoken Lang. Processing, Banff, Canada,
October 1992.
[Bourlard and Morgan, 1991] H. Bourlard and N. Morgan. Merging multilayer perceptrons & hidden Markov models: Some experiments in continuous speech recognition.
In E. Gelenbe, editor, Artificial Neural Networks: Advances and Applications. North
Holland Press, 1991.
[Cohen et al., 1993] M. Cohen, H. Franco, N. Morgan, D. Rumelhart, and V. Abrash.
Context-dependent multiple distribution phonetic modeling. In C.L. Giles, Hanson SJ,
and J.D. Cowan, editors, Advances in Neural Information Processing Systems, volume 5.
Morgan Kaufmann, San Mateo, 1993.
[Hampshire and Waibel, 1990] J.B. Hampshire and A. Waibel. Connectionist architectures
for multi-speaker phoneme recognition. In D.S. Touretzky, editor, Advances in Neural
Information Processing Systems 2, San Mateo, CA, 1990. Morgan Kaufmann.
[Huang et al., 1991] X.D. Huang, K.F. Lee, and A. Waibel. Connectionist speaker normalization and its application to speech recognition. In Neural Networks for Signal
Processing, Proc. of 1991 IEEE Workshop, Princeton, New Jersey, October 1991.
[Konig and Morgan, 1992] Y. Konig and N. Morgan. GDNN: A gender-dependent neural
network for continuous speech recognition. In Proc. International Joint Conference on
Neural Networks, Baltimore, Maryland, June 1992.
[Morgan and Bourlard, 1992] N. Morgan and H. Bourlard. Factoring neural networks by
a statistical method. Neural Computation, 4:835-838, 1992.
[Murveit et al., 1990] H. Murveit, M. Weintraub, and M. Cohen. Training set issues in SRI's
DECIPHER speech recognition system. In Proc. Speech and Natural Language Workshop,
pages 337-340, June 1990.
[Renals et al., 1992] S. Renals, N. Morgan, M. Cohen, H. Franco, and H. Bourlard. Connectionist probability estimation in the DECIPHER speech recognition system. In Proceedings IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, San Francisco,
California, March 1992. IEEE.
Adversarial Symmetric Variational Autoencoder
Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li
and Lawrence Carin
Department of Electrical and Computer Engineering, Duke University
{yp42, ww109, r.henao, lc267, zg27,cl319, lcarin}@duke.edu
Abstract
A new form of variational autoencoder (VAE) is developed, in which the joint
distribution of data and codes is considered in two (symmetric) forms: (i) from
observed data fed through the encoder to yield codes, and (ii) from latent codes
drawn from a simple prior and propagated through the decoder to manifest data.
Lower bounds are learned for marginal log-likelihood fits observed data and latent
codes. When learning with the variational bound, one seeks to minimize the
symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii),
while simultaneously seeking to maximize the two marginal log-likelihoods. To
facilitate learning, a new form of adversarial training is developed. An extensive
set of experiments is performed, in which we demonstrate state-of-the-art data
reconstruction and generation on several image benchmark datasets.
1 Introduction
Recently there has been increasing interest in developing generative models of data, offering the
promise of learning based on the often vast quantity of unlabeled data. With such learning, one
typically seeks to build rich, hierarchical probabilistic models that are able to fit to the distribution of
complex real data, and are also capable of realistic data synthesis.
Generative models are often characterized by latent variables (codes), and the variability in the codes
encompasses the variation in the data [1, 2]. The generative adversarial network (GAN) [3] employs
a generative model in which the code is drawn from a simple distribution (e.g., isotropic Gaussian),
and then the code is fed through a sophisticated deep neural network (decoder) to manifest the data.
In the context of data synthesis, GANs have shown tremendous capabilities in generating realistic,
sharp images from models that learn to mimic the structure of real data [3, 4, 5, 6, 7, 8]. The quality
of GAN-generated images has been evaluated by somewhat ad hoc metrics like inception score [9].
However, the original GAN formulation does not allow inference of the underlying code, given
observed data. This makes it difficult to quantify the quality of the generative model, as it is not
possible to compute the quality of model fit to data. To provide a principled quantitative analysis of
model fit, not only should the generative model synthesize realistic-looking data, one also desires the
ability to infer the latent code given data (using an encoder). Recent GAN extensions [10, 11] have
sought to address this limitation by learning an inverse mapping (encoder) to project data into the
latent space, achieving encouraging results on semi-supervised learning. However, these methods still
fail to obtain faithful reproductions of the input data, partly due to model underfitting when learning
from a fully adversarial objective [10, 11].
Variational autoencoders (VAEs) are designed to learn both an encoder and decoder, leading to
excellent data reconstruction and the ability to quantify a bound on the log-likelihood fit of the
model to data [12, 13, 14, 15, 16, 17, 18, 19]. In addition, the inferred latent codes can be utilized
in downstream applications, including classification [20] and image captioning [21]. However, new
images synthesized by VAEs tend to be unspecific and/or blurry, with relatively low resolution. These
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
limitations of VAEs are becoming increasingly understood. Specifically, the traditional VAE seeks to
maximize a lower bound on the log-likelihood of the generative model, and therefore VAEs inherit
the limitations of maximum-likelihood (ML) learning [22]. Specifically, in ML-based learning one
optimizes the (one-way) Kullback-Leibler (KL) divergence between the distribution of the underlying
data and the distribution of the model; such learning does not penalize a model that is capable of
generating data that are different from that used for training.
Based on the above observations, it is desirable to build a generative-model learning framework with
which one can compute and assess the log-likelihood fit to real (observed) data, while also being
capable of generating synthetic samples of high realism. Since GANs and VAEs have complementary
strengths, their integration appears desirable, with this a principal contribution of this paper. While
integration seems natural, we make important changes to both the VAE and GAN setups, to leverage
the best of both. Specifically, we develop a new form of the variational lower bound, manifested
jointly for the expected log-likelihood of the observed data and for the latent codes. Optimizing
this variational bound involves maximizing the expected log-likelihood of the data and codes, while
simultaneously minimizing a symmetric KL divergence involving the joint distribution of data and
codes. To compute parts of this variational lower bound, a new form of adversarial learning is invoked.
The proposed framework is termed Adversarial Symmetric VAE (AS-VAE), since within the model
(i) the data and codes are treated in a symmetric manner, (ii) a symmetric form of KL divergence is
minimized when learning, and (iii) adversarial training is utilized. To illustrate the utility of AS-VAE,
we perform an extensive set of experiments, demonstrating state-of-the-art data reconstruction and
generation on several benchmarks datasets.
2 Background and Foundations
Consider an observed data sample x, modeled as being drawn from p_θ(x|z), with model parameters
θ and latent code z. The prior distribution on the code is denoted p(z), typically a distribution that is
easy to draw from, such as an isotropic Gaussian. The posterior distribution on the code given data x
is p_θ(z|x), and since this is typically intractable, it is approximated as q_φ(z|x), parameterized by
learned parameters φ. Conditional distributions q_φ(z|x) and p_θ(x|z) are typically designed such
that they are easily sampled and, for flexibility, modeled in terms of neural networks [12]. Since z
is a latent code for x, q_φ(z|x) is also termed a stochastic encoder, with p_θ(x|z) a corresponding
stochastic decoder. The observed data are assumed drawn from q(x), for which we do not have an
explicit form, but from which we have samples, i.e., the ensemble {x_i}_{i=1,N} used for learning.

Our goal is to learn the model p_θ(x) = ∫ p_θ(x|z)p(z)dz such that it synthesizes samples that are
well matched to those drawn from q(x). We simultaneously seek to learn a corresponding encoder
q_φ(z|x) that is both accurate and efficient to implement. Samples x are synthesized via x ~ p_θ(x|z)
with z ~ p(z); z ~ q_φ(z|x) provides an efficient coding of observed x, that may be used for other
purposes (e.g., classification or caption generation when x is an image [21]).
2.1 Traditional Variational Autoencoders and Their Limitations
Maximum likelihood (ML) learning of θ based on direct evaluation of p_θ(x) is typically intractable.
The VAE [12, 13] seeks to bound p_θ(x) by maximizing the variational expression L_VAE(θ, φ), with
respect to parameters {θ, φ}, where

L_VAE(θ, φ) = E_{q_φ(x,z)} [log (p_θ(x, z) / q_φ(z|x))] = E_{q(x)} [log p_θ(x) − KL(q_φ(z|x) ‖ p_θ(z|x))]   (1)
            = −KL(q_φ(x, z) ‖ p_θ(x, z)) + const ,   (2)

with expectations E_{q_φ(x,z)} and E_{q(x)} performed approximately via sampling. Specifically, to evaluate
E_{q_φ(x,z)} we draw a finite set of samples z_i ~ q_φ(z_i|x_i), with x_i ~ q(x) denoting the observed
data, and for E_{q(x)} we directly use observed data x_i ~ q(x). When learning {θ, φ}, the expectation
using samples from z_i ~ q_φ(z_i|x_i) is implemented via the "reparametrization trick" [12].

Maximizing L_VAE(θ, φ) wrt {θ, φ} provides a lower bound on (1/N) Σ_{i=1}^N log p_θ(x_i), hence the VAE
setup is an approximation to ML learning of θ. Learning θ based on (1/N) Σ_{i=1}^N log p_θ(x_i) is equivalent
to learning θ based on minimizing KL(q(x) ‖ p_θ(x)), again implemented in terms of the N observed
samples of q(x). As discussed in [22], such learning does not penalize θ severely for yielding x
of relatively high probability in p_θ(x) while being simultaneously of low probability in q(x). This
means that θ seeks to match p_θ(x) to the properties of the observed data samples, but p_θ(x) may
also have high probability of generating samples that do not look like data drawn from q(x). This is
a fundamental limitation of ML-based learning [22], inherited by the traditional VAE in (1).
One reason for the failing of ML-based learning of θ is that the cumulative posterior on latent codes
∫ p_θ(z|x)q(x)dx ≈ ∫ q_φ(z|x)q(x)dx = q_φ(z) is typically different from p(z), which implies that
x ~ p_θ(x|z), with z ~ p(z), may yield samples x that are different from those generated from q(x).
Hence, when learning {θ, φ} one may seek to match p_θ(x) to samples of q(x), as done in (1), while
simultaneously matching q_φ(z) to samples of p(z). The expression in (1) provides a variational
bound for matching p_θ(x) to samples of q(x), thus one may naively think to simultaneously set a
similar variational expression for q_φ(z), with these two variational expressions optimized jointly.
However, to compute this additional variational expression we require an analytic expression for
q_φ(x, z) = q_φ(z|x)q(x), which also means we need an analytic expression for q(x), which we do
not have.
Examining (2), we also note that L_VAE(θ, φ) approximates −KL(q_φ(x, z) ‖ p_θ(x, z)), which has
limitations aligned with those discussed above for ML-based learning of θ. Analogous to the above
discussion, we would also like to consider −KL(p_θ(x, z) ‖ q_φ(x, z)). So motivated, in Section 3 we
develop a new form of variational lower bound, applicable to maximizing (1/N) Σ_{i=1}^N log p_θ(x_i) and
(1/M) Σ_{j=1}^M log q_φ(z_j), where z_j ~ p(z) is the j-th of M samples from p(z). We demonstrate that this
new framework leverages both KL(p_θ(x, z) ‖ q_φ(x, z)) and KL(q_φ(x, z) ‖ p_θ(x, z)), by extending
ideas from adversarial networks.
2.2 Adversarial Learning
The original idea of GAN [3] was to build an effective generative model p_θ(x|z), with z ~ p(z), as
discussed above. There was no desire to simultaneously design an inference network q_φ(z|x). More
recently, authors [10, 11, 23] have devised adversarial networks that seek both p_θ(x|z) and q_φ(z|x).
As an important example, Adversarial Learned Inference (ALI) [10] considers the following objective
function:

min_{θ,φ} max_ψ  L_ALI(θ, φ, ψ) = E_{q_φ(x,z)} [log σ(f_ψ(x, z))] + E_{p_θ(x,z)} [log(1 − σ(f_ψ(x, z)))] ,   (3)

where the expectations are approximated with samples, as in (1). The function f_ψ(x, z), termed a
discriminator, is typically implemented using a neural network with parameters ψ [10, 11]. Note that
in (3) we need only sample from p_θ(x, z) = p_θ(x|z)p(z) and q_φ(x, z) = q_φ(z|x)q(x), avoiding
the need for an explicit form for q(x).
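Objective (3) is straightforward to estimate on minibatches: encoder pairs (x ~ q(x), z ~ q_φ(z|x)) play the "real" role and decoder pairs (z ~ p(z), x ~ p_θ(x|z)) the "fake" role. The sketch below uses hypothetical toy dimensions and a fixed bilinear score in place of the discriminator network f_ψ:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def ali_objective(f, x_enc, z_enc, x_dec, z_dec):
    """Monte-Carlo estimate of Eq. (3) for a given discriminator f."""
    real = np.log(sigmoid(f(x_enc, z_enc)))        # encoder joint samples
    fake = np.log(1.0 - sigmoid(f(x_dec, z_dec)))  # decoder joint samples
    return float(real.mean() + fake.mean())

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 2))
f = lambda x, z: np.sum((x @ A) * z, axis=1)  # hypothetical bilinear stand-in for f_psi

x_enc, z_enc = rng.normal(size=(8, 4)), rng.normal(size=(8, 2))  # x ~ q(x), z ~ q_phi(z|x)
x_dec, z_dec = rng.normal(size=(8, 4)), rng.normal(size=(8, 2))  # z ~ p(z), x ~ p_theta(x|z)
val = ali_objective(f, x_enc, z_enc, x_dec, z_dec)
```

The discriminator ascends this value while the generator pair (θ, φ) descends it.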
The framework in (3) can, in theory, match p_θ(x, z) and q_φ(x, z), by finding a Nash equilibrium
of their respective non-convex objectives [3, 9]. However, training of such adversarial networks
is typically based on stochastic gradient descent, which is designed to find a local mode of a cost
function, rather than locating an equilibrium [9]. This objective mismatch may lead to the well-known
instability issues associated with GAN training [9, 22].
To alleviate this problem, some researchers add a regularization term, such as reconstruction loss
[24, 25, 26] or mutual information [4], to the GAN objective, to restrict the space of suitable mapping
functions, thus avoiding some of the failure modes of GANs, i.e., mode collapsing. Below we
will formally match the joint distributions as in (3), and reconstruction-based regularization will be
manifested by generalizing the VAE setup via adversarial learning. Toward this goal we consider the
following lemma, which is analogous to Proposition 1 in [3, 23].
Lemma 1 Consider Random Variables (RVs) x and z with joint distributions p(x, z) and q(x, z).
The optimal discriminator D*(x, z) = σ(f*(x, z)) for the following objective

max_f  E_{p(x,z)} [log σ(f(x, z))] + E_{q(x,z)} [log(1 − σ(f(x, z)))] ,   (4)

is f*(x, z) = log p(x, z) − log q(x, z).
Under Lemma 1, we are able to estimate log q_φ(x, z) − log p_θ(x)p(z) and log p_θ(x, z) −
log q(x)q_φ(z) using the following corollary.

Corollary 1.1 For RVs x and z with encoder joint distribution q_φ(x, z) = q(x)q_φ(z|x) and
decoder joint distribution p_θ(x, z) = p(z)p_θ(x|z), consider the following objectives:

max_{ψ1} L_A1(ψ1) = E_{x~q(x), z~q_φ(z|x)} log[σ(f_ψ1(x, z))]
                  + E_{x~p_θ(x|z′), z′~p(z), z~p(z)} [log(1 − σ(f_ψ1(x, z)))] ,   (5)

max_{ψ2} L_A2(ψ2) = E_{z~p(z), x~p_θ(x|z)} log[σ(f_ψ2(x, z))]
                  + E_{z~q_φ(z|x′), x′~q(x), x~q(x)} [log(1 − σ(f_ψ2(x, z)))] .   (6)

If the parameters θ and φ are fixed, with f*_ψ1 the optimal discriminator for (5) and f*_ψ2 the optimal
discriminator for (6), then

f*_ψ1(x, z) = log q_φ(x, z) − log p_θ(x)p(z) ,   f*_ψ2(x, z) = log p_θ(x, z) − log q_φ(z)q(x) .   (7)

The proof is provided in Appendix A. We also assume in Corollary 1.1 that f_ψ1(x, z) and
f_ψ2(x, z) are sufficiently flexible such that there are parameters ψ*_1 and ψ*_2 capable of achieving
the equalities in (7). Toward that end, f_ψ1 and f_ψ2 are implemented as ψ1- and ψ2-parameterized
neural networks (details below), to encourage universal approximation [27].
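A quick numerical sanity check of Lemma 1 (a sketch with two hypothetical 1-D Gaussians standing in for the joints): integrating objective (4) on a grid confirms that f* = log p − log q scores at least as well as shifted alternatives.

```python
import numpy as np

xs = np.linspace(-8.0, 8.0, 2001)
dx = xs[1] - xs[0]
p = np.exp(-0.5 * (xs - 1.0) ** 2) / np.sqrt(2 * np.pi)  # stand-in for p(x, z)
q = np.exp(-0.5 * (xs + 1.0) ** 2) / np.sqrt(2 * np.pi)  # stand-in for q(x, z)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def objective(f):
    """Eq. (4) evaluated by numerical integration on the grid."""
    return float(np.sum((p * np.log(sigmoid(f)) + q * np.log(1.0 - sigmoid(f))) * dx))

f_star = np.log(p) - np.log(q)  # Lemma 1's claimed optimum (here simply 2 * xs)
best = objective(f_star)
worse = [objective(f_star + c) for c in (-0.5, 0.3, 1.0)]
```

Because the integrand is maximized pointwise at f = log(p/q), any constant shift strictly lowers the integrated objective.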
3 Adversarial Symmetric Variational Auto-Encoder (AS-VAE)
Consider the variational expressions

L_VAEx(θ, φ) = E_{q(x)} log p_θ(x) − KL(q_φ(x, z) ‖ p_θ(x, z))   (8)
L_VAEz(θ, φ) = E_{p(z)} log q_φ(z) − KL(p_θ(x, z) ‖ q_φ(x, z)) ,   (9)

where all expectations are again performed approximately using samples from q(x) and p(z). Recall
that E_{q(x)} log p_θ(x) = −KL(q(x) ‖ p_θ(x)) + const, and E_{p(z)} log q_φ(z) = −KL(p(z) ‖ q_φ(z)) +
const, thus (8) is maximized when q(x) = p_θ(x) and q_φ(x, z) = p_θ(x, z). Similarly, (9) is
maximized when p(z) = q_φ(z) and q_φ(x, z) = p_θ(x, z). Hence, (8) and (9) impose desired
constraints on both the marginal and joint distributions. Note that the log-likelihood terms in (8)
and (9) are analogous to the data-fit regularizers discussed above in the context of ALI, but here
implemented in a generalized form of the VAE. Direct evaluation of (8) and (9) is not possible, as it
requires an explicit form for q(x) to evaluate q_φ(x, z) = q_φ(z|x)q(x).

One may readily demonstrate that

L_VAEx(θ, φ) = E_{q_φ(x,z)} [log p_θ(x)p(z) − log q_φ(x, z) + log p_θ(x|z)]
             = E_{q_φ(x,z)} [log p_θ(x|z) − f*_ψ1(x, z)] .

A similar expression holds for L_VAEz(θ, φ), in terms of f*_ψ2(x, z). This naturally suggests the
cumulative variational expression

L_VAExz(θ, φ, ψ1, ψ2) = L_VAEx(θ, φ) + L_VAEz(θ, φ)
    = E_{q_φ(x,z)} [log p_θ(x|z) − f_ψ1(x, z)] + E_{p_θ(x,z)} [log q_φ(z|x) − f_ψ2(x, z)] ,   (10)

where ψ1 and ψ2 are updated using the adversarial objectives in (5) and (6), respectively.

Note that to evaluate (10) we must be able to sample from q_φ(x, z) = q(x)q_φ(z|x) and
p_θ(x, z) = p(z)p_θ(x|z), both of which are readily available, as discussed above. Further, we
require explicit expressions for q_φ(z|x) and p_θ(x|z), which we have. For (5) and (6) we similarly
must be able to sample from the distributions involved, and we must be able to evaluate f_ψ1(x, z)
and f_ψ2(x, z), each of which is implemented via a neural network. Note as well that the bound in
(1) for E_{q(x)} log p_θ(x) is in terms of the KL distance between conditional distributions q_φ(z|x) and
p_θ(z|x), while (8) utilizes the KL distance between joint distributions q_φ(x, z) and p_θ(x, z) (use
of joint distributions is related to ALI). By combining (8) and (9), the complete variational bound
L_VAExz employs the symmetric KL between these two joint distributions. By contrast, from (2),
the original variational lower bound only addresses a one-way KL distance between q_φ(x, z) and
p_θ(x, z). While [23] had a similar idea of employing adversarial methods in the context of variational
learning, it was only done within the context of the original form in (1), the limitations of which were
discussed in Section 2.1.
In the original VAE, in which (1) was optimized, the reparametrization trick [12] was invoked
wrt q_φ(z|x), with samples z_φ(x, ε) and ε ~ N(0, I), as the expectation was performed wrt this
distribution; this reparametrization is convenient for computing gradients wrt φ. In the AS-VAE
in (10), expectations are also needed wrt p_θ(x|z). Hence, to implement gradients wrt θ, we
also constitute a reparametrization of p_θ(x|z). Specifically, we consider samples x_θ(z, ξ) with
ξ ~ N(0, I). L_VAExz(θ, φ, ψ1, ψ2) in (10) is re-expressed as

L_VAExz(θ, φ, ψ1, ψ2) = E_{x~q(x), ε~N(0,I)} [log p_θ(x|z_φ(x, ε)) − f_ψ1(x, z_φ(x, ε))]
                      + E_{z~p(z), ξ~N(0,I)} [log q_φ(z|x_θ(z, ξ)) − f_ψ2(x_θ(z, ξ), z)] .   (11)

The expectations in (11) are approximated via samples drawn from q(x) and p(z), as well as samples
of ε and ξ. x_θ(z, ξ) and z_φ(x, ε) can be implemented with a Gaussian assumption [12] or via
density transformation [14, 16], detailed when presenting experiments in Section 5.

The complete objective of the proposed Adversarial Symmetric VAE (AS-VAE) requires the cumulative variational expression in (11), which we maximize wrt ψ1 and ψ2 as in (5) and (6), using the results in (7).
Hence, we write

min_{θ,φ} max_{ψ1,ψ2}  −L_VAExz(θ, φ, ψ1, ψ2) .   (12)

The following proposition characterizes the solutions of (12) in terms of the joint distributions of x
and z.

Proposition 1 The equilibrium for the min-max objective in (12) is achieved by specification
{θ*, φ*, ψ*_1, ψ*_2} if and only if (7) holds, and p_θ*(x, z) = q_φ*(x, z).
The proof is provided in Appendix A. This theoretical result implies that (i) θ* is an estimator that
yields good reconstruction, and (ii) φ* matches the aggregated posterior q_φ(z) to the prior distribution
p(z).
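Both reparametrizations can be sketched with Gaussian conditionals (as used later for CIFAR-10 and ImageNet); the linear means and fixed log-variances below are hypothetical stand-ins for the encoder and decoder networks:

```python
import numpy as np

rng = np.random.default_rng(4)
D_X, D_Z = 6, 2

# Hypothetical linear-Gaussian parameters for phi (encoder) and theta (decoder).
W_mu_z = rng.normal(size=(D_X, D_Z))
W_mu_x = rng.normal(size=(D_Z, D_X))
log_sig_z = np.full(D_Z, -1.0)
log_sig_x = np.full(D_X, -1.0)

def z_phi(x, eps):
    """Reparametrized draw z ~ q_phi(z|x), deterministic given (x, eps)."""
    return x @ W_mu_z + np.exp(log_sig_z) * eps

def x_theta(z, xi):
    """Reparametrized draw x ~ p_theta(x|z), deterministic given (z, xi)."""
    return z @ W_mu_x + np.exp(log_sig_x) * xi

x = rng.normal(size=(5, D_X))                  # minibatch of observed data
z = z_phi(x, rng.normal(size=(5, D_Z)))        # for the q_phi(x, z) expectation in (11)
x_gen = x_theta(z, rng.normal(size=(5, D_X)))  # for the p_theta(x, z) expectation in (11)
```

Because both draws are deterministic functions of their noise, gradients flow to φ and θ through z and x_gen respectively.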
4 Related Work
VAEs [12, 13] represent one of the most successful deep generative models developed recently.
Aided by the reparameterization trick, VAEs can be trained with stochastic gradient descent. The
original VAEs implement a Gaussian assumption for the encoder. More recently, there has been a
desire to remove this Gaussian assumption. Normalizing flow [14] employs a sequence of invertible
transformation to make the distribution of the latent codes arbitrarily flexible. This work was followed
by inverse auto-regressive flow [16], which uses recurrent neural networks to make the latent codes
more expressive. More recently, SteinVAE [28] applies Stein variational gradient descent [29] to
infer the distribution of latent codes, discarding the assumption of a parametric form of posterior
distribution for the latent code. However, these methods are not able to address the fundamental
limitation of ML-based models, as they are all based on the variational formulation in (1).
GANs [3] constitute another recent framework for learning a generative model. Recent extensions of
GAN have focused on boosting the performance of image generation by improving the generator [5],
discriminator [30] or the training algorithm [9, 22, 31]. More recently, some researchers [10, 11, 33]
have employed a bidirectional network structure within the adversarial learning framework, which in
theory guarantees the matching of joint distributions over two domains. However, non-identifiability
issues are raised in [32]. For example, they have difficulties in providing good reconstruction in latent
variable models, or discovering the correct pairing relationship in domain transformation tasks. It was
shown that these problems are alleviated in DiscoGAN [24], CycleGAN [26] and ALICE [32] via
additional ℓ1, ℓ2 or adversarial losses. However, these methods lack explicit probabilistic modeling
of observations, thus could not directly evaluate the likelihood of given data samples.
A key component of the proposed framework concerns integrating a new VAE formulation with
adversarial learning. There are several recent approaches that have tried to combining VAE and
GAN [34, 35], Adversarial Variational Bayes (AVB) [23] is the one most closely related to our work.
AVB employs adversarial learning to estimate the posterior of the latent codes, which makes the
encoder arbitrarily flexible. However, AVB seeks to optimize the original VAE formulation in (1),
and hence it inherits the limitations of ML-based learning of θ. Unlike AVB, the proposed use of
adversarial learning is based on a new VAE setup that seeks to minimize the symmetric KL distance
between p_θ(x, z) and q_φ(x, z), while simultaneously seeking to maximize the marginal expected
likelihoods E_{q(x)}[log p_θ(x)] and E_{p(z)}[log q_φ(z)].
5 Experiments
We evaluate our model on three datasets: MNIST, CIFAR-10 and ImageNet. To balance performance
and computational cost, p_θ(x|z) and q_φ(z|x) are approximated with a normalizing flow [14] of
length 80 for the MNIST dataset, and a Gaussian approximation for CIFAR-10 and ImageNet data.
All network architectures are provided in the Appendix B. All parameters were initialized with Xavier
[36], and optimized via Adam [37] with learning rate 0.0001. We do not perform any dataset-specific
tuning or regularization other than dropout [38]. Early stopping is employed based on average
reconstruction loss of x and z on validation sets.
We show three types of results, using part of or all of our model to illustrate each component. i)
AS-VAE-r: This model is trained with the first half of the objective in (11) to minimize L_VAEx(θ, φ)
in (8); it is an ML-based method which focuses on reconstruction. ii) AS-VAE-g: This model is trained
with the second half of the objective in (11) to minimize L_VAEz(θ, φ) in (9); it can be considered as
maximizing the likelihood of q_φ(z), and is designed for generation. iii) AS-VAE: This is our proposed
model, developed in Section 3.
5.1 Evaluation
We evaluate our model on both reconstruction and generation. The performance of the former is
evaluated using negative log-likelihood (NLL) estimated via the variational lower bound defined
in (1). Images are modeled as continuous. To do this, we add [0, 1]-uniform noise to natural images
(one color channel at a time), then divide by 256 to map 8-bit images (256 levels) to the unit
interval. This technique is widely used in applications involving natural images [12, 14, 16, 39, 40],
since it can be proved that in terms of log-likelihood, modeling in the discrete space is equivalent
to modeling in the continuous space (with added noise) [39, 41]. During testing, the likelihood is
computed as p(x = i|z) = p_θ(x ∈ [i/256, (i + 1)/256]|z) where i = 0, . . . , 255. This is done to
guarantee a fair comparison with prior work (that assumed quantization). For the MNIST dataset, we
treat the [0, 1]-mapped continuous input as the probability of a binary pixel value (on or off) [12]. The
inception score (IS), defined as exp(E_{q(x)} KL(p(y|x) || p(y))), is employed to quantitatively evaluate
the quality of generated natural images, where p(y) is the empirical distribution of labels (we do not
leverage any label information during training) and p(y|x) is the output of the Inception model [42]
on each generated image.
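The inception score defined above can be computed directly from classifier outputs; a minimal sketch with toy prediction arrays (not actual Inception-model outputs) is:

```python
import numpy as np

def inception_score(pyx):
    """IS = exp(E_x KL(p(y|x) || p(y))) for rows pyx[i] = p(y|x_i)."""
    py = pyx.mean(axis=0)                                   # marginal p(y)
    kl = np.sum(pyx * (np.log(pyx) - np.log(py)), axis=1)   # per-image KL
    return float(np.exp(kl.mean()))

# Confident and diverse predictions give a high score ...
sharp = np.eye(4) * 0.96 + 0.01   # rows like [0.97, 0.01, 0.01, 0.01]
# ... while uninformative predictions give the minimum score of 1.
flat = np.full((4, 4), 0.25)
print(inception_score(sharp), inception_score(flat))
```

A high score thus requires each image to be classified confidently (low-entropy p(y|x)) while the set of images covers many classes (high-entropy p(y)).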
To the authors' knowledge, we are the first to report both inception score (IS) and NLL for natural
images from a single model. For comparison, we implemented DCGAN [5] and PixelCNN++ [40] as
baselines. The implementation of DCGAN is based on a similar network architecture as our model.
Note that for NLL a lower value is better, whereas for IS a higher value is better.
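The dequantization scheme described in this section (uniform noise plus division by 256) can be sketched as follows; the array sizes are toy assumptions, not the actual evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)

def dequantize(x_uint8, rng):
    """Map 8-bit intensities to [0, 1) by adding uniform noise and
    dividing by 256, as in the continuous NLL evaluation above."""
    return (x_uint8.astype(np.float64) + rng.random(x_uint8.shape)) / 256.0

x = rng.integers(0, 256, size=(2, 4, 4), dtype=np.uint8)
u = dequantize(x, rng)
# Each dequantized value lands in the bin [i/256, (i+1)/256) of its
# original intensity i, so the discrete likelihood can be recovered
# by integrating the continuous density over that bin.
print(u.min() >= 0.0, u.max() < 1.0)
```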
5.2 MNIST
We first evaluate our model on the MNIST dataset. The log-likelihood results are summarized in
Table 1. Our AS-VAE achieves a negative log-likelihood of 82.51 nats, outperforming normalizing
flow (85.1 nats) with a similar architecture. The performance of AS-VAE-r (81.14 nats) is competitive
with the state-of-the-art (79.2 nats). The generated samples are shown in Figure 1. AS-VAE-g and
AS-VAE both generate good samples while the results of AS-VAE-r are slightly more blurry, partly
due to the fact that AS-VAE-r is an ML-based model.
5.3 CIFAR
Next we evaluate our models on the CIFAR-10 dataset. The quantitative results are listed in Table 2.
AS-VAE-r and AS-VAE-g achieve encouraging results on reconstruction and generation, respectively,
while our AS-VAE model (leveraging the full objective) achieves a good balance between these
two tasks, which demonstrates the benefit of optimizing a symmetric objective. Compared with
Table 1: NLL on MNIST.

Method            NLL (nats)
NF (k=80) [14]    85.1
IAF [16]          80.9
AVB [23]          79.5
PixelRNN [39]     79.2
AS-VAE-r          81.14
AS-VAE-g          146.32
AS-VAE            82.51
state-of-the-art ML-based models [39, 40], we achieve competitive results on reconstruction but
provide a much better performance on generation, also outperforming other adversarially-trained
models. Note that our negative ELBO (evidence lower bound) is an upper bound of NLL as reported
in [39, 40]. We also achieve a smaller root-mean-square-error (RMSE). Generated samples are shown
in Figure 2. Additional results are provided in the Appendix C.
ALI [10], which also seeks to match the joint encoder and decoder distribution, is also implemented
as a baseline. Since the decoder in ALI is a deterministic network, the NLL of ALI is impractical to
compute. Alternatively, we report the RMSE of reconstruction as a reference. Figure 3 qualitatively
compares the reconstruction performance of our model, ALI and VAE. As can be seen, the
reconstruction of ALI is related to but not a faithful reproduction of the input data, which evidences
the limitation in reconstruction ability of adversarial learning. This is also consistent in terms of
RMSE.

Table 2: Quantitative Results on CIFAR-10; † 2.96 is based on our implementation and 2.92 is reported in [40].

Method                    NLL (bits)     RMSE     IS
WGAN [43]                 -              -        3.82
MIX+WassersteinGAN [43]   -              -        4.05
DCGAN [5]                 -              -        4.89
ALI                       -              14.53    4.79
PixelRNN [39]             3.06           -        -
PixelCNN++ [40]           2.96 (2.92)†   3.289    5.51
AS-VAE-r                  3.09           3.17     2.91
AS-VAE-g                  93.12          13.12    6.89
AS-VAE                    3.32           3.36     6.34
5.4 ImageNet
ImageNet 2012 is used to evaluate the scalability of our model to large datasets. The images are
resized to 64×64. The quantitative results are shown in Table 3. Our model significantly improves the
performance on generation compared with DCGAN and PixelCNN++, while achieving competitive
results on reconstruction compared with PixelRNN and PixelCNN++.
Note that the PixelCNN++ takes more than two weeks (44 hours per epoch) for training and 52.0
seconds/image for generating samples, while our model only requires less than 2 days (4 hours per
epoch) for training and 0.01 seconds/image for generating on a single TITAN X GPU. As a reference,
the true validation set of ImageNet 2012 achieves 53.24% accuracy. This is because ImageNet has
much greater variety of images than CIFAR-10. Figure 4 shows generated samples from models
trained on ImageNet, compared with DCGAN and PixelCNN++. Our model is able to produce
sharp images without label information while capturing more local spatial dependencies than
PixelCNN++, and without suffering from mode collapse as DCGAN. Additional results are provided
in the Appendix C.

Table 3: Quantitative Results on ImageNet.

Method            NLL     IS
DCGAN [5]         -       5.965
PixelRNN [39]     3.63    -
PixelCNN++ [40]   3.27    7.65
AS-VAE            3.71    11.14
6 Conclusions
We presented Adversarial Symmetrical Variational Autoencoders, a novel deep generative model for
unsupervised learning. The learning objective is to minimize a symmetric KL divergence between
the joint distributions of data and latent codes from the encoder and decoder, while simultaneously maximizing the expected marginal likelihoods of data and codes. An extensive set of results demonstrated
excellent performance on both reconstruction and generation, while scaling to large datasets. A
possible direction for future work is to apply AS-VAE to semi-supervised learning tasks.
Acknowledgements
This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.
Figure 1: Generated samples trained on MNIST. (Left) AS-VAE-r; (Middle) AS-VAE-g; (Right) AS-VAE.
Figure 2: Samples generated by AS-VAE when trained on CIFAR-10.
Figure 3: Comparison of reconstruction with ALI [10]. In each block: column one for ground-truth, column two for ALI and column three for AS-VAE.
Figure 4: Generated samples trained on ImageNet. (Top) AS-VAE; (Middle) DCGAN [5]; (Bottom) PixelCNN++ [40].
References
[1] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image
model. Artificial Intelligence and Statistics (AISTATS), 2016.
[2] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop,
2015.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable
representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
[5] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[6] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text
to image synthesis. In ICML, 2016.
[7] Y. Zhang, Z. Gan, K. Fan, Z. Chen, R. Henao, D. Shen, and L. Carin. Adversarial feature
matching for text generation. In ICML, 2017.
[8] Y. Zhang, Z. Gan, and L. Carin. Generating text with adversarial training. In NIPS workshop,
2016.
[9] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved
techniques for training gans. In NIPS, 2016.
[10] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville.
Adversarially learned inference. In ICLR, 2017.
[11] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2017.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[13] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate
inference in deep generative models. In ICML, 2014.
[14] D.J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015.
[15] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
[16] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improving
variational inference with inverse autoregressive flow. In NIPS, 2016.
[17] Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin. Deconvolutional paragraph
representation learning. In NIPS, 2017.
[18] L. Chen, S. Dai, Y. Pu, C. Li, Q. Su, and L. Carin. Symmetric variational autoencoder
and connections to adversarial learning. In arXiv, 2017.
[19] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for
text sequence matching. In arXiv, 2017.
[20] D.P. Kingma, D.J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep
generative models. In NIPS, 2014.
[21] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for
deep learning of images, labels and captions. In NIPS, 2016.
[22] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial
networks. In ICLR, 2017.
[23] L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational bayes: Unifying variational
autoencoders and generative adversarial networks. In arXiv, 2016.
[24] T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. Learning to discover cross-domain relations with
generative adversarial networks. In arXiv, 2017.
[25] C. Li, K. Xu, J. Zhu, and B. Zhang. Triple generative adversarial nets. In arXiv, 2017.
[26] JY Zhu, T. Park, P. Isola, and A. Efros. Unpaired image-to-image translation using cycleconsistent adversarial networks. In arXiv, 2017.
[27] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal
approximators. Neural networks, 1989.
[28] Y. Pu, Z. Gan, R. Henao, C. Li, S. Han, and L. Carin. VAE learning via Stein variational gradient
descent. In NIPS, 2017.
[29] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose bayesian inference
algorithm. In NIPS, 2016.
[30] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. In ICLR,
2017.
[31] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In arXiv, 2017.
[32] C. Li, H. Liu, C. Chen, Y. Pu, L. Chen, R. Henao, and L. Carin. Alice: Towards understanding
adversarial learning for joint distribution matching. In NIPS, 2017.
[33] Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and Lawrence Carin. Triangle
generative adversarial networks. In NIPS, 2017.
[34] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. In
arXiv, 2015.
[35] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels
using a learned similarity metric. In ICML, 2016.
[36] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural
networks. In AISTATS, 2010.
[37] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[38] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple
way to prevent neural networks from overfitting. JMLR, 2014.
[39] A. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural network. In ICML,
2016.
[40] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. Pixelcnn++: Improving the pixelcnn
with discretized logistic mixture likelihood and other modifications. In ICLR, 2017.
[41] L. Theis, A. Oord, and M. Bethge. A note on the evaluation of generative models. In ICLR,
2016.
[42] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and
A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[43] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative
adversarial nets. In arXiv, 2017.
Unified representation of tractography and
diffusion-weighted MRI data using sparse
multidimensional arrays
Cesar F. Caiafa*
Department of Psychological and Brain Sciences
Indiana University (47405) Bloomington, IN, USA
IAR - CCT La Plata, CONICET / CIC-PBA
(1894) V. Elisa, ARGENTINA
[email protected]
Olaf Sporns
Department of Psychological and Brain Sciences
Indiana University (47405) Bloomington, IN, USA
[email protected]
Andrew J. Saykin
Department of Radiology - Indiana University
School of Medicine. (46202) Indianapolis, IN, USA
[email protected]
Franco Pestilli†
Department of Psychological and Brain Sciences
Indiana University (47405) Bloomington, IN, USA
[email protected]
Abstract
Recently, linear formulations and convex optimization methods have been
proposed to predict diffusion-weighted Magnetic Resonance Imaging (dMRI)
data given estimates of brain connections generated using tractography
algorithms. The size of the linear models comprising such methods grows
with both dMRI data and connectome resolution, and can become very
large when applied to modern data. In this paper, we introduce a method
to encode dMRI signals and large connectomes, i.e., those that range from
hundreds of thousands to millions of fascicles (bundles of neuronal axons), by
using a sparse tensor decomposition. We show that this tensor decomposition
accurately approximates the Linear Fascicle Evaluation (LiFE) model, one
of the recently developed linear models. We provide a theoretical analysis of
the accuracy of the sparse decomposed model, LiFESD , and demonstrate that
it can reduce the size of the model significantly. Also, we develop algorithms
to implement the optimization solver using the tensor representation in an
efficient way.
* http://web.fi.uba.ar/~ccaiafa/Cesar.html
† http://www.brain-life.org/plab/
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1 Introduction
Multidimensional arrays, hereafter referred to as tensors, are useful mathematical objects to
model a variety of problems in machine learning [2, 47] and neuroscience [27, 8, 50, 48, 3, 26,
13]. Tensor decomposition algorithms have a long history of applications in signal processing,
however, only recently their relation to sparse representations has started to be explored
[35, 11]. In this work, we present a sparse tensor decomposition model and its associated
algorithm applied to diffusion-weighted Magnetic Resonance Imaging (dMRI).
Diffusion-weighted MRI allows us to estimate structural brain connections in-vivo by measuring the diffusion of water molecules at different spatial directions. Brain connections
are comprised of a set of fascicles describing the putative position and orientation of the
neuronal axons bundles wrapped by myelin sheaths traveling within the living human brain
[25]. The process by which fascicles (the connectome) are identified from dMRI measurements
is called tractography. Tractography and dMRI are the primary methods for mapping structural brain networks and white matter tissue properties in living human brains [6, 46, 34].
Despite current limits and criticisms, through these methods we have learned much about
the macrostructural organization of the human brain, such that network neuroscience has
become one of the fastest-growing scientific fields [38, 43, 44].
In recent years, a large variety of tractography algorithms have been proposed and tested
on modern datasets such as the Human Connectome Project (HCP) [45]. However, it
has been established that the estimated anatomical properties of the fascicles depend on
data type, tractography algorithm and parameters settings [32, 39, 7]. Such variability in
estimates makes it difficult to trust a single algorithm for all applications, and calls for
routine statistical evaluation methods of brain connectomes [32]. For this reason, linear
methods based on convex optimization have been proposed for connectome evaluation [32, 39]
and simultaneous connectome and white matter microstructure estimation [15]. However,
these methods can require substantial computational resources (memory and computation
load) making it prohibitive to apply them to the highest resolution datasets.
In this article, we propose a method to encode brain connectomes in multidimensional arrays
and perform statistical evaluation efficiently on high-resolution datasets. The article is
organized as follows: in section 2, the connectome encoding method is introduced; in section
2.1, a linear formulation of the connectome evaluation problem is described; in section 3, the
approximated tensor decomposed model is introduced; in section 3.3, we derive a theoretical
bound of the approximation error and compute the theoretical compression factor obtained
with the tensor decomposition; in section 4 we develop algorithms to make the operations
needed for solving the connectome evaluation optimization problem; in section 5 we present
experimental results using high resolution in vivo datasets; finally, in section 6, the main
conclusions of our work are outlined.
2 Encoding brain connectomes into multidimensional array structures.
We propose a framework to encode brain connectome data (both dMRI and white matter
fascicles) into tensors [12, 11, 23] to allow fast and efficient mathematical operations on the
structure of the connectome. Here, we introduce the tensor encoding framework and show
how it can be used to implement recent methods for statistical evaluation of tractography
[32]. More specifically, we demonstrate that the framework can be used to approximate the
Linear Fascicle Evaluation model [32] with high accuracy while reducing the size of the model
substantially (with measured compression factors up to 40x). Hereafter, we refer to the new
tensor encoding method as ENCODE [10]. ENCODE maps fascicles from their natural brain
space (Fig. 1(a)) into a three dimensional sparse tensor ? (Fig. 1(b)). The first dimension
of ? (1st mode) encodes each individual white matter fascicle?s orientation at each position
along their path through the brain. Individual segments (nodes) in a fascicle are coded as
non-zero entries in the sparse array (dark-blue cubes in Fig. 1(b)). The second dimension
of ? (2nd mode) encodes each fascicle?s spatial position within dMRI data volume (voxels).
Slices in this second dimension represent single voxels (cyan lateral slice in Fig. 1(b)). The
2
third dimension (3rd mode) encodes the indices of each fascicle within the connectome. Full
fascicles are encoded as ? frontal slices (c.f., yellow and blue in Fig. 1(b)).
Figure 1: The ENCODE method: mapping structural connectomes from natural brain space to
tensor space. (a) Two example white matter fascicles (f1 and f2) passing through three voxels (v1,
v2 and v3). (b) Encoding of the two fascicles in a three-dimensional tensor. The non-zero entries in
Φ indicate fascicle's orientation (1st mode), position (voxel, 2nd mode) and identity (3rd mode).
Below we demonstrate how to use ENCODE to integrate each connectome fascicle's structure
and the measured dMRI signal into a single tensor decomposition model. We then show how to
use this decomposition model to implement very efficiently a recent model for tractography
evaluation, the linear fascicle evaluation method, also referred to as LiFE [32]. Before
introducing the tensor decomposition method, we briefly describe the LiFE model, as this is
needed to explain the model decomposition using the ENCODE method. We then calculate
the theoretical bounds to accuracy and compression factor that can be achieved using
ENCODE and tensor decomposition. Finally, we report the results of experiments on real
data and validate the theoretical calculations.
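Before describing the model, a toy sketch of the ENCODE mapping of Fig. 1 (fascicle nodes mapped to non-zero entries of a sparse 3-way tensor) may help fix ideas. The fascicle coordinates below are made up for illustration:

```python
import numpy as np

# Toy connectome: each fascicle is a list of (atom, voxel) node pairs,
# where atom indexes the discretized orientation (1st mode of Phi).
fascicles = [
    [(0, 0), (2, 1), (2, 2)],   # f1
    [(1, 0), (1, 1), (3, 2)],   # f2
]

# ENCODE as coordinate lists of a sparse 3-way tensor Phi:
# mode 1 = orientation atom, mode 2 = voxel, mode 3 = fascicle index.
coords = np.array([(a, v, f)
                   for f, nodes in enumerate(fascicles)
                   for a, v in nodes])
values = np.ones(len(coords))

# Dense materialization only for illustration; real connectomes are kept
# in sparse (coordinate) form, since Phi is overwhelmingly empty.
Na, Nv, Nf = coords.max(axis=0) + 1
Phi = np.zeros((Na, Nv, Nf))
Phi[tuple(coords.T)] = values
print(Phi.shape, int(Phi.sum()))
```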
2.1 Statistical evaluation for brain connectomes by convex optimization.
The Linear Fascicle Evaluation (LiFE) method was introduced to compute the statistical
error of the fascicles comprising a structural brain connectome in predicting the measured
diffusion signal [32]. The fundamental idea behind LiFE is that a connectome should contain
fascicles whose trajectories represent the measured diffusion signal well. LiFE implements
a method for connectome evaluation that can be used, among other things, to eliminate
tracked fascicles that do not predict well the diffusion signal. LiFE takes as input the set of
fascicles generated by using tractography methods (the candidate connectome) and returns
as output the subset of fascicles that best predict the measured dMRI signal (the optimized
connectome). Fascicles are scored with respect to how well their trajectories represent the
measured diffusion signal in the voxels along the their path. To do so, weights are assigned
to each fascicle using convex optimization. Fascicles assigned a weight of zero are removed
from the connectome, as their contribution to predicting the diffusion signal is null. The
following linear system describes the equation of LiFE (see Fig. 2(a)):
y ≈ Mw,    (2.1)

where y ∈ R^{N_θ N_v} is a vector containing the demeaned signal y_i = S(θ_{n_i}, v_i) measured
at all white-matter voxels v_i ∈ V = {1, 2, . . . , N_v} and across all diffusion directions θ_n ∈
Θ = {θ_1, θ_2, . . . , θ_{N_θ}} ⊂ R^3, and w ∈ R^{N_f} contains the weights for each fascicle in the
connectome.
Matrix M ∈ R^{N_θ N_v × N_f} contains, at column f, the predicted demeaned signal contributed
by fascicle f at all voxels V and across all directions Θ:

M(i, f) = S_0(v_i) O_f(θ_{n_i}, v_f).    (2.2)
S_0(v) is defined as the non-diffusion-weighted signal and O_f(θ, v_f) is the orientation
distribution function [32] of fascicle f at diffusion direction θ, i.e.

O_f(θ, v_f) = e^{−b(θ^T v_f)^2} − (1/N_θ) Σ_{θ_n ∈ Θ} e^{−b(θ_n^T v_f)^2},    (2.3)
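A direct transcription of the demeaned stick-model prediction in eq. (2.3) can be sketched as follows; the gradient directions are toy values and the b-value is set to 1 for illustration:

```python
import numpy as np

def odf(theta, v_f, thetas, b=1.0):
    """Demeaned stick-model prediction O_f(theta, v_f) of eq. (2.3).
    theta: (3,) gradient direction; v_f: (3,) fascicle orientation;
    thetas: (N_theta, 3) full set of gradient directions."""
    signal = np.exp(-b * (theta @ v_f) ** 2)
    mean = np.mean(np.exp(-b * (thetas @ v_f) ** 2))
    return signal - mean

# Three orthogonal toy gradient directions and a fascicle along x.
thetas = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
v = np.array([1.0, 0, 0])
vals = [odf(t, v, thetas) for t in thetas]
print(vals)  # demeaned over the direction set, so the values sum to ~0
```

Note the predicted signal is lowest along the fascicle orientation, reflecting the stronger diffusion attenuation parallel to the fiber.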
Figure 2: The Linear Fascicle Evaluation (LiFE) model. (a) The predicted signal y ∈ R^{N_θ N_v} in
all voxels and gradient directions is obtained by multiplying matrix M ∈ R^{N_θ N_v × N_f} by the vector
of weights w ∈ R^{N_f} (see equation 2.1). (b) A voxel containing two fascicles, f1 and f2. (c) The
predicted diffusion signal y_v ∈ R^{N_θ} at voxel v is approximated by a nonnegative weighted linear
combination of the predicted signals for the fascicles in the voxel.
where the simple "stick" diffusion tensor model [31] was used and vector v_f ∈ R^3 is defined
as the spatial orientation of the fascicle in that voxel.
Whereas vector y and matrix M in equation (2.1) are fully determined by the dMRI
measurements and the output of a tractography algorithm, respectively, the vector of weights
w needs to be estimated by solving a Non-Negative Least Squares (NNLS) optimization
problem, which is defined as follows:

min_w (1/2) ‖y − Mw‖^2 subject to w_f ≥ 0, ∀f.    (2.4)
As a result, a sparse non-negative vector of weights w is obtained. Whereas nonzero weights
correspond to fascicles that contribute to predict the measured dMRI signal, fascicles with
zero weight make no contribution to predicting the measurements and can be eliminated.
In this way, LiFE identifies the fascicles supported by the data in a candidate connectome
providing a principled approach to evaluate connectomes in terms of prediction error as well
as the number of non-zero weighted fascicles.
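The NNLS problem in eq. (2.4) can be solved with any off-the-shelf nonnegative least-squares routine. A toy sketch using a random synthetic M and a sparse ground-truth w (not real dMRI data) is:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy LiFE-style system: 30 measurements, 8 candidate fascicles,
# only 3 of which actually contribute to the signal.
M = rng.random((30, 8))
w_true = np.zeros(8)
w_true[[1, 4, 6]] = [0.8, 0.3, 0.5]
y = M @ w_true

w_hat, residual = nnls(M, y)   # solves min ||y - Mw|| s.t. w >= 0
print(np.round(w_hat, 3))      # near-zero weights flag prunable fascicles
```

Fascicles whose estimated weight is zero make no contribution to the prediction and would be removed from the optimized connectome.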
A noticeable property of the LiFE method is that the size of matrix M in equation (2.1)
can require tens of gigabytes for full-brain connectomes, even when using optimized sparse
matrix formats [19]. Below we show how to use ENCODE to implement a sparse tensor
decomposition [9, 11] of matrix M. This decomposition allows accurate approximation of
the original LiFE model with dramatic reduction in memory requirements.
3 Theoretical results: Tensor decomposition and approximation of the linear model for tractography evaluation.
We describe the theoretical approach to factorizing the LiFE model, eq. (2.1). We note
that matrix M ∈ R^{N_θ N_v × N_f} (Fig. 2(a)) can be rewritten as a tensor (3D-array) M ∈
R^{N_θ × N_v × N_f} by decoupling the gradient direction and voxel indices into separate indices, i.e.
M(n_i, v_i, f) = M(i, f), where n_i ∈ {1, 2, . . . , N_θ}, v_i ∈ {1, 2, . . . , N_v} and f ∈ {1, 2, . . . , N_f}.
Thus, equation (2.1) can be rewritten in tensor form as follows:
Y ≈ M ×_3 w^T,    (3.1)

where Y ∈ R^{N_θ × N_v} is obtained by converting vector y ∈ R^{N_θ N_v} into a matrix (matricization)
and ×_n is the tensor-by-matrix product in mode-n [23]; more specifically, the mode-3
product in the above equation is defined as follows: Y(n, v) = Σ_{f=1}^{N_f} M(n, v, f) w_f. Below,
we show how to approximate the tensor model in equation (3.1) using a sparse Tucker
decomposition [9] by first focusing on the dMRI signal in individual voxels and then across
voxels.
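The mode-3 product defining Y(n, v) above is a plain contraction over the fascicle index, so it can be checked numerically that the tensorized model agrees with the original matrix form. A small numpy sketch with arbitrary toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_theta, n_v, n_f = 4, 6, 7                      # toy N_theta, N_v, N_f

Mten = rng.standard_normal((n_theta, n_v, n_f))  # tensor form of the model
w = rng.standard_normal(n_f)

# Mode-3 product: Y(n, v) = sum_f M(n, v, f) * w_f
Y = np.einsum("nvf,f->nv", Mten, w)

# Equivalent matrix form: unfold to (N_theta * N_v) x N_f and multiply.
Mmat = Mten.reshape(n_theta * n_v, n_f)
assert np.allclose(Y.reshape(-1), Mmat @ w)
```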
Figure 3: The LiFESD model: (a) Each block Mv of matrix M (a lateral slice in tensor M, with
empty entries denoting zero values) is factorized by using a dictionary of diffusion signal predictions
D and a sparse matrix of coefficients Φv. (b) The LiFESD model is written as a Tucker
decomposition model with a sparse core tensor Φ and factors D (mode-1) and wᵀ (mode-3).
(c) The maximum distance between a fascicle orientation vector v and its approximation v_a is
determined by the discretization of the azimuth (Δφ) and elevation (Δθ) spherical coordinates.
More specifically, for Δφ = Δθ = π/L, the maximum discretization error is ‖Δv‖ ≤ √2·π/(2L).
3.1 Approximation of the linear model within individual brain voxels.

We focus on writing the linear formulation of the diffusion prediction model (Fig. 2(b)-(c))
by restricting equation (3.1) to individual voxels, v:

    y_v ≈ M_v w,     (3.2)

where vector y_v = Y(:, v) ∈ ℝ^(Nθ) and matrix M_v = M(:, v, :) ∈ ℝ^(Nθ × Nf) correspond to a
column in Y and a lateral slice in tensor M, respectively. We propose to factorize matrix
M_v as follows:

    M_v ≈ M̂_v = DΦ_v,     (3.3)

where matrix D ∈ ℝ^(Nθ × Na) is a dictionary of diffusion predictions whose columns (atoms)
correspond to precomputed fascicle orientations, and Φ_v ∈ ℝ^(Na × Nf) is a sparse matrix
whose non-zero entries, Φ_v(a, f), indicate the orientation of fascicle f in voxel v, which
is approximated by atom a (see Fig. 3(a) for an example of a voxel v as shown in Fig.
2(b)-(c)). For computing the diffusion predictions, we use a discrete grid on the sphere
by uniformly sampling the spherical coordinates using L points in azimuth and elevation
coordinates (Fig. 2(c)).
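A minimal sketch of the per-voxel factorization in equation (3.3): each fascicle's column of M_v is replaced by one dictionary atom, encoded as a single non-zero per column of Φ_v. The sizes and the atom assignment below are arbitrary placeholders; in the actual model the atom is the grid orientation closest to the fascicle's direction in that voxel:

```python
import numpy as np

rng = np.random.default_rng(2)
n_theta, n_a, n_f = 10, 8, 5   # toy sizes: directions, atoms, fascicles in one voxel

# Hypothetical dictionary: one precomputed diffusion-signal column per grid orientation.
D = rng.standard_normal((n_theta, n_a))

# Each fascicle is assigned an atom (here at random; really the nearest orientation).
atom_of_fascicle = rng.integers(0, n_a, size=n_f)

# Sparse coefficients Phi_v: a single non-zero per column (fascicle).
Phi_v = np.zeros((n_a, n_f))
Phi_v[atom_of_fascicle, np.arange(n_f)] = 1.0

# The voxel block of the model is reconstructed as in eq. (3.3).
Mv_hat = D @ Phi_v
assert np.allclose(Mv_hat, D[:, atom_of_fascicle])
```

Because Φ_v has only one non-zero per fascicle, storing it costs O(Nf) instead of O(Nθ·Nf), which is the source of the compression analyzed in Section 3.3.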
3.2 Approximation of the linear model across multiple brain voxels.

By applying the approximation introduced in equation (3.3) to every slice in tensor M in
equation (3.1), we obtain the following tensor Sparse Decomposed LiFE model, hereafter
referred to as LiFESD (Fig. 3(b)):

    Y ≈ Φ ×₁ D ×₃ wᵀ,     (3.4)

where D is a common factor in mode-1, i.e., it multiplies all lateral slices. It is noted that the
formula in the above equation (3.4) is a particular case of the Tucker decomposition [42, 16]
where the core tensor Φ is sparse [9, 11], and only factors in mode-1 (D) and mode-3 (wᵀ)
are present. By comparing equations (3.4) and (3.1), we define the LiFESD approximated
tensor model as

    M̂ = Φ ×₁ D.     (3.5)
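A quick numerical consistency check of equations (3.4)-(3.5): predicting with the factored form Φ ×₁ D ×₃ wᵀ gives the same result as first reconstructing M̂ = Φ ×₁ D and then applying the mode-3 product (toy sizes, random data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_theta, n_v, n_f, n_a = 5, 4, 6, 3       # toy sizes

# Sparse core tensor Phi: one random atom (with a random value) per (voxel, fascicle).
Phi = np.zeros((n_a, n_v, n_f))
for v in range(n_v):
    for f in range(n_f):
        Phi[rng.integers(0, n_a), v, f] = rng.standard_normal()

D = rng.standard_normal((n_theta, n_a))   # mode-1 factor (dictionary)
w = rng.standard_normal(n_f)              # fascicle weights

# LiFESD prediction, eq. (3.4): Y = Phi x_1 D x_3 w^T
Y = np.einsum("avf,ta,f->tv", Phi, D, w)

# Same prediction via the reconstructed tensor M_hat = Phi x_1 D, eq. (3.5)
M_hat = np.einsum("avf,ta->tvf", Phi, D)
assert np.allclose(Y, np.einsum("tvf,f->tv", M_hat, w))
```

The point of the decomposition is that the left-hand contraction never materializes M̂, which is what keeps the memory footprint small.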
3.3 Theoretical bound for model decomposition accuracy and data compression.

In this section, we derive a theoretical bound on the accuracy of LiFESD compared to the
original LiFE model (Proposition 3.1) and we theoretically analyze the compression factor
associated to the factorized tensor approximation (Proposition 3.2). Hereafter, we assume
that, in a given connectome having Nf fascicles, each fascicle has a fixed number of nodes
(Nn), and the diffusion weighted measurements were taken on Nθ gradient directions with
a gradient strength b. The proofs of the propositions can be found in the Supplementary
material.

Proposition 3.1 (accuracy). For a given connectome, and dictionary D obtained by
uniformly sampling the azimuth-elevation (φ, θ) space using Δφ = Δθ = π/L (see Fig.
3(c)), the following upper bound on the Frobenius-norm based model error is verified:

    ‖M − M̂‖_F ≤ (2bπ/L) √(6 Nf Nn Nθ).     (3.6)
The importance of this theoretical result is that the error is inversely proportional to the
discretization parameter L, which allows one to design the decomposed model so that a
prescribed accuracy is met.
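The 1/L behaviour can be illustrated directly at the level of single orientation vectors: snapping random unit vectors to an azimuth-elevation grid with spacing π/L keeps the Euclidean error below the √2·π/(2L) figure quoted in the caption of Fig. 3(c). The grid construction below is one plausible reading of the sampling described above, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
L = 20                                   # discretization parameter
d_ang = np.pi / L                        # grid spacing in azimuth and elevation

# Grid of atom orientations on the sphere (elevation x azimuth).
theta = np.arange(0.0, np.pi + 1e-9, d_ang)   # elevation in [0, pi]
phi = np.arange(0.0, 2 * np.pi, d_ang)        # azimuth in [0, 2*pi)
T, P = np.meshgrid(theta, phi, indexing="ij")
atoms = np.stack([np.sin(T) * np.cos(P),
                  np.sin(T) * np.sin(P),
                  np.cos(T)], axis=-1).reshape(-1, 3)

# Random fascicle orientations, snapped to the nearest atom (max dot product).
v = rng.standard_normal((1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
nearest = atoms[np.argmax(v @ atoms.T, axis=1)]

err = np.linalg.norm(v - nearest, axis=1)
bound = np.sqrt(2) * np.pi / (2 * L)     # per-vector bound from Fig. 3(c)
assert err.max() <= bound
```

Doubling L halves both the per-vector bound and, by Proposition 3.1, the Frobenius-norm model error.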
Proposition 3.2 (size reduction). For a given connectome, and a dictionary D ∈ ℝ^(Nθ × Na)
containing Na atoms (columns of matrix D), the achieved compression factor is

    CF = ( 4/(3Nθ) + Na/(3 Nn Nf) )⁻¹,     (3.7)

where CF = C(M)/C(M̂), with C(M) and C(M̂) being the storage costs of the LiFE and
LiFESD models, respectively.

It is noted that usually 3 Nn Nf ≫ Na, which implies that the compression factor can be
approximated by CF ≈ 3Nθ/4, i.e., it is proportional to the number of gradient directions Nθ.
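Equation (3.7) is easy to evaluate for concrete sizes; in the sketch below Nθ and Nf follow the STN150 experiment reported later, while Nn and Na are made-up placeholders chosen so that 3·Nn·Nf ≫ Na:

```python
def compression_factor(n_theta, n_n, n_f, n_a):
    """CF from eq. (3.7): storage cost of LiFE divided by storage cost of LiFESD."""
    return 1.0 / (4.0 / (3.0 * n_theta) + n_a / (3.0 * n_n * n_f))

# N_theta and N_f as in the STN150 experiment; N_n and N_a are hypothetical here.
cf = compression_factor(n_theta=150, n_n=100, n_f=500_000, n_a=10_000)
limit = 3 * 150 / 4                    # CF -> 3*N_theta/4 when 3*N_n*N_f >> N_a
print(cf, limit)
assert abs(cf - limit) / limit < 0.01
```

With these placeholder sizes the exact CF is already within 1% of the 3Nθ/4 limit, consistent with the approximation stated above.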
4 Model optimization using tensor encoding.
Once the LiFESD model has been built, the final step to validate a connectome requires
finding the non-negative weights that least-squares fit the measured diffusion data. This is
a convex optimization problem that can be solved using a variety of NNLS optimization
algorithms. We used a NNLS algorithm based on first-order methods specially designed for
large scale problems [22]. Next, we show how to exploit the decomposed LiFESD model in
the optimization.
The gradient of the original objective function for the LiFE model can be written as follows:

    ∇_w (1/2) ‖y − Mw‖² = MᵀMw − Mᵀy,     (4.1)

where M ∈ ℝ^(NθNv × Nf) is the original LiFE model, w ∈ ℝ^(Nf) the fascicle weights and
y ∈ ℝ^(NθNv) the demeaned diffusion signal. Because the decomposed version does not
explicitly store M, below we describe how to perform the two basic operations (y = Mw and
w = Mᵀy) using the sparse decomposition.
4.1 Computing y = Mw

Using equation (3.1), we can see that the product Mw can be computed using equation
(3.4) and vectorizing the result, i.e. y = vec(Y), where vec() stands for the vectorization
operation, i.e., converting a matrix to a vector by stacking its columns into a long vector. In
Algorithm 1, we present the steps for computing y = Mw in an efficient way.
Algorithm 1 : y = M_times_w(Φ, D, w)
Require: Decomposition components (Φ, D) and vector w ∈ ℝ^(Nf).
Ensure: y = Mw
1: Y = Φ ×₃ wᵀ; the result is a large but very sparse matrix (Na × Nv)
2: Y = DY; the result is a relatively small matrix (Nθ × Nv)
3: y = vec(Y)
4: return y;
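A compact Python rendering of Algorithm 1, storing Φ in coordinate form and checking the result against a densely materialized M (toy sizes; column-major vec() to match the matricization):

```python
import numpy as np

def m_times_w(coords, vals, D, w, n_v):
    """y = M w via the decomposition (Algorithm 1): Y = Phi x_3 w^T; Y = D Y; y = vec(Y).

    coords: (nnz, 3) integer (atom, voxel, fascicle) indices of the non-zeros of
    the sparse core tensor Phi; vals: the corresponding values.
    """
    Y = np.zeros((D.shape[1], n_v))                   # N_a x N_v, sparse in practice
    np.add.at(Y, (coords[:, 0], coords[:, 1]), vals * w[coords[:, 2]])
    return (D @ Y).reshape(-1, order="F")             # vec(): stack columns

# Toy check against a densely materialized model.
rng = np.random.default_rng(5)
n_theta, n_v, n_f, n_a, nnz = 4, 3, 5, 6, 12
coords = np.column_stack([rng.integers(0, n, nnz) for n in (n_a, n_v, n_f)])
vals = rng.standard_normal(nnz)
D = rng.standard_normal((n_theta, n_a))
w = rng.standard_normal(n_f)

Phi = np.zeros((n_a, n_v, n_f))
np.add.at(Phi, tuple(coords.T), vals)
M_dense = np.einsum("avf,ta->tvf", Phi, D).reshape(n_theta * n_v, n_f, order="F")

assert np.allclose(m_times_w(coords, vals, D, w, n_v), M_dense @ w)
```

Note the intermediate Y has shape Na × Nv, so the dense Nθ·Nv × Nf model never needs to exist in memory.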
4.2 Computing w = Mᵀy

The product w = Mᵀy can be computed using LiFESD in the following way:

    w = Mᵀy = M(3) y = Φ(3) (I ⊗ Dᵀ) y,     (4.2)

where M(3) ∈ ℝ^(Nf × NθNv) and Φ(3) ∈ ℝ^(Nf × NaNv) are the unfolding matrices [23] of tensors
M ∈ ℝ^(Nθ × Nv × Nf) and Φ ∈ ℝ^(Na × Nv × Nf), respectively; ⊗ is the Kronecker product and I is
the (Nv × Nv) identity matrix. Equation (4.2) can also be written as follows [9]:

    w = Φ(3) vec(DᵀY).     (4.3)

Because matrix Φ(3) is very sparse, we avoid computing the large and dense matrix DᵀY
and compute instead only those blocks that are multiplied by the non-zero entries in
Φ(3). This allows maintaining efficient memory usage and limits the number of CPU cycles
needed. In Algorithm 2, we present the steps for computing w = Mᵀy in an efficient way.
Algorithm 2 : w = Mtransp_times_y(Φ, D, y)
Require: Decomposition components (Φ, D) and vector y ∈ ℝ^(NθNv).
Ensure: w = Mᵀy
1: Y ∈ ℝ^(Nθ × Nv) ← y ∈ ℝ^(NθNv); reshape vector y into a matrix Y
2: [a, v, f, c] = get_nonzero_entries(Φ); a(n), v(n), f(n), c(n) indicate the atom, the voxel, the
   fascicle and the entry in tensor Φ associated to node n, respectively, with n = 1, 2, …, Nn;
3: w = 0 ∈ ℝ^(Nf); initialize weights with zeros
4: for n = 1 to Nn do
5:    w(f(n)) = w(f(n)) + D(:, a(n))ᵀ Y(:, v(n)) c(n);
6: end for
7: return w;
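Algorithm 2 can be rendered the same way; the loop touches only the non-zero entries of Φ, computing for each one the inner product D(:, a)ᵀ Y(:, v). The toy check against the dense Mᵀy uses the same coordinate encoding as before:

```python
import numpy as np

def mtransp_times_y(coords, vals, D, y, n_v, n_f):
    """w = M^T y via the decomposition (Algorithm 2), looping over non-zeros of Phi."""
    Y = y.reshape(D.shape[0], n_v, order="F")   # step 1: N_theta x N_v (column-major vec)
    w = np.zeros(n_f)
    for a, v, f, c in zip(coords[:, 0], coords[:, 1], coords[:, 2], vals):
        w[f] += (D[:, a] @ Y[:, v]) * c         # step 5: only the needed block of D^T Y
    return w

# Toy check against the dense M^T y.
rng = np.random.default_rng(6)
n_theta, n_v, n_f, n_a, nnz = 4, 3, 5, 6, 12
coords = np.column_stack([rng.integers(0, n, nnz) for n in (n_a, n_v, n_f)])
vals = rng.standard_normal(nnz)
D = rng.standard_normal((n_theta, n_a))
y = rng.standard_normal(n_theta * n_v)

Phi = np.zeros((n_a, n_v, n_f))
np.add.at(Phi, tuple(coords.T), vals)
M_dense = np.einsum("avf,ta->tvf", Phi, D).reshape(n_theta * n_v, n_f, order="F")

assert np.allclose(mtransp_times_y(coords, vals, D, y, n_v, n_f), M_dense.T @ y)
```

In a full-brain connectome the loop (vectorizable in practice) runs over the number of fascicle nodes, never over all Nθ·Nv·Nf entries.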
5 Experimental results: Validation of the theoretical bounds for model decomposition accuracy and data compression.

Here, we validate our theoretical findings by using dMRI data from subjects in a public
source (the Stanford dataset [32]). The data were collected using Nθ = 96 (STN96, five
subjects) and Nθ = 150 (STN150, one subject) directions with b-value b = 2,000 s/mm². We
performed tractography using these data with both probabilistic and deterministic methods,
in combination with Constrained Spherical Deconvolution (CSD) and the diffusion tensor
model (DTI) [41, 17, 5]. We generated candidate connectomes with Nf = 500,000 fascicles
per brain. See [10, 32, 39] for additional details on data preprocessing.

We first analyzed the accuracy of the approximated model (LiFESD) as a function of the
parameter L, which determines the number of fascicle orientations encoded in the dictionary D.
In theory, the larger the number of atoms in D, the higher the accuracy of the approximation.
We show that the model error (defined as e_M = ‖M − M̂‖_F / ‖M‖_F) decreases as a function of the
parameter L for all subjects in the dataset (Fig. 4(a)). This result validates the theoretical
upper bound in Proposition 3.1. We also solved the convex optimization problem of equation
(2.4) for both LiFE and LiFESD, and estimated the error in the weights assigned to each
fascicle by the two models (we computed the error in weights as e_w = ‖w − ŵ‖ / ‖w‖). Fig.
4(b) shows the error e_w as a function of the parameter L. It is noted that for L > 180 the
error is lower than 0.1% in all subjects.
Figure 4: Experimental results: (a) The model error e_M in approximating the matrix M with
LiFESD is inversely proportional to the parameter L, as predicted by Proposition 3.1 (e_M ≈ C/L
was fitted to the data with C = 27.78 and a fitting error equal to 2.94%). (b) Error in the weights
obtained by LiFESD compared with the original LiFE weights, e_w, as a function of parameter L.
(c)-(d) Model size (GB) scales linearly with the number of directions Nθ and the number of fascicles
Nf; however, it increases much faster for the LiFE model than for the LiFESD model. LiFESD
was computed using L = 360. (e)-(f) Probabilistic and deterministic connectomes validated with
LiFESD for a HCP subject. (g) Comparison of the root-mean-squared error (r.m.s.e., as defined in
[32]) obtained in all voxels for probabilistic and deterministic connectomes. The averaged r.m.s.e.
values are 361.12 and 423.06 for the probabilistic and deterministic cases, respectively.
Having experimentally demonstrated that the model approximation error decreases as a function
of L, we move on to demonstrate the magnitude of model compression achieved by the
tensor decomposition approach. To do so, we fixed L = 360 and computed the model size for
both LiFE and LiFESD as a function of the number of gradient directions Nθ (Fig. 4(c))
and fascicles Nf (Fig. 4(d)). Results show that, as predicted by our theoretical results in
Proposition 3.2, model size scales linearly with the number of directions for both LiFE and
LiFESD, but the difference in slope is profound. Experimentally measured compression
ratios rise to approximately 40, as is the case for the subjects in the STN150 dataset
(Nf = 500,000 and Nθ = 150).
Finally, we show an example comparison between two connectomes obtained by applying
probabilistic [17] and deterministic [4] tracking algorithms to one brain dataset (a single
subject) from the Human Connectome Project dataset [45], with Nθ = 90, Nv = 267,306
and Nf = 500,000. Figs. 4(e)-(f) show the 20 major tracts detected in a human brain using
only the fascicles with nonzero weights. In this case, the probabilistic connectome has more
fascicles (121,050) than the deterministic one (64,134). Moreover, we replicate previous
results demonstrating that probabilistic connectomes have lower error than deterministic
ones in a majority of the voxels (see Fig. 4(g)).
6 Conclusions

We introduced a method to encode brain connectomes in multidimensional arrays and
a decomposition approach that can accurately approximate the linear model for connectome
evaluation used in the LiFE method [32]. We demonstrate that the decomposition approach
dramatically reduces the memory requirements of the LiFE model, from approximately 40 GB
to 1 GB, with a small model approximation error of less than 1%. The compactness of the
decomposed LiFE model has important implications for other computational problems. For
example, model optimization can be implemented using tensorial operations, avoiding
the use of large matrices such as M and using instead the sparse tensor and prediction
dictionary (Φ and D, respectively).
Multidimensional tensors and decomposition methods have been used to help investigators
make sense of large multimodal datasets [27, 11]. Yet to date these methods have found
only a few applications in neuroscience, such as performing multi-subject, clustering and
electroencephalography analyses [49, 48, 3, 28, 26, 13, 8]. Generally, decomposition methods
have been used to find compact representations of complex data by estimating the combination
of a limited number of common meaningful factors that best fit the data [24, 27, 23]. We
propose a new application that, instead of using the decomposition to estimate latent factors,
it encodes the structure of the problem explicitly.
The new application of tensor decomposition proposed here has the potential to improve
future generations of models of connectomics, tractography evaluation and microstructure
[32, 15, 36, 39]. Improving these models will allow going beyond the current limitations of
the state of the art methods [14]. Finally, tensorial representations for brain imaging data
have the potential to contribute advancing the application of machine learning algorithms to
mapping the human connectome [18, 37, 21, 20, 30, 1, 51, 29, 40, 33].
Acknowledgments
This research was supported by grants (NSF IIS-1636893; BCS-1734853; NIH ULTTR001108) to F.P.
Data provided by Stanford University (NSF BCS 1228397). F.P. was partially supported by
the Indiana University Areas of Emergent Research initiative Learning: Brains, Machines,
Children.
References
[1] Daniel C Alexander, Darko Zikic, Aurobrata Ghosh, Ryutaro Tanno, Viktor Wottschel, Jiaying
Zhang, Enrico Kaden, Tim B Dyrby, Stamatios N Sotiropoulos, Hui Zhang, and Antonio
Criminisi. Image quality transfer and applications in diffusion MRI. NeuroImage, pages 1–65,
March 2017.
[2] Animashree Anandkumar, Rong Ge, Daniel J Hsu, and Sham M Kakade. A tensor
approach to learning mixed membership community models. Journal of Machine Learning
Research (JMLR), 15:2239–2312, 2014.
[3] Michael Barnathan, Vasileios Megalooikonomou, Christos Faloutsos, Scott Faro, and Feroze B
Mohamed. TWave: High-order analysis of functional MRI. NeuroImage, 58(2):537–548,
September 2011.
[4] P J Basser, S Pajevic, C Pierpaoli, J Duda, and A Aldroubi. In vivo fiber tractography using
DT-MRI data. Magnetic Resonance in Medicine, 44(4):625–632, October 2000.
[5] P J Basser, J Mattiello, and D Lebihan. Estimation of the effective self-diffusion tensor from
the NMR spin echo. Journal of Magnetic Resonance, Series B, 103(3):247–254, January 1994.
[6] Danielle S Bassett and Olaf Sporns. Network neuroscience. Nature Neuroscience, 20(3):353–364,
February 2017.
[7] Matteo Bastiani, Nadim Jon Shah, Rainer Goebel, and Alard Roebroeck. Human cortical
connectome reconstruction from diffusion weighted MRI: the effect of tractography algorithm.
NeuroImage, 62(3):1732–1749, 2012.
[8] C F Beckmann and S M Smith. Tensorial extensions of independent component analysis for
multisubject FMRI analysis. NeuroImage, 25(1):294–311, March 2005.
[9] Cesar F Caiafa and A Cichocki. Computing sparse representations of multidimensional signals
using Kronecker bases. Neural Computation, pages 186–220, December 2012.
[10] Cesar F Caiafa and Franco Pestilli. Multidimensional encoding of brain connectomes. Scientific
Reports, 7(1):11491, September 2017.
[11] Andrzej Cichocki, Danilo Mandic, Lieven De Lathauwer, Guoxu Zhou, Qibin Zhao, Cesar
Caiafa, and Anh Huy Phan. Tensor decompositions for signal processing applications: from
two-way to multiway component analysis. IEEE Signal Processing Magazine, 32:145–163, March
2015.
[12] Pierre Comon. Tensors: A brief introduction. IEEE Signal Processing Magazine, 31(3):44–53,
April 2014.
[13] Fengyu Cong, Qiu-Hua Lin, Li-Dan Kuang, Xiao-Feng Gong, Piia Astikainen, and Tapani
Ristaniemi. Tensor decomposition of EEG signals: a brief review. Journal of Neuroscience
Methods, 248:59–69, 2015.
[14] Alessandro Daducci, Alessandro Dal Palù, Maxime Descoteaux, and Jean-Philippe Thiran.
Microstructure Informed Tractography: Pitfalls and Open Challenges. Frontiers in Neuroscience,
10(8):1374–13, June 2016.
[15] Alessandro Daducci, Alessandro Dal Palù, Alia Lemkaddem, and Jean-Philippe Thiran. COMMIT:
Convex optimization modeling for microstructure informed tractography. Medical Imaging,
IEEE Transactions on, 34(1):246–257, January 2015.
[16] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. A multilinear singular value
decomposition. SIAM J. Matrix Anal. Appl., 21(4):1253–1278, 2000.
[17] M Descoteaux, R Deriche, T R Knosche, and A Anwander. Deterministic and Probabilistic
Tractography Based on Complex Fibre Orientation Distributions. Medical Imaging, IEEE
Transactions on, 28(2):269–286, January 2009.
[18] Andrew T Drysdale, Logan Grosenick, Jonathan Downar, Katharine Dunlop, Farrokh Mansouri,
Yue Meng, Robert N Fetcho, Benjamin Zebley, Desmond J Oathes, Amit Etkin, Alan F
Schatzberg, Keith Sudheimer, Jennifer Keller, Helen S Mayberg, Faith M Gunning, George S
Alexopoulos, Michael D Fox, Alvaro Pascual-Leone, Henning U Voss, B J Casey, Marc J Dubin,
and Conor Liston. Resting-state connectivity biomarkers define neurophysiological subtypes of
depression. Nature Medicine, pages 1–16, December 2016.
[19] John R Gilbert, Cleve Moler, and Robert Schreiber. Sparse matrices in MATLAB: design and
implementation. SIAM Journal on Matrix Analysis and Applications, 13(1):333–356, January
1992.
[20] Matthew F Glasser, Timothy S Coalson, Emma C Robinson, Carl D Hacker, John Harwell, Essa
Yacoub, Kamil Ugurbil, Jesper Andersson, Christian F Beckmann, Mark Jenkinson, Stephen M
Smith, and David C Van Essen. A multi-modal parcellation of human cerebral cortex. Nature,
536(7615):171–178, August 2016.
[21] Heather Cody Hazlett, Hongbin Gu, Brent C Munsell, Sun Hyung Kim, Martin Styner, Jason J
Wolff, Jed T Elison, Meghan R Swanson, Hongtu Zhu, Kelly N Botteron, D Louis Collins,
John N Constantino, Stephen R Dager, Annette M Estes, Alan C Evans, Vladimir S Fonov,
Guido Gerig, Penelope Kostopoulos, Robert C McKinstry, Juhi Pandey, Sarah Paterson, John R
Pruett, Robert T Schultz, Dennis W Shaw, Lonnie Zwaigenbaum, and Joseph Piven. Early
brain development in infants at high risk for autism spectrum disorder. Nature, 542(7641):348–351,
February 2017.
[22] Dongmin Kim, Suvrit Sra, and Inderjit S Dhillon. A non-monotonic method for large-scale
non-negative least squares. Optimization Methods and Software, 28(5):1012–1039, October
2013.
[23] T G Kolda and B W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500,
2009.
[24] Pieter M Kroonenberg. Applied Multiway Data Analysis. John Wiley & Sons, February 2008.
[25] Junning Li, Yonggang Shi, and Arthur W Toga. Mapping Brain Anatomical Connectivity Using
Diffusion Magnetic Resonance Imaging: Structural connectivity of the human brain. IEEE
Signal Processing Magazine, 33(3):36–51, April 2016.
[26] F Miwakeichi, E Martínez-Montes, P A Valdés-Sosa, N Nishiyama, H Mizuhara, and Y Yamaguchi.
Decomposing EEG Data into Space–time–frequency Components using Parallel Factor
Analysis. NeuroImage, 22(3):1035–1045, July 2004.
[27] M Mørup. Applications of tensor (multiway array) factorizations and decompositions in data
mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):24–40,
January 2011.
[28] Morten Mørup, Lars Kai Hansen, Christoph S Herrmann, Josef Parnas, and Sidse M Arnfred.
Parallel Factor Analysis as an exploratory tool for wavelet transformed event-related EEG.
NeuroImage, 29(3):938–947, 2006.
[29] Gemma L Nedjati-Gilani, Torben Schneider, Matt G Hall, Niamh Cawley, Ioana Hill, Olga
Ciccarelli, Ivana Drobnjak, Claudia A M Gandini Wheeler-Kingshott, and Daniel C Alexander.
Machine learning based compartment models with permeability for white matter microstructure
imaging. NeuroImage, 150:119–135, April 2017.
[30] Peter Florian Neher, Marc-Alexandre Côté, Jean-Christophe Houde, Maxime Descoteaux, and
Klaus H Maier-Hein. Fiber tractography using machine learning. bioRxiv, pages 1–20, January
2017.
[31] Eleftheria Panagiotaki, Torben Schneider, Bernard Siow, Matt G Hall, Mark F Lythgoe, and
Daniel C Alexander. Compartment models of the diffusion MR signal in brain white matter: A
taxonomy and comparison. NeuroImage, 59(3):2241–2254, February 2012.
[32] Franco Pestilli, Jason D Yeatman, Ariel Rokem, Kendrick N Kay, and Brian A Wandell.
Evaluation and statistical inference for human connectomes. Nature Methods, 11(10):1058–1063,
September 2014.
[33] Ariel Rokem, Hiromasa Takemura, Andrew S Bock, K Suzanne Scherf, Marlene Behrmann,
Brian A Wandell, Ione Fine, Holly Bridge, and Franco Pestilli. The visual white matter: The
application of diffusion MRI and fiber tractography to vision science. Journal of Vision, 17(2):4,
February 2017.
[34] Ariel Rokem, Jason D Yeatman, Franco Pestilli, Kendrick N Kay, Aviv Mezer, Stefan van der
Walt, and Brian A Wandell. Evaluating the accuracy of diffusion MRI models in white matter.
PLoS ONE, 10(4):e0123272, April 2015.
[35] Parikshit Shah, Nikhil S Rao, and Gongguo Tang. Sparse and Low-Rank Tensor Decomposition.
NIPS, 2015.
[36] Robert E Smith, Jacques-Donald Tournier, Fernando Calamante, and Alan Connelly. SIFT2:
Enabling dense quantitative assessment of brain white matter connectivity using streamlines
tractography. NeuroImage, 119(C):338–351, October 2015.
[37] Stephen M Smith, Thomas E Nichols, Diego Vidaurre, Anderson M Winkler, Timothy E J
Behrens, Matthew F Glasser, Kamil Ugurbil, Deanna M Barch, David C Van Essen, and
Karla L Miller. A positive-negative mode of population covariation links brain connectivity,
demographics and behavior. Nature Neuroscience, 18(11):1565–1567, September 2015.
[38] Olaf Sporns. Making sense of brain network data. Nature Methods, 10(6):491–493, May 2013.
[39] Hiromasa Takemura, Cesar F Caiafa, Brian A Wandell, and Franco Pestilli. Ensemble Tractography.
PLoS Computational Biology, 12(2):e1004692, February 2016.
[40] Chantal M W Tax, Tom Dela Haije, Andrea Fuster, Carl-Fredrik Westin, Max A Viergever, Luc
Florack, and Alexander Leemans. Sheet Probability Index (SPI): Characterizing the geometrical
organization of the white matter with diffusion MRI. NeuroImage, pages 1–53, July 2016.
[41] J-Donald Tournier, Fernando Calamante, and Alan Connelly. MRtrix: Diffusion tractography
in crossing fiber regions. International Journal of Imaging Systems and Technology, 22(1):53–66,
February 2012.
[42] L R Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311,
September 1966.
[43] M P Van den Heuvel and O Sporns. Rich-Club Organization of the Human Connectome.
Journal of Neuroscience, 31(44):15775–15786, November 2011.
[44] Martijn P Van den Heuvel, Edward T Bullmore, and Olaf Sporns. Comparative Connectomics.
Trends in Cognitive Sciences, 20(5):345–361, 2016.
[45] David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy E J Behrens, Essa Yacoub,
Kamil Ugurbil, and the WU-Minn HCP Consortium. The WU-Minn Human Connectome
Project: An overview. NeuroImage, 80(C):62–79, October 2013.
[46] Brian A Wandell. Clarifying Human White Matter. Annual Review of Neuroscience, 39(1):103–128,
July 2016.
[47] Kishan Wimalawarne, Masashi Sugiyama, and Ryota Tomioka. Multitask learning meets tensor
factorization: task imputation via convex optimization. NIPS, 2014.
[48] Yeyang Yu, Jin Jin, Feng Liu, and Stuart Crozier. Multidimensional Compressed Sensing MRI
Using Tensor Decomposition-Based Sparsifying Transform. PLoS ONE, 9(6):e98441, June 2014.
[49] Qibin Zhao, C F Caiafa, D P Mandic, Z C Chao, Y Nagasaka, N Fujii, Liqing Zhang, and
A Cichocki. Higher Order Partial Least Squares (HOPLS): A Generalized Multilinear Regression
Method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1660–1673,
May 2013.
[50] Qibin Zhao, Cesar F Caiafa, Danilo P Mandic, Liqing Zhang, Tonio Ball, Andreas Schulze-Bonhage,
and Andrzej S Cichocki. Multilinear Subspace Regression: An Orthogonal Tensor
Decomposition Approach. In J Shawe-Taylor, R S Zemel, P L Bartlett, F Pereira, and K Q
Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1269–1277.
Curran Associates, Inc., 2011.
[51] D Zhu, N Jahanshad, B C Riedel, and L Zhan. Population learning of structural connectivity
by white matter encoding and decoding. In 2016 IEEE 13th International Symposium on
Biomedical Imaging (ISBI), pages 554–558. IEEE, 2016.
Thomas Bonald
Telecom ParisTech
[email protected]
Richard Combes
Centrale-Supelec / L2S
[email protected]
Abstract
We consider the problem of accurately estimating the reliability of workers based
on noisy labels they provide, which is a fundamental question in crowdsourcing.
We propose a novel lower bound on the minimax estimation error which applies
to any estimation procedure. We further propose Triangular Estimation (TE), an
algorithm for estimating the reliability of workers. TE has low complexity, may
be implemented in a streaming setting when labels are provided by workers in real
time, and does not rely on an iterative procedure. We prove that TE is minimax
optimal and matches our lower bound. We conclude by assessing the performance
of TE and other state-of-the-art algorithms on both synthetic and real-world data.
1 Introduction
The performance of many machine learning techniques, and in particular data classification, strongly
depends on the quality of the labeled data used in the initial training phase. A common way to label
new datasets is through crowdsourcing: many workers are asked to label data, typically texts or
images, in exchange of some low payment. Of course, crowdsourcing is prone to errors due to
the difficulty of some classification tasks, the low payment per task and the repetitive nature of the
job. Some workers may even introduce errors on purpose. Thus it is essential to assign the same
classification task to several workers and to learn the reliability of each worker through her past
activity so as to minimize the overall error rate and to improve the quality of the labeled dataset.
Learning the reliability of each worker is a tough problem because the true label of each task, the
so-called ground truth, is unknown; it is precisely the objective of crowdsourcing to guess the true
label. Thus the reliability of each worker must be inferred from the comparison of her labels on
some set of tasks with those of other workers on the same set of tasks.
In this paper, we consider binary labels and study the problem of estimating the workers reliability
based on the answers they provide to tasks. We make two novel contributions to that problem:
(i) We derive a lower bound on the minimax estimation error which applies to any estimator of
the workers reliability. In doing so we identify "hard" instances of the problem, and show that the
minimax error depends on two factors: the reliability of the three most informative workers and the
mean reliability of all workers.
(ii) We propose TE (Triangular Estimation), a novel algorithm for estimating the reliability of each
worker based on the correlations between triplets of workers. We analyze the performance of TE and
prove that it is minimax optimal in the sense that it matches the lower bound we previously derived.
Unlike most prior work, we provide non-asymptotic performance guarantees which hold even for a
finite number of workers and tasks. As our analysis reveals, non-asymptotic performance guarantees
require to use finer concentration arguments than asymptotic ones.
TE has low complexity in terms of memory space and computation time, does not require to store
the whole data set in memory and can be easily applied in a setting in which answers to tasks arrive
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
sequentially, i.e., in a streaming setting. Finally, we compare the performance of TE to state-of-the-art algorithms through numerical experiments using both synthetic and real datasets.
2 Related Work
The first problems of data classification using independent workers appeared in the medical context, where each label refers to the state of a patient (e.g., sick or sane) and the workers are clinicians. [Dawid and Skene, 1979] proposed an expectation-maximization (EM) algorithm, admitting
that the accuracy of the estimate was unknown. Several versions and extensions of this algorithm
have since been proposed and tested in various settings [Hui and Walter, 1980, Smyth et al., 1995,
Albert and Dodd, 2004, Raykar et al., 2010, Liu et al., 2012].
A number of Bayesian techniques have also been proposed and applied to this problem by
[Raykar et al., 2010, Welinder and Perona, 2010, Karger et al., 2011, Liu et al., 2012, Karger et al.,
2014, 2013] and references therein. Of particular interest is the belief-propagation (BP) algorithm
of [Karger et al., 2011], which is provably order-optimal in terms of the number of workers required
per task for any given target error rate, in the limit of an infinite number of tasks and an infinite
population of workers.
Another family of algorithms is based on the spectral analysis of some matrix representing the
correlations between tasks or workers. [Ghosh et al., 2011] work on the task-task matrix whose
entries correspond to the number of workers having labeled two tasks in the same manner, while
[Dalvi et al., 2013] work on the worker-worker matrix whose entries correspond to the number of
tasks labeled in the same manner by two workers. Both obtain performance guarantees by the
perturbation analysis of the top eigenvector of the corresponding expected matrix. The BP algorithm
of Karger, Oh and Shah is in fact closely related to these spectral algorithms: their message-passing
scheme is very similar to the power-iteration method applied to the task-worker matrix, as observed
in [Karger et al., 2011].
Two notable recent contributions are [Chao and Dengyong, 2015] and [Zhang et al., 2014]. The
former provides performance guarantees for two versions of EM, and derives lower bounds on the
attainable prediction error (the probability of estimating labels incorrectly). The latter provides
lower bounds on the estimation error of the workers' reliability as well as performance guarantees
for an improved version of EM relying on spectral methods in the initialization phase. Our lower
bound cannot be compared to that of [Chao and Dengyong, 2015] because it applies to the workers'
reliability and not the prediction error; and our lower bound is tighter than that of [Zhang et al.,
2014]. Our estimator shares some features of the algorithm proposed by [Zhang et al., 2014] to
initialize EM, which suggests that the EM phase itself is not essential to attain minimax optimality.
All these algorithms require the storage of all labels in memory and, to the best of our knowledge,
the only known streaming algorithm is the recursive EM algorithm of [Wang et al., 2013], for which
no performance guarantees are available.
The remainder of the paper is organized as follows. In section 3 we state the problem and introduce
our notations. The important question of identifiability is addressed in section 4. In section 5 we
present a lower bound on the minimax error rate of any estimator. In section 6 we present TE, discuss
its complexity and prove that it is minimax optimal. In section 7 we present numerical experiments
on synthetic and real-world data sets and section 8 concludes the paper. Due to space constraints,
we only provide proof outlines for our two main results in this document. Complete proofs are
presented in the supplementary material.
3 Model
Consider $n$ workers, for some integer $n \ge 3$. Each task consists in determining the answer to a binary question. The answer to task $t$, the "ground-truth", is denoted by $G(t) \in \{+1, -1\}$. We assume that the random variables $G(1), G(2), \ldots$ are i.i.d. and centered, so that there is no bias towards one of the answers.

Each worker provides an answer with probability $\alpha \in (0, 1]$. When worker $i \in \{1, \ldots, n\}$ provides an answer, this answer is correct with probability $\frac{1}{2}(1 + \theta_i)$, independently of the other workers, for some parameter $\theta_i \in [-1, 1]$ that we refer to as the reliability of worker $i$. If $\theta_i > 0$ then worker $i$ tends to provide correct answers; if $\theta_i < 0$ then worker $i$ tends to provide incorrect answers; if $\theta_i = 0$ then worker $i$ is non-informative. We denote by $\theta = (\theta_1, \ldots, \theta_n)$ the reliability vector. Both $\theta$ and $\alpha$ are unknown.

Let $X_i(t) \in \{-1, 0, 1\}$ be the output of worker $i$ for task $t$, where the output 0 corresponds to the absence of an answer. We have:

$$X_i(t) = \begin{cases} G(t) & \text{w.p. } \alpha\,\frac{1+\theta_i}{2}, \\ -G(t) & \text{w.p. } \alpha\,\frac{1-\theta_i}{2}, \\ 0 & \text{w.p. } 1-\alpha. \end{cases} \qquad (1)$$

Since the workers are independent, the random variables $X_1(t), \ldots, X_n(t)$ are independent given $G(t)$, for each task $t$. We denote by $X(t)$ the corresponding vector. The goal is to estimate the ground-truth $G(t)$ as accurately as possible by designing an estimator $\hat{G}(t)$ that minimizes the error probability $\mathbb{P}(\hat{G}(t) \ne G(t))$. The estimator $\hat{G}(t)$ is adaptive and may be a function of $X(1), \ldots, X(t)$ but not of the unknown parameters $\theta, \alpha$.
It is well-known that, given $\theta$ and $\alpha = 1$, an optimal estimator of $G(t)$ is the weighted majority vote [Nitzan and Paroush, 1982, Shapley and Grofman, 1984], namely

$$\hat{G}(t) = \mathbb{1}\{W(t) > 0\} - \mathbb{1}\{W(t) < 0\} + Z\,\mathbb{1}\{W(t) = 0\}, \qquad (2)$$

where $W(t) = \frac{1}{n}\sum_{i=1}^n w_i X_i(t)$, $w_i = \ln\left(\frac{1+\theta_i}{1-\theta_i}\right)$ is the weight of worker $i$ (possibly infinite), and $Z$ is a Bernoulli random variable of parameter $\frac{1}{2}$ over $\{+1, -1\}$ (for random tie-breaking). We prove this result for any $\alpha \in (0, 1]$.

Proposition 1 Assuming that $\theta$ is known, the estimator (2) is an optimal estimator of $G(t)$.

Proof. Finding an optimal estimator of $G(t)$ amounts to finding an optimal statistical test between hypotheses $\{G(t) = +1\}$ and $\{G(t) = -1\}$, under a symmetry constraint so that type I and type II error probabilities are equal. For any $x \in \{-1, 0, 1\}^n$, let $L^+(x)$ and $L^-(x)$ be the probabilities that $X(t) = x$ under hypotheses $\{G(t) = +1\}$ and $\{G(t) = -1\}$, respectively. We have

$$L^+(x) = H(x) \prod_{i=1}^n (1+\theta_i)^{\mathbb{1}\{x_i=+1\}}(1-\theta_i)^{\mathbb{1}\{x_i=-1\}}, \qquad L^-(x) = H(x) \prod_{i=1}^n (1+\theta_i)^{\mathbb{1}\{x_i=-1\}}(1-\theta_i)^{\mathbb{1}\{x_i=+1\}},$$

where $\nu = \sum_{i=1}^n |x_i|$ is the number of answers and $H(x) = 2^{-\nu}\alpha^\nu(1-\alpha)^{n-\nu}$. We deduce the log-likelihood ratio $\ln\frac{L^+(x)}{L^-(x)} = \sum_{i=1}^n w_i x_i$. By the Neyman-Pearson theorem, for any level of significance, there exist $a$ and $b$ such that the uniformly most powerful test for that level is: $\mathbb{1}\{w^\top x > a\} - \mathbb{1}\{w^\top x < a\} + Z\,\mathbb{1}\{w^\top x = a\}$, where $Z$ is a Bernoulli random variable of parameter $b$ over $\{+1, -1\}$. By symmetry, we must have $a = 0$ and $b = \frac{1}{2}$, as announced.
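The oracle rule (2) translates directly into code when $\theta$ is known. A minimal sketch; the function name and the use of NumPy are ours, not from the paper:

```python
import numpy as np

def weighted_majority_vote(x, theta, rng=None):
    """Oracle estimate of G(t) from one answer vector x in {-1, 0, +1}^n,
    given known reliabilities theta, as in equation (2)."""
    rng = rng or np.random.default_rng()
    theta = np.clip(np.asarray(theta, dtype=float), -1 + 1e-12, 1 - 1e-12)
    w = np.log((1 + theta) / (1 - theta))  # worker weights w_i
    score = float(np.dot(w, x))            # proportional to W(t)
    if score > 0:
        return 1
    if score < 0:
        return -1
    return int(rng.choice([-1, 1]))        # random tie-breaking (the variable Z)
```

Clipping $\theta$ away from $\pm 1$ avoids infinite weights; the paper allows $w_i = \pm\infty$, which amounts to trusting a perfectly (un)reliable worker unconditionally.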
This result shows that estimating the true answer G(t) reduces to estimating the unknown parameters
? and ?, which is the focus of the paper. Note that the problem of estimating ? is important in itself,
due to the presence of "spammers" (i.e., workers with low reliability); a good estimator can be used
by the crowdsourcing platform to incentivize good workers.
4 Identifiability
Estimating $\theta$ and $\alpha$ from $X(1), \ldots, X(t)$ is not possible unless we have identifiability, namely there cannot exist two distinct sets of parameters $\alpha, \theta$ and $\alpha', \theta'$ under which the distribution of $X(1), \ldots, X(t)$ is the same. Let $X \in \{-1, 0, 1\}^n$ be any sample, for some parameters $\alpha \in (0, 1]$ and $\theta \in [-1, 1]^n$. The parameter $\alpha$ is clearly identifiable since $\alpha = \mathbb{P}(X_1 \ne 0)$. The identifiability of $\theta$ is less obvious. Assume for instance that $\theta_i = 0$ for all $i \ge 3$. It follows from (1) that for any $x \in \{-1, 0, 1\}^n$, with $H(x)$ defined as in the proof of Proposition 1:

$$\mathbb{P}(X = x) = H(x) \times \begin{cases} 1 + \theta_1\theta_2 & \text{if } x_1 x_2 = 1, \\ 1 - \theta_1\theta_2 & \text{if } x_1 x_2 = -1, \\ 1 & \text{if } x_1 x_2 = 0. \end{cases}$$

In particular, two parameters $\theta, \theta'$ such that $\theta_1\theta_2 = \theta_1'\theta_2'$ and $\theta_i = \theta_i' = 0$ for all $i \ge 3$ cannot be distinguished. Similarly, by symmetry, two parameters $\theta, \theta'$ such that $\theta' = -\theta$ cannot be distinguished. Let:

$$\Theta = \left\{\theta \in [-1, 1]^n : \sum_{i=1}^n \mathbb{1}\{\theta_i \ne 0\} \ge 3, \ \sum_{i=1}^n \theta_i > 0\right\}.$$

The first condition states that there are at least 3 informative workers, the second that the average reliability is positive.

Proposition 2 Any parameter $\theta \in \Theta$ is identifiable.

Proof. Any parameter $\theta \in \Theta$ can be expressed as a function of the covariance matrix of $X$ (section 6 below): the absolute value and the sign of $\theta$ follow from (4) and (5), respectively.
5 Lower bound on the minimax error

The estimation of $\alpha$ is straightforward and we here focus on the best estimation of $\theta$ one can expect, assuming $\alpha$ is known. Specifically, we derive a lower bound on the minimax error of any estimator $\hat\theta$ of $\theta$. Define $\|\hat\theta - \theta\|_\infty = \max_{i=1,\ldots,n} |\hat\theta_i - \theta_i|$ and for all $\theta \in [-1,1]^n$, $A(\theta) = \min_k \max_{i,j \ne k} |\theta_i\theta_j|$ and $B(\theta) = \sum_{i=1}^n \theta_i$.

Observe that $\Theta = \{\theta \in [-1,1]^n : A(\theta) > 0, B(\theta) > 0\}$. This suggests that the estimation of $\theta$ becomes hard when either $A(\theta)$ or $B(\theta)$ is small. Define for any $a, b \in (0,1)$, $\Theta_{a,b} = \{\theta \in [-1,1]^n : A(\theta) \ge a,\ B(\theta) \ge b\}$. We have the following lower bound on the minimax error. As the proof reveals, the parameters $a$ and $b$ characterize the difficulty of estimating the absolute value and the sign of $\theta$, respectively.

Theorem 1 (Minimax error) Consider any estimator $\hat\theta(t)$ of $\theta$. For any $\epsilon \in (0, \min(a, (1-a)/2, 1/4))$ and $\delta \in (0, 1/4)$, we have

$$\min_{\theta \in \Theta_{a,b}} \mathbb{P}\left(\|\hat\theta(t) - \theta\|_\infty \ge \epsilon\right) \ge \delta, \quad \forall t \le \max(T_1, T_2),$$

with $T_1 = c_1 \frac{1-a}{\alpha^2 a^4 \epsilon^2} \ln\frac{1}{4\delta}$, $T_2 = c_2 \frac{(n-4)(1-a)^4}{\alpha^2 a^2 b^2} \ln\frac{1}{4\delta}$ and $c_1, c_2 > 0$ two universal constants.

Outline of proof. The proof is based on an information theoretic argument. Denote by $\mathbb{P}_\theta$ the distribution of $X$ under parameter $\theta \in \Theta$, and $D(\cdot\|\cdot)$ the Kullback-Leibler (KL) divergence. The main element of proof is lemma 1, where we bound $D(\mathbb{P}_{\theta'}\|\mathbb{P}_\theta)$ for two well chosen pairs of parameters. The pair $\theta, \theta'$ in statement (i) is hard to distinguish when $a$ is small, hence it is hard to estimate the absolute value of $\theta$. The pair $\theta, \theta'$ of statement (ii) is also hard to distinguish when $a$ or $b$ are small, which shows that it is difficult to estimate the sign of $\theta$. Proving lemma 1 is involved because of the particular form of distribution $\mathbb{P}_\theta$, and requires careful manipulations of the likelihood ratio. We conclude by reduction to a binary hypothesis test between $\theta$ and $\theta'$ using lemma 2.

Lemma 1 (i) Let $a \in (0,1)$, $\theta = (1, a, a, 0, \ldots, 0)$ and $\theta' = (1-2\epsilon, \frac{a}{1-2\epsilon}, \frac{a}{1-2\epsilon}, 0, \ldots, 0)$. Then:

$$D(\mathbb{P}_{\theta'}\|\mathbb{P}_\theta) \le \frac{1}{c_1} \frac{\alpha^2 a^4 \epsilon^2}{1-a}.$$

(ii) Let $n > 4$, define $c = b/(n-4)$, and $\theta = (a, a, -a, -a, c, \ldots, c)$, $\theta' = (-a, -a, a, a, c, \ldots, c)$. Then:

$$D(\mathbb{P}_{\theta'}\|\mathbb{P}_\theta) \le \frac{1}{c_2} \frac{\alpha^2 a^2 b^2}{(n-4)(1-a)^4}.$$

Lemma 2 [Tsybakov, 2008, Theorem 2.2] Consider any estimator $\hat\theta(t)$. For any $\theta, \theta' \in \Theta$ with $\|\theta - \theta'\|_\infty \ge 2\epsilon$ we have:

$$\min\left(\mathbb{P}_\theta(\|\hat\theta(t) - \theta\|_\infty \ge \epsilon),\ \mathbb{P}_{\theta'}(\|\hat\theta(t) - \theta'\|_\infty \ge \epsilon)\right) \ge \frac{1}{4} \exp\left(-t\, D(\mathbb{P}_{\theta'}\|\mathbb{P}_\theta)\right).$$

Relation with prior work. The lower bound derived in [Zhang et al., 2014][Theorem 3] shows that the minimax error of any estimator $\hat\theta$ must be greater than $O((\alpha t)^{-1/2})$. Our lower bound is stricter, and shows that the minimax error is in fact greater than $O(a^{-2}\alpha^{-1}t^{-1/2})$. Another lower bound was derived in [Chao and Dengyong, 2015][Theorems 3.4 and 3.5], but this concerns the prediction error rate, that is $\mathbb{P}(\hat{G} \ne G)$, so that it cannot be easily compared to our result.
6 Triangular estimation

We here present our estimator. The absolute value of the reliability of each worker $k$ is estimated through the correlation of her answers with those of the most informative pair $i, j \ne k$. We refer to this algorithm as triangular estimation (TE). The sign of the reliability of each worker is estimated in a second step. We use the convention that $\mathrm{sign}(0) = +$.

Covariance matrix. Let $X \in \{-1, 0, 1\}^n$ be any sample, for some parameters $\alpha \in (0, 1]$ and $\theta \in \Theta$. We shall see that the parameter $\theta$ could be recovered exactly if the covariance matrix of $X$ were perfectly known. For any $i \ne j$, let $C_{ij}$ be the covariance of $X_i$ and $X_j$ given that $X_i X_j \ne 0$ (that is, both workers $i$ and $j$ provide an answer). In view of (1),

$$C_{ij} = \frac{\mathbb{E}(X_i X_j)}{\mathbb{E}(|X_i X_j|)} = \theta_i \theta_j. \qquad (3)$$

In particular, for any distinct indices $i, j, k$, $C_{ik} C_{jk} = \theta_i \theta_j \theta_k^2 = C_{ij} \theta_k^2$. We deduce that, for any $k = 1, \ldots, n$ and any pair $i, j \ne k$ such that $C_{ij} \ne 0$,

$$\theta_k^2 = \frac{C_{ik} C_{jk}}{C_{ij}}. \qquad (4)$$

Note that such a pair exists for each $k$ because $\theta \in \Theta$. To recover the sign of $\theta_k$, we use the fact that $\theta_k \sum_{i=1}^n \theta_i = \theta_k^2 + \sum_{i \ne k} C_{ik}$. Since $\theta \in \Theta$, we get

$$\mathrm{sign}(\theta_k) = \mathrm{sign}\Big(\theta_k^2 + \sum_{i \ne k} C_{ik}\Big). \qquad (5)$$
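The identities (3)-(5) are easy to check numerically on an exact covariance matrix $C_{ij} = \theta_i\theta_j$. The following sanity check is our own illustration, not part of the paper:

```python
import numpy as np

n = 4
theta = np.array([0.8, -0.5, 0.6, 0.3])  # a parameter in Theta: 3+ nonzero entries, positive sum
C = np.outer(theta, theta)               # exact C_ij = theta_i * theta_j (off-diagonal entries)

# Identity (4): theta_k^2 = C_ik * C_jk / C_ij for any pair i, j != k with C_ij != 0
k, i, j = 0, 1, 2
assert np.isclose(C[i, k] * C[j, k] / C[i, j], theta[k] ** 2)

# Identity (5): sign(theta_k) = sign(theta_k^2 + sum_{i != k} C_ik)
for k in range(n):
    s = theta[k] ** 2 + sum(C[i, k] for i in range(n) if i != k)
    assert np.sign(s) == np.sign(theta[k])
```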
The TE algorithm consists in estimating the covariance matrix to recover $\theta$ from the above expressions.

TE algorithm. At any time $t$, define

$$\hat{C}_{ij} = \frac{\sum_{s=1}^t X_i(s) X_j(s)}{\max\left(\sum_{s=1}^t |X_i(s) X_j(s)|,\ 1\right)}, \quad \forall i, j = 1, \ldots, n. \qquad (6)$$

For all $k = 1, \ldots, n$, find the most informative pair $(i_k, j_k) \in \arg\max_{i \ne j \ne k} |\hat{C}_{ij}|$ and let

$$|\hat\theta_k| = \begin{cases} \sqrt{\left|\frac{\hat{C}_{i_k k} \hat{C}_{j_k k}}{\hat{C}_{i_k j_k}}\right|} & \text{if } |\hat{C}_{i_k j_k}| > 0, \\ 0 & \text{otherwise.} \end{cases}$$

Next, define $k^* = \arg\max_k \left(\hat\theta_k^2 + \sum_{i \ne k} \hat{C}_{ik}\right)$ and let

$$\mathrm{sign}(\hat\theta_k) = \begin{cases} \mathrm{sign}\left(\hat\theta_{k^*}^2 + \sum_{i \ne k^*} \hat{C}_{ik^*}\right) & \text{if } k = k^*, \\ \mathrm{sign}(\hat\theta_{k^*} \hat{C}_{kk^*}) & \text{otherwise.} \end{cases}$$

Complexity. First note that the TE algorithm is a streaming algorithm since $\hat{C}_{ij}$ can be written

$$\hat{C}_{ij} = \frac{M_{ij}}{\max(N_{ij}, 1)} \quad \text{with} \quad M_{ij} = \sum_{s=1}^t X_i(s) X_j(s) \quad \text{and} \quad N_{ij} = \sum_{s=1}^t |X_i(s) X_j(s)|.$$

Thus TE requires $O(n^2)$ memory space (to store the matrices $M$ and $N$) and has a time complexity of $O(n^2 \ln(n))$ per task: $O(n^2)$ operations to update $\hat{C}$, $O(n^2 \ln(n))$ operations to sort the entries of $|\hat{C}|$, $O(n^2)$ operations to compute $|\hat\theta|$, $O(n^2)$ operations to compute the sign of $\hat\theta$.
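For concreteness, the estimator can be sketched as follows. This is our own unoptimized implementation of equations (4)-(6): it searches the most informative pairs by brute force in $O(n^3)$ instead of the sorting scheme described above, and recomputes $\hat{C}$ in batch rather than maintaining $M$ and $N$ incrementally.

```python
import numpy as np

def triangular_estimation(X):
    """TE estimate of theta from an array X of shape (t, n) with entries in {-1, 0, +1}.
    Unoptimized sketch of the estimator defined by equations (4)-(6)."""
    t, n = X.shape
    M = X.T @ X                        # M_ij = sum_s X_i(s) X_j(s)
    N = np.abs(X).T @ np.abs(X)        # N_ij = sum_s |X_i(s) X_j(s)|
    C = M / np.maximum(N, 1)           # empirical correlations, equation (6)
    np.fill_diagonal(C, 0.0)

    theta = np.zeros(n)
    for k in range(n):                 # |theta_k| via the most informative pair, equation (4)
        best, bi, bj = 0.0, -1, -1
        for i in range(n):
            for j in range(n):
                if len({i, j, k}) == 3 and abs(C[i, j]) > best:
                    best, bi, bj = abs(C[i, j]), i, j
        if best > 0:
            theta[k] = np.sqrt(abs(C[bi, k] * C[bj, k] / C[bi, bj]))

    scores = theta ** 2 + C.sum(axis=0)                     # theta_k^2 + sum_{i != k} C_ik
    k_star = int(np.argmax(scores))
    theta[k_star] *= 1.0 if scores[k_star] >= 0 else -1.0   # sign at k*, equation (5)
    for k in range(n):
        if k != k_star:                # remaining signs from sign(theta_k* * C_kk*)
            theta[k] *= 1.0 if theta[k_star] * C[k, k_star] >= 0 else -1.0
    return theta
```

The `>= 0` tests implement the convention $\mathrm{sign}(0) = +$; on data drawn from the model with $\theta \in \Theta$, the output approaches $\theta$ as $t$ grows.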
Minimax optimality. The following result shows that the proposed estimator is minimax optimal. Namely the sample complexity of our estimator matches the lower bound up to an additive logarithmic term $\ln(n)$ and a multiplicative constant.

Theorem 2 Let $\theta \in \Theta_{a,b}$ and denote by $\hat\theta(t)$ the estimator defined above. For any $\epsilon \in (0, \min(\frac{b}{3}, 1))$ and $\delta \in (0, 1)$, we have

$$\mathbb{P}\left(\|\hat\theta(t) - \theta\|_\infty \ge \epsilon\right) \le \delta, \quad \forall t \ge \max(T_1', T_2'),$$

with $T_1' = c_1' \frac{1}{\alpha^2 a^4 \epsilon^2} \ln\frac{6n^2}{\delta}$, $T_2' = c_2' \frac{n^2}{\alpha^2 a^2 b^2} \ln\frac{4n^2}{\delta}$, and $c_1', c_2' > 0$ two universal constants.

Outline of proof. Define $\|\hat{C} - C\|_\infty = \max_{i,j: i \ne j} |\hat{C}_{ij} - C_{ij}|$. The TE estimator is a function of the empirical pairwise correlations $(\hat{C}_{ij})_{i,j}$ and the sums $\sum_{j \ne i} \hat{C}_{ij}$. The main difficulty is to prove lemma 3, a concentration inequality for $\sum_{j \ne i} \hat{C}_{ij}$.

Lemma 3 For all $i = 1, \ldots, n$ and all $\epsilon > 0$,

$$\mathbb{P}\Big(\Big|\sum_{j \ne i} (\hat{C}_{ij} - C_{ij})\Big| \ge \epsilon\Big) \le 2\exp\left(-\frac{\alpha^2 \epsilon^2 t}{30 \max(B(\theta)^2, n)}\right) + 2n\exp\left(-\frac{t\alpha^2}{8(n-1)}\right).$$

Consider $i$ fixed. We dissociate the set of tasks answered by each worker from the actual answers and the truth. Let $U = (U_j(t))_{j,t}$ be i.i.d. Bernoulli random variables with $\mathbb{E}(U_j(t)) = \alpha$ and $V = (V_j(t))_{j,t}$ be independent random variables on $\{-1, 1\}$ with $\mathbb{E}(V_j(t)) = \theta_j$. One may readily check that $(X_j(t))_{j,t}$ has the same distribution as $(G(t) U_j(t) V_j(t))_{j,t}$. Hence, in distribution:

$$\sum_{j \ne i} \hat{C}_{ij} = \sum_{j \ne i} \sum_{s=1}^t \frac{U_i(s) U_j(s) V_i(s) V_j(s)}{N_j} \quad \text{with} \quad N_j = \sum_{s=1}^t U_i(s) U_j(s).$$

We prove lemma 3 by conditioning with respect to $U$. Denote by $\mathbb{P}_U$ the conditional probability with respect to $U$. Define $N = \min_{j \ne i} N_j$. We prove that for all $\epsilon \ge 0$:

$$\mathbb{P}_U\Big(\sum_{j \ne i} (\hat{C}_{ij} - C_{ij}) \ge \epsilon\Big) \le e^{-\frac{\epsilon^2}{2\sigma^2}} \quad \text{with} \quad S = \sum_{j \ne i} \Big(\sum_{s=1}^t U_i(s) U_j(s) \theta_j\Big)^2 \quad \text{and} \quad \sigma^2 = \frac{(n-1)N + S}{N^2}.$$

The quantity $\sigma$ is an upper bound on the conditional variance of $\sum_{j \ne i} \hat{C}_{ij}$, which we control by applying Chernoff's inequality to both $N$ and $S$. We get:

$$\mathbb{P}(N \le \alpha^2 t/2) \le (n-1)\, e^{-\frac{t\alpha^2}{8}} \quad \text{and} \quad \mathbb{P}\left(S \ge 2t\alpha^2 \max(B_i(\theta)^2, n-1)\right) \le e^{-\frac{t\alpha^2}{3(n-1)}}.$$

Removing the conditioning on $U$ yields the result. We conclude the proof of theorem 2 by linking the fluctuations of $\hat{C}$ to that of $\hat\theta$ in lemma 4.

Lemma 4 If (a) $\|\hat{C} - C\|_\infty \le \frac{\epsilon}{8} A^2(\theta) \min\left(\frac{1}{2}, \frac{B(\theta)}{64}\right)$ and (b) $\max_i \big|\sum_{j \ne i} (\hat{C}_{ij} - C_{ij})\big| \le \frac{\epsilon A(\theta) B(\theta)}{24}$, then $\|\hat\theta - \theta\|_\infty \le \frac{\epsilon}{2}$.

Relation with prior work. Our upper bound brings improvement over [Zhang et al., 2014] as follows. Two conditions are required for the upper bound of [Zhang et al., 2014][Theorem 4] to hold: (i) it is required that $\max_i |\theta_i| < 1$, and (ii) the number of workers $n$ must grow with both $\delta$ and $t$, and in fact must depend on $a$ and $b$, so that $n$ has to be large if $b$ is smaller than $\sqrt{n}$. Our result does not require condition (i) to hold. Further there are values of $a$ and $b$ such that condition (ii) is never satisfied, for instance $n \ge 5$, $a = \frac{1}{2}$, and $\theta = (a, -a, a, -a, \frac{b}{n-4}, \ldots, \frac{b}{n-4}) \in \Theta_{a,b}$. For [Zhang et al., 2014][Theorem 4] to hold, $n$ should satisfy $n \ge c_3 \sqrt{n \ln(t^2 n/\delta)}$ with $c_3$ a universal constant (see discussion in the supplement) and for $t$ or $1/\delta$ large enough no such $n$ exists. It is noted that for such values of $a$ and $b$, our result remains informative. Our result shows that one can obtain a minimax optimal algorithm for crowdsourcing which does not involve any EM step.

The analysis of [Chao and Dengyong, 2015] also imposes $n$ to grow with $t$ and conditions on the minimal value of $b$. Specifically the first and the last condition of [Chao and Dengyong, 2015][Theorem 3.3] require that $n \ge \ln(t)$ and that $\sum_i \theta_i^2 \ge 6\ln(t)$. Using the previous example (even for $t = 3$), this translates to $b \ge 2\sqrt{n-4}$.

In fact, the value $b = O(\sqrt{n})$ seems to mark the transition between "easy" and "hard" instances of the crowdsourcing problem. Indeed, when $n$ is large and $b$ is large with respect to $\sqrt{n}$, then the majority vote outputs the truth with high probability by the Central Limit Theorem.
7 Numerical Experiments

Synthetic data. We consider three instances: (i) $n = 50$, $t = 10^3$, $\alpha = 0.25$, $\theta_i = a$ if $i \le n/2$ and 0 otherwise; (ii) $n = 50$, $t = 10^4$, $\alpha = 0.25$, $\theta = (1, a, a, 0, \ldots, 0)$; (iii) $n = 50$, $t = 10^4$, $\alpha = 0.25$, $a = 0.9$, $\theta = (a, -a, a, -a, \frac{b}{n-4}, \ldots, \frac{b}{n-4})$.

Instance (i) is an "easy" instance where half of the workers are informative, with $A(\theta) = a^2$ and $B(\theta) = na/2$. Instance (ii) is a "hard" instance, the difficulty being to estimate the absolute value of $\theta$ accurately by identifying the 3 informative workers. Instance (iii) is another "hard" instance, where estimating the sign of the components of $\theta$ is difficult. In particular, one must distinguish $\theta$ from $\theta' = (-a, a, -a, a, \frac{b}{n-4}, \ldots, \frac{b}{n-4})$, otherwise a large error occurs.

Both "hard" instances (ii) and (iii) are inspired by our derivation of the lower bound and constitute the hardest instances in $\Theta_{a,b}$. For each instance we average the performance of algorithms on $10^3$ independent runs and apply a random permutation of the components of $\theta$ before each run. We
consider the following algorithms: KOS (the BP algorithm of [Karger et al., 2011]), Maj (majority voting), Oracle (weighted majority voting with optimal weights, the optimal estimator of the
ground truth), RoE (first spectral algorithm of [Dalvi et al., 2013]), EoR (second spectral algorithm
of [Dalvi et al., 2013]), GKM (spectral algorithm of [Ghosh et al., 2011]), S-EMk (EM algorithm
with spectral initialization of [Zhang et al., 2014] with k iterations of EM) and TE (our algorithm).
We do not present the estimation error of KOS, Maj and Oracle since these algorithms only predict
the ground truth but do not estimate $\theta$ directly.
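The generative model of section 3 makes these instances straightforward to simulate. Below is a minimal sketch of instance (i) with the majority vote as a baseline; the code is our own illustration, with constants taken from the instance description above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, alpha, a = 50, 1000, 0.25, 0.9
theta = np.where(np.arange(n) < n // 2, a, 0.0)      # instance (i): half the workers informative

G = rng.choice([-1, 1], size=t)                      # ground truth, centered
answers = rng.random((t, n)) < alpha                 # which workers answer each task
correct = rng.random((t, n)) < (1 + theta) / 2       # correct with probability (1+theta_i)/2
X = answers * np.where(correct, G[:, None], -G[:, None])

maj = np.sign(X.sum(axis=1))                         # unweighted majority vote
ties = maj == 0
maj[ties] = rng.choice([-1, 1], size=int(ties.sum()))  # random tie-breaking
error = float(np.mean(maj != G))
```

With roughly six answers of reliability $a = 0.9$ per task drowning out the uninformative votes, the majority error stays small, consistent with instance (i) being labeled "easy".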
The results are shown in Tables 1 and 2, where the best results are indicated in bold. The spectral algorithms RoE, EoR and GKM tend to be outperformed by the other algorithms. To perform well, GKM needs $\theta_1$ to be positive and large (see [Ghosh et al., 2011]); whenever $\theta_1 \le 0$ or $|\theta_1|$ is small, GKM tends to make a sign mistake causing a large error. Also the analysis of RoE and EoR assumes that the task-worker graph is a random $D$-regular graph (so that the worker-worker matrix has a large spectral gap). Here this assumption is violated and the practical performance suffers noticeably, so that this limitation is not only theoretical. KOS performs consistently well, and seems immune to sign ambiguity, see instance (iii). Further, while the analysis of KOS also assumes that the task-worker graph is random $D$-regular, its practical performance does not seem sensitive to that assumption. The performance of S-EM is good except when sign estimation is hard (instance (iii), $b = 1$). This seems due to the fact that the initialization of S-EM (see the algorithm description) is not good in this case. Hence the limitation of $b$ being of order $\sqrt{n}$ is not only theoretical but practical as well. In fact (combining our results and the ideas of [Zhang et al., 2014]), this suggests a new algorithm where one uses EM with TE as the initial value of $\theta$.

Further, the number of iterations of EM brings significant gains in some cases and should affect the universal constants in front of the various error bounds (providing theoretical evidence for this seems non trivial). TE performs consistently well except for (i) $a = 0.3$ (which we believe is due to the fact that $t$ is relatively small in that instance). In particular when sign estimation is hard TE clearly outperforms the competing algorithms. This indeed suggests two regimes for sign estimation: $b = O(1)$ (hard regime) and $b = O(\sqrt{n})$ (easy regime).
Real-world data. We next consider 6 publicly available data sets (see [Whitehill et al., 2009,
Zhou et al., 2015] and summary information in Table 3), each consisting of labels provided by workers and the ground truth. The density is the average number of labels per worker and per task, i.e., $\alpha$ in our model.
The worker degree is the average number of tasks labeled by a worker.
First, for data sets with more than 2 possible label values, we split the label values into two groups
and associate them with ?1 and +1 respectively. The partition of the labels is given in Table 3.
Second, we remove any worker who provides less than 10 labels. Our preliminary numerical experiments (not shown here for concision) show that without this, none of the studied algorithms
even match the majority consistently. Workers with low degree create noise and (to the best of our
knowledge) any theoretical analysis of crowdsourcing algorithms assumes that the worker degree
is sufficiently large. The performance of various algorithms is reported in Table 4. No information about the workers' reliability is available so we only report the prediction error $\mathbb{P}(\hat{G} \ne G)$. Further, one cannot compare algorithms to the Oracle, so that the main goal is to outperform the majority.
Apart from "Bird" and "Web", none of the algorithms seem to be able to significantly outperform
the majority and are sometimes noticeably worse. For "Web" which has both the largest number of
labels and a high worker degree, there is a significant gain over the majority vote, and TE, despite
its low complexity, slightly outperforms S-EM and is competitive with KOS and GKM which both
perform best on this dataset.
Instance        RoE    EoR    GKM    S-EM1  S-EM10  TE
(i) a = 0.3     0.200  0.131  0.146  0.100  0.041   0.134
(i) a = 0.9     0.274  0.265  0.271  0.022  0.022   0.038
(ii) a = 0.55   0.551  0.459  0.479  0.045  0.044   0.050
(ii) a = 0.95   0.528  0.522  0.541  0.034  0.033   0.039
(iii) b = 1     0.253  0.222  0.256  0.533  0.389   0.061
(iii) b = √n    0.105  0.075  0.085  0.437  0.030   0.045

Table 1: Synthetic data: estimation error $\mathbb{E}(\|\hat\theta - \theta\|_\infty)$.

Instance        Oracle  Maj    KOS    RoE    EoR    GKM    S-EM1  S-EM10  TE
(i) a = 0.3     0.227   0.298  0.228  0.402  0.398  0.374  0.251  0.228   0.250
(i) a = 0.9     0.004   0.046  0.004  0.217  0.218  0.202  0.004  0.004   0.004
(ii) a = 0.55   0.284   0.441  0.292  0.496  0.497  0.495  0.284  0.285   0.284
(ii) a = 0.95   0.219   0.419  0.220  0.495  0.496  0.483  0.219  0.219   0.219
(iii) b = 1     0.181   0.472  0.185  0.443  0.455  0.386  0.388  0.404   0.192
(iii) b = √n    0.126   0.315  0.133  0.266  0.284  0.207  0.258  0.127   0.128

Table 2: Synthetic data: prediction error $\mathbb{P}(\hat{G} \ne G)$.

Data Set   # Tasks  # Workers  # Labels  Density  Worker Degree  Label Domain
Bird       108      39         4,212     1        108            {0} vs {1}
Dog        807      109        8,070     0.09     74             {0,2} vs {1,3}
Duchenne   159      64         1,221     0.12     19             {0} vs {1}
RTE        800      164        8,000     0.06     49             {0} vs {1}
Temp       462      76         4,620     0.13     61             {1} vs {2}
Web        2,653    177        15,539    0.03     88             {1,2,3} vs {4,5}

Table 3: Summary of the real-world datasets.

Data Set   Maj   KOS   RoE   EoR   GKM   S-EM1  S-EM10  TE
Bird       0.24  0.28  0.29  0.29  0.28  0.20   0.28    0.18
Dog        0.18  0.19  0.18  0.18  0.20  0.24   0.17    0.20
Duchenne   0.28  0.30  0.29  0.28  0.29  0.28   0.30    0.26
RTE        0.10  0.50  0.50  0.89  0.49  0.32   0.16    0.38
Temp       0.06  0.43  0.24  0.10  0.43  0.06   0.06    0.08
Web        0.14  0.02  0.13  0.14  0.02  0.04   0.06    0.03

Table 4: Real-world data: prediction error $\mathbb{P}(\hat{G} \ne G)$.
8 Conclusion
We have derived a minimax error lower bound for the crowdsourcing problem and have proposed
TE, a low-complexity algorithm which matches this lower bound. Our results open several questions
of interest. First, while recent work has shown that one can obtain strong theoretical guarantees by
combining one step of EM with a well-chosen initialization, we have shown that, at least in the case
of binary labels, one can forgo the EM phase altogether and still obtain both minimax optimality
and good numerical performance. It would be interesting to know if this is still possible when there
are more than two possible labels, and also if one can do so using a streaming algorithm.
References
Paul S Albert and Lori E Dodd. A cautionary note on the robustness of latent class models for
estimating diagnostic error without a gold standard. Biometrics, 60(2):427-435, 2004.
Gao Chao and Zhou Dengyong. Minimax optimal convergence rates for estimating ground truth
from crowdsourced labels. Tech Report http://arxiv.org/abs/1310.5764, 2015.
Nilesh Dalvi, Anirban Dasgupta, Ravi Kumar, and Vibhor Rastogi. Aggregating crowdsourced
binary ratings. In Proc. of WWW, 2013.
A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the
EM algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):20-28,
1979.
Arpita Ghosh, Satyen Kale, and R. Preston McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In Proc. of ACM EC, 2011.
Sui L Hui and Steven D Walter. Estimating the error rates of diagnostic tests. Biometrics, pages
167-171, 1980.
David R. Karger, Sewoong Oh, and Devavrat Shah. Iterative learning for reliable crowdsourcing
systems. In Proc. of NIPS, 2011.
David R Karger, Sewoong Oh, and Devavrat Shah. Efficient crowdsourcing for multi-class labeling.
ACM SIGMETRICS Performance Evaluation Review, 41(1):81-92, 2013.
David R Karger, Sewoong Oh, and Devavrat Shah. Budget-optimal task allocation for reliable
crowdsourcing systems. Operations Research, 62(1):1-24, 2014.
Qiang Liu, Jian Peng, and Alex T Ihler. Variational inference for crowdsourcing. In Proc. of NIPS,
2012.
Shmuel Nitzan and Jacob Paroush. Optimal decision rules in uncertain dichotomous choice situations. International Economic Review, pages 289-297, 1982.
Vikas C Raykar, Shipeng Yu, Linda H Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca
Bogoni, and Linda Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, 2010.
Lloyd Shapley and Bernard Grofman. Optimizing group judgmental accuracy in the presence of
interdependencies. Public Choice, 43(3):329–343, 1984.
Padhraic Smyth, Usama Fayyad, Michael Burl, Pietro Perona, and Pierre Baldi. Inferring ground
truth from subjective labelling of Venus images. In Proc. of NIPS, 1995.
Alexandre B. Tsybakov. Introduction to non-parametric estimation. Springer, 2008.
Dong Wang, Tarek Abdelzaher, Lance Kaplan, and Charu C Aggarwal. Recursive fact-finding: A
streaming approach to truth estimation in crowdsourcing applications. In Proc. of IEEE ICDCS,
2013.
Peter Welinder and Pietro Perona. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In Proc. of IEEE CVPR (Workshops), 2010.
Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R Movellan, and Paul L Ruvolo. Whose vote
should count more: Optimal integration of labels from labelers of unknown expertise. In Proc. of
NIPS, 2009.
Yuchen Zhang, Xi Chen, Dengyong Zhou, and Michael I Jordan. Spectral methods meet EM: A
provably optimal algorithm for crowdsourcing. In Proc. of NIPS, 2014.
Dengyong Zhou, Qiang Liu, John C Platt, Christopher Meek, and Nihar B Shah. Regularized
minimax conditional entropy for crowdsourcing. Tech Report, http://arxiv.org/pdf/1503.07240,
2015.
Estimating Accuracy from Unlabeled Data:
A Probabilistic Logic Approach
Emmanouil A. Platanios
Carnegie Mellon University
Pittsburgh, PA
[email protected]
Hoifung Poon
Microsoft Research
Redmond, WA
[email protected]
Tom M. Mitchell
Carnegie Mellon University
Pittsburgh, PA
[email protected]
Eric Horvitz
Microsoft Research
Redmond, WA
[email protected]
Abstract
We propose an efficient method to estimate the accuracy of classifiers using only
unlabeled data. We consider a setting with multiple classification problems where
the target classes may be tied together through logical constraints. For example, a
set of classes may be mutually exclusive, meaning that a data instance can belong to
at most one of them. The proposed method is based on the intuition that: (i) when
classifiers agree, they are more likely to be correct, and (ii) when the classifiers
make a prediction that violates the constraints, at least one classifier must be making
an error. Experiments on four real-world data sets produce accuracy estimates
within a few percent of the true accuracy, using solely unlabeled data. Our models
also outperform existing state-of-the-art solutions in both estimating accuracies,
and combining multiple classifier outputs. The results emphasize the utility of
logical constraints in estimating accuracy, thus validating our intuition.
1 Introduction
Estimating the accuracy of classifiers is central to machine learning and many other fields. Accuracy
is defined as the probability of a system's output agreeing with the true underlying output, and thus
is a measure of the system's performance. Most existing approaches to estimating accuracy are
supervised, meaning that a set of labeled examples is required for the estimation. Being able to
estimate the accuracies of classifiers using only unlabeled data is important for many applications,
including: (i) any autonomous learning system that operates under no supervision, as well as (ii)
crowdsourcing applications, where multiple workers provide answers to questions, for which the
correct answer is unknown. Furthermore, tasks which involve making several predictions which are
tied together by logical constraints are abundant in machine learning. As an example, we may have
two classifiers in the Never Ending Language Learning (NELL) project [Mitchell et al., 2015] which
predict whether noun phrases represent animals or cities, respectively, and we know that something
cannot be both an animal and a city (i.e., the two categories are mutually exclusive). In such cases, it
is not hard to observe that if the predictions of the system violate at least one of the constraints, then
at least one of the system?s components must be wrong. This paper extends this intuition and presents
an unsupervised approach (i.e., only unlabeled data are needed) for estimating accuracies that is able
to use information provided by such logical constraints. Furthermore, the proposed approach is also
able to use any available labeled data, thus also being applicable to semi-supervised settings.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 content: panels showing example per-category classifier outputs (animal, fish, bird, ...), logical constraints such as SUB(animal, fish) and ME(fish, bird), the resulting ground predicates and ground rules, the probabilistic inference step, and the outputs (classifier error rates and combined predictions); see the caption below.]
Figure 1: System overview diagram. The classifier outputs (corresponding to the function approximation outputs) and the logical constraints make up the system inputs. The representation of the
logical constraints in terms of the function approximation error rates is described in section 3.2. In
the logical constraints box, blue arrows represent subsumption constraints, and labels connected by
a red dashed line represent a mutually exclusive set. Given the inputs, the first step is grounding
(computing all feasible ground predicates and rules that the system will need to perform inference
over) and is described in section 3.3.2. In the ground rules box, ∧, ∨, and → correspond to the logic AND,
OR, and IMPLIES. Then, inference is performed in order to infer the most likely truth values of the
unobserved ground predicates, given the observed ones and the ground rules (described in detail in
section 3.3). The results constitute the outputs of our system and they include: (i) the estimated error
rates, and (ii) the most likely target function outputs (i.e., combined predictions).
We consider a "multiple approximations" problem setting in which we have several different approximations, $\hat{f}^d_1, \ldots, \hat{f}^d_{N^d}$, to a set of target boolean classification functions, $f^d : \mathcal{X} \mapsto \{0, 1\}$ for $d = 1, \ldots, D$, and we wish to know the true accuracies of each of these different approximations,
using only unlabeled data, as well as the response of the true underlying functions, f d . Each value
of d characterizes a different domain (or problem setting) and each domain can be interpreted as a
class or category of objects. Similarly, the function approximations can be interpreted as classifying
inputs as belonging or not to these categories. We consider the case where we may have a set of
logical constraints defined over the domains. Note that, in contrast with related work, we allow the
function approximations to provide soft responses in the interval [0, 1] (as opposed to only allowing
binary responses, i.e., they can now return the probability for the response being 1), thus allowing
modeling of their "certainty". As an example of this setting, to which we will often refer throughout
this paper, let us consider a part of NELL, where the input space of our functions, X , is the space of
all possible noun phrases (NPs). Each target function, $f^d$, returns a boolean value indicating whether
the input NP belongs to a category, such as "city" or "animal", and these categories correspond to our
domains. There also exist logical constraints between these categories that may be hard (i.e., strongly
enforced) or soft (i.e., enforced in a probabilistic manner). For example, "city" and "animal" may
be mutually exclusive (i.e., if an object belongs to "city", then it is unlikely that it also belongs to
"animal"). In this case, the function approximations correspond to different classifiers (potentially
using a different set of features / different views of the input data), which may return a probability
for a NP belonging to a class, instead of a binary value. Our goal is to estimate the accuracies of
these classifiers using only unlabeled data. In order to quantify accuracy, we define the error rate of
classifier $j$ in domain $d$ as $e^d_j \triangleq P_{\mathcal{D}}[\hat{f}^d_j(X) \neq f^d(X)]$, for the binary case, for $j = 1, \ldots, N^d$, where
$\mathcal{D}$ is the true underlying distribution of the input data. Note that accuracy is equal to one minus error rate. This definition may be relaxed for the case where $\hat{f}^d_j(X) \in [0, 1]$, representing a probability: $e^d_j \triangleq \hat{f}^d_j(X)\, P_{\mathcal{D}}[f^d(X) \neq 1] + (1 - \hat{f}^d_j(X))\, P_{\mathcal{D}}[f^d(X) \neq 0]$, which resembles an expected probability of error. Even though our work is motivated by the use of logical constraints defined over the
domains, we also consider the setting where there are no such constraints.
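As a concrete illustration of the relaxed definition above, the expected soft error rate can be computed from samples as follows. This is a minimal sketch with our own variable names; the true labels are assumed known here only for illustration, whereas the paper's whole point is to estimate error rates without them:

```python
# Sketch: compute the relaxed error rate of one probabilistic classifier,
#   e = E[ fhat(X) * 1{f(X) != 1} + (1 - fhat(X)) * 1{f(X) != 0} ],
# where `probs` are the soft outputs fhat(X) in [0, 1] and `labels` are
# the true target values f(X) (assumed known here, for illustration only).

def soft_error_rate(probs, labels):
    total = 0.0
    for p, y in zip(probs, labels):
        # If the truth is 0, outputting 1 with probability p is an error;
        # if the truth is 1, outputting 0 with probability (1 - p) is an error.
        total += p * (y != 1) + (1 - p) * (y != 0)
    return total / len(probs)

probs = [0.9, 0.8, 0.2, 0.4]
labels = [1, 1, 0, 0]
print(soft_error_rate(probs, labels))  # averages the per-example error mass
```

Note that a perfectly confident, perfectly correct classifier gets an error rate of exactly 0 under this definition, and a perfectly confident, always-wrong one gets 1.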
2 Related Work
The literature covers many projects related to estimating accuracy from unlabeled data. The setting
we are considering was previously explored by Collins and Singer [1999], Dasgupta et al. [2001],
Bengio and Chapados [2003], Madani et al. [2004], Schuurmans et al. [2006], Balcan et al. [2013],
and Parisi et al. [2014], among others. Most of their approaches made some strong assumptions,
such as assuming independence given the outputs, or assuming knowledge of the true distribution
of the outputs. None of the previous approaches incorporated knowledge in the form of logical
constraints. Collins and Huynh [2014] review many methods that were proposed for estimating the
accuracy of medical tests in the absence of a gold standard. This is effectively the same problem that
we are considering, applied to the domains of medicine and biostatistics. They present a method
for estimating the accuracy of tests, where these tests are applied in multiple different populations
(i.e., different input data), while assuming that the accuracies of the tests are the same across the
populations, and that the test results are independent conditional on the true ?output?. These are
similar assumptions to the ones made by several of the other papers already mentioned, but the idea
of applying the tests to multiple populations is new and interesting. Platanios et al. [2014] proposed a
method relaxing some of these assumptions. They formulated the problem of estimating the error
rates of several approximations to a function as an optimization problem that uses agreement rates
of these approximations over unlabeled data. Dawid and Skene [1979] were the first to formulate
the problem in terms of a graphical model and Moreno et al. [2015] proposed a nonparametric
extension to that model applied to crowdsourcing. Tian and Zhu [2015] proposed an interesting
max-margin majority voting scheme for combining classifier outputs, also applied to crowdsourcing.
However, all of these approaches were outperformed by the models of Platanios et al. [2016], which
are most similar to the work of Dawid and Skene [1979] and Moreno et al. [2015]. To the best of
our knowledge, our work is the first to use logic for estimating accuracy from unlabeled data and, as
shown in our experiments, outperforms all competing methods. Logical constraints provide additional
information to the estimation method and this partially explains the performance boost.
3 Proposed Method
Our method consists of: (i) defining a set of logic rules for modeling the logical constraints between
the f d and the f?jd , in terms of the error rates edj and the known logical constraints, and (ii) performing
probabilistic inference using these rules as priors, in order to obtain the most likely values of the
edj and the f d , which are not observed. The intuition behind the method is that if the constraints
are violated for the function approximation outputs, then at least one of these functions has to be
making an error. For example, in the NELL case, if two function approximations respond that a
NP belongs to the ?city? and the ?animal? categories, respectively, then at least one of them has to
be making an error. We define the form of the logic rules in section 3.2 and then describe how to
perform probabilistic inference over them in section 3.3. An overview of our system is shown in
figure 1. In the next section we introduce the notion of probabilistic logic, which fuses classical logic
with probabilistic reasoning and that forms the backbone of our method.
3.1 Probabilistic Logic
In classical logic, we have a set of predicates (e.g., mammal(x) indicating whether x is a mammal,
where x is a variable) and a set of rules defined in terms of these predicates (e.g., mammal(x) →
animal(x), where "→" can be interpreted as "implies"). We refer to predicates and rules defined for
a particular instantiation of their variables as ground predicates and ground rules, respectively (e.g.,
mammal(whale) and mammal(whale) → animal(whale)). These ground predicates and rules take
boolean values (i.e., are either true or false; for rules, the value is true if the rule holds). Our goal
is to infer the most likely values for a set of unobserved ground predicates, given a set of observed
ground predicate values and logic rules.
In probabilistic logic, we are instead interested in inferring the probabilities of these ground predicates
and rules being true, given a set of observed ground predicates and rules. Furthermore, the truth
values of ground predicates and rules may be continuous and lie in the interval [0, 1], instead of being
boolean, representing the probability that the corresponding ground predicate or rule is true. In this
case, boolean logic operators, such as AND (∧), OR (∨), NOT (¬), and IMPLIES (→), need to be
redefined. For the next section, we will assume their classical logical interpretation.
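Before moving to the soft setting, the classical boolean semantics can be made concrete with a tiny ground-rule evaluator (a sketch; the knowledge-base entries below are made-up examples, not from the paper):

```python
# Sketch: classical (boolean) evaluation of ground rules such as
#   mammal(whale) -> animal(whale)
# over a toy set of ground-predicate truth values.

facts = {("mammal", "whale"): True, ("animal", "whale"): True,
         ("mammal", "rock"): False, ("animal", "rock"): False}

def rule_holds(body, head):
    # A classical implication body -> head holds unless the body is true
    # and the head is false.
    return (not facts[body]) or facts[head]

print(rule_holds(("mammal", "whale"), ("animal", "whale")))
print(rule_holds(("mammal", "rock"), ("animal", "rock")))
```

The second rule holds vacuously, since its body is false; this is exactly the behavior the soft operators of section 3.3.1 relax to the interval [0, 1].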
3.2 Model
As described earlier, our goal is to estimate the true accuracies of each of the function approximations,
$\hat{f}^d_1, \ldots, \hat{f}^d_{N^d}$, for $d = 1, \ldots, D$, using only unlabeled data, as well as the response of the true
underlying functions, $f^d$. We now define the logic rules that we perform inference over in order to
achieve that goal. The rules are defined in terms of the following predicates, for d = 1, . . . , D:
• Function Approximation Outputs: $\hat{f}^d_j(X)$, defined over all approximations $j = 1, \ldots, N^d$, and inputs $X \in \mathcal{X}$ for which the corresponding function approximation has provided a response. Note that the values of these ground predicates lie in $[0, 1]$ due to their probabilistic nature (i.e., they do not have to be binary, as in related work), and some of them are observed.
• Target Function Outputs: $f^d(X)$, defined over all inputs $X \in \mathcal{X}$. Note that, in the purely unsupervised setting, none of these ground predicate values are observed, in contrast with the semi-supervised setting.
• Function Approximation Error Rates: $e^d_j$, defined over all approximations $j = 1, \ldots, N^d$. Note that none of these ground predicate values are observed. The primary goal of this paper is to infer their values.
The goal of the logic rules we define is two-fold: (i) to combine the function approximation outputs
in a single output value, and (ii) to account for the logical constraints between the domains. We aim
to achieve both goals while accounting for the error rates of the function approximations. We first
define a set of rules that relate the function approximation outputs with the true underlying function
output. We call this set of rules the ensemble rules and we describe them in the following section.
We then discuss how to account for the logical constraints between the domains.
3.2.1 Ensemble Rules
This first set of rules specifies a relation between the target function outputs, $f^d(X)$, and the function approximation outputs, $\hat{f}^d_j(X)$, independent of the logical constraints:
$$\hat{f}^d_j(X) \wedge \neg e^d_j \rightarrow f^d(X), \qquad \neg\hat{f}^d_j(X) \wedge \neg e^d_j \rightarrow \neg f^d(X), \quad (1)$$
$$\hat{f}^d_j(X) \wedge e^d_j \rightarrow \neg f^d(X), \quad \text{and} \quad \neg\hat{f}^d_j(X) \wedge e^d_j \rightarrow f^d(X), \quad (2)$$
for $d = 1, \ldots, D$, $j = 1, \ldots, N^d$, and $X \in \mathcal{X}$. In words: (i) the first set of rules state that if a
function approximation is not making an error, its output should match the output of the target
function, and (ii) the second set of rules state that if a function approximation is making an error, its
output should not match the output of the target function.
An interesting point to make is that the ensemble rules effectively constitute a weighted majority
vote for combining the function approximation outputs, where the weights are determined by the
error rates of the approximations. These error rates are implicitly computed based on agreement
between the function approximations. This is related to the work of Platanios et al. [2014]. There,
the authors try to answer the question of whether consistency in the outputs of the approximations
implies correctness. They directly use the agreement rates of the approximations in order to estimate
their error rates. Thus, there exists an interesting connection in our work in that we also implicitly
use agreement rates to estimate error rates, and our results, even though improving upon theirs
significantly, reinforce their claim.
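The connection to weighted majority voting can be made concrete. One standard instantiation weighs each approximation by the log-odds of its (estimated) error rate; this is an illustrative sketch with our own naming, not the paper's actual inference procedure:

```python
import math

# Sketch: combine binary approximation outputs using weights derived
# from their estimated error rates, as in classical weighted majority
# voting. Lower error rate => larger log-odds weight.

def weighted_majority(votes, error_rates):
    # votes: list of 0/1 outputs, one per approximation
    # error_rates: estimated error rates e_j in (0, 1)
    score = 0.0
    for v, e in zip(votes, error_rates):
        w = math.log((1 - e) / e)      # log-odds weight
        score += w if v == 1 else -w
    return 1 if score > 0 else 0

# Two accurate voters saying 1 outweigh one near-chance voter saying 0.
print(weighted_majority([1, 1, 0], [0.1, 0.2, 0.45]))
```

In this paper the analogous weights are not computed explicitly; they emerge implicitly from inference over the ensemble rules, with agreement between approximations driving the error-rate estimates.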
Identifiability. Let us consider flipping the values of all error rates (i.e., setting them to one minus
their value) and the target function responses. Then, the ensemble logic rules would evaluate to
the same value as before (e.g., satisfied or unsatisfied). Therefore, the error rates and the target
function values are not identifiable when there are no logical constraints. As we will see in the next
section, the constraints may sometimes help resolve this issue as, often, the corresponding logic
rules do not exhibit that kind of symmetry. However, for cases where that symmetry exists, we
can resolve it by assuming that most of the function approximations have error rates better than
chance (i.e., < 0.5). This can be done by considering the two rules: (i) $\hat{f}^d_j(X) \rightarrow f^d(X)$, and (ii) $\neg\hat{f}^d_j(X) \rightarrow \neg f^d(X)$, for $d = 1, \ldots, D$, $j = 1, \ldots, N^d$, and $X \in \mathcal{X}$. Note that all that these rules imply is that $\hat{f}^d_j(X) = f^d(X)$ (i.e., they represent the prior belief that function approximations are
correct). As will be discussed in section 3.3, in probabilistic frameworks where rules are weighted
with a real value in [0, 1], these rules will be given a weight that represents their significance or
strength. In such a framework, we can consider using a smaller weight for these prior belief rules,
compared to the remainder of the rules, which would simply correspond to a regularization weight.
This weight can be a tunable or even learnable parameter.
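The flip symmetry behind the identifiability issue can be checked mechanically on the boolean version of the ensemble rules; a small verification sketch (function and variable names are ours):

```python
# Sketch: verify the identifiability symmetry of the boolean ensemble
# rules (1)-(2): flipping the error predicate e and the target value f
# leaves the conjunction of all four rules unchanged.

def implies(a, b):
    return (not a) or b

def ensemble_rules_hold(fhat, e, f):
    return (implies(fhat and not e, f)            # rule set (1)
            and implies((not fhat) and not e, not f)
            and implies(fhat and e, not f)        # rule set (2)
            and implies((not fhat) and e, f))

# Exhaustive check over all truth assignments.
for fhat in (False, True):
    for e in (False, True):
        for f in (False, True):
            assert ensemble_rules_hold(fhat, e, f) == \
                   ensemble_rules_hold(fhat, not e, not f)
print("flip symmetry verified")
```

The check passes because flipping (e, f) maps rule set (1) onto rule set (2) and vice versa, which is exactly why the prior-belief rules (or the constraints) are needed to break the tie.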
3.2.2 Constraints
The space of possible logical constraints is huge; we do not deal with every possible constraint in
this paper. Instead, we focus our attention on two types of constraints that are abundant in structured
prediction problems in machine learning, and which are motivated by the use of our method in the
context of NELL:
• Mutual Exclusion: If domains $d_1$ and $d_2$ are mutually exclusive, then $f^{d_1} = 1$ implies that $f^{d_2} = 0$. For example, in the NELL setting, if a NP belongs to the "city" category, then it cannot also belong to the "animal" category.
• Subsumption: If $d_1$ subsumes $d_2$, then if $f^{d_2} = 1$, we must have that $f^{d_1} = 1$. For example, in the NELL setting, if a NP belongs to the "cat" category, then it must also belong to the "animal" category.
This set of constraints is sufficient to model most ontology constraints between categories in NELL,
as well as a big subset of the constraints more generally used in practice.
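These two constraint types can be represented very simply, e.g. as sets of ordered domain pairs, with a helper that flags hard violations in a joint assignment of target values (an illustrative sketch; the toy ontology is ours):

```python
# Sketch: represent mutual-exclusion (ME) and subsumption (SUB)
# constraints as pair sets and flag assignments of target values f^d
# that violate them. The toy ontology below is illustrative only.

ME = {("city", "animal"), ("animal", "city")}   # symmetric predicate
SUB = {("animal", "cat")}                        # animal subsumes cat

def violations(assignment):
    bad = []
    for d1, d2 in ME:
        # ME violated when both mutually exclusive domains are 1.
        if assignment.get(d1) == 1 and assignment.get(d2) == 1:
            bad.append(("ME", d1, d2))
    for d1, d2 in SUB:
        # SUB violated when the subsumed domain is 1 but the parent is 0.
        if assignment.get(d2) == 1 and assignment.get(d1) == 0:
            bad.append(("SUB", d1, d2))
    return bad

print(violations({"city": 1, "animal": 1, "cat": 0}))
print(violations({"city": 0, "animal": 1, "cat": 1}))
```

The first assignment violates mutual exclusion (reported once per ordered pair, since ME is stored symmetrically); the second is consistent.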
Mutual Exclusion Rule. We first define the predicate $\text{ME}(d_1, d_2)$, indicating that domains $d_1$ and $d_2$ are mutually exclusive.¹ This predicate has value 1 if domains $d_1$ and $d_2$ are mutually exclusive, and value 0 otherwise, and its truth value is observed for all values of $d_1$ and $d_2$. Furthermore, note that it is symmetric, meaning that if $\text{ME}(d_1, d_2)$ is true, then $\text{ME}(d_2, d_1)$ is also true. We define the mutual exclusion logic rule as:
$$\text{ME}(d_1, d_2) \wedge \hat{f}^{d_1}_j(X) \wedge f^{d_2}(X) \rightarrow e^{d_1}_j, \quad (3)$$
for $d_1 \neq d_2 = 1, \ldots, D$, $j = 1, \ldots, N^{d_1}$, and $X \in \mathcal{X}$. In words, this rule says that if $f^{d_2}(X) = 1$ and domains $d_1$ and $d_2$ are mutually exclusive, then $\hat{f}^{d_1}_j(X)$ must be equal to 0, as it is an approximation to $f^{d_1}(X)$ and ideally we want that $\hat{f}^{d_1}_j(X) = f^{d_1}(X)$. If that is not the case, then $\hat{f}^{d_1}_j$ must be making an error.
Subsumption Rule. We first define the predicate $\text{SUB}(d_1, d_2)$, indicating that domain $d_1$ subsumes domain $d_2$. This predicate has value 1 if domain $d_1$ subsumes domain $d_2$, and 0 otherwise, and its truth value is always observed. Note that, unlike mutual exclusion, this predicate is not symmetric.
We define the subsumption logic rule as:
$$\text{SUB}(d_1, d_2) \wedge \neg\hat{f}^{d_1}_j(X) \wedge f^{d_2}(X) \rightarrow e^{d_1}_j, \quad (4)$$
for $d_1, d_2 = 1, \ldots, D$, $j = 1, \ldots, N^{d_1}$, and $X \in \mathcal{X}$. In words, this rule says that if $f^{d_2}(X) = 1$ and $d_1$ subsumes $d_2$, then $\hat{f}^{d_1}_j(X)$ must be equal to 1, as it is an approximation to $f^{d_1}(X)$ and ideally we want that $\hat{f}^{d_1}_j(X) = f^{d_1}(X)$. If that is not the case, then $\hat{f}^{d_1}_j$ must be making an error.
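Rules (3) and (4) can be grounded mechanically. For hard-thresholded (0/1) outputs, the following sketch lists which error predicates the ground rules force to be true; the toy domains and outputs are ours:

```python
# Sketch: ground the mutual-exclusion rule (3) and subsumption rule (4)
# for hard 0/1 outputs, collecting the error predicates e_j^{d1} that the
# rules force to be true under a tentative target assignment.

def forced_errors(fhat, f, me_pairs, sub_pairs):
    # fhat[(d, j)] = output of approximation j in domain d (0 or 1)
    # f[d]         = tentative target value for domain d (0 or 1)
    forced = set()
    for (d, j), out in fhat.items():
        for d1, d2 in me_pairs:
            if d1 == d and out == 1 and f.get(d2) == 1:
                forced.add((d, j))            # rule (3) fires
        for d1, d2 in sub_pairs:
            if d1 == d and out == 0 and f.get(d2) == 1:
                forced.add((d, j))            # rule (4) fires
    return forced

fhat = {("city", 1): 1, ("animal", 1): 0}
f = {"animal": 1, "cat": 1, "city": 0}
me = [("city", "animal"), ("animal", "city")]
sub = [("animal", "cat")]
print(forced_errors(fhat, f, me, sub))
```

Here the "city" approximation is forced into error by mutual exclusion (it says 1 while "animal" is 1), and the "animal" approximation by subsumption (it says 0 while the subsumed "cat" is 1).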
Having defined all of the logic rules that comprise our model, we now describe how to perform
inference under such a probabilistic logic model, in the next section. Inference in this case comprises
determining the most likely truth values of the unobserved ground predicates, given the observed
predicates and the set of rules that comprise our model.
¹ A set of mutually-exclusive domains can be reduced to pairwise ME constraints for all pairs in that set.
3.3 Inference
In section 3.1 we introduced the notion of probabilistic logic and we defined our model in terms
of probabilistic predicates and rules. In this section we discuss in more detail the implications of
using probabilistic logic, and the way in which we perform inference in our model. There exist
various probabilistic logic frameworks, each making different assumptions. In what is arguably the
most popular such framework, Markov Logic Networks (MLNs) [Richardson and Domingos, 2006],
inference is performed over a constructed Markov Random Field (MRF) based on the model logic
rules. Each potential function in the MRF corresponds to a ground rule and takes an arbitrary positive
value when the ground rule is satisfied and the value 0 otherwise (the positive values are often called
rule weights and can be either fixed or learned). Each variable is boolean-valued and corresponds
to a ground predicate. MLNs are thus a direct probabilistic extension to boolean logic. It turns out
that due to the discrete nature of the variables in MLNs, inference is NP-hard and can thus be very
inefficient. Part of our goal in this paper is for our method to be applicable at a very large scale (e.g.,
for systems like NELL). We thus resorted to Probabilistic Soft Logic (PSL) [Bröcheler et al., 2010],
which can be thought of as a convex relaxation of MLNs.
Note that the model proposed in the previous section, which is also the primary contribution of this
paper, can be used with various probabilistic logic frameworks. Our choice, which is described in
this section, was motivated by scalability. One could just as easily perform inference for our model
using MLNs, or any other such framework.
3.3.1 Probabilistic Soft Logic (PSL)
In PSL, models, which are composed of a set of logic rules, are represented using hinge-loss
Markov random fields (HL-MRFs) [Bach et al., 2013]. In this case, inference amounts to solving a
convex optimization problem. Variables of the HL-MRF correspond to soft truth values of ground
predicates. Specifically, a HL-MRF, $f$, is a probability density over $m$ random variables, $\mathbf{Y} = \{Y_1, \ldots, Y_m\}$ with domain $[0, 1]^m$, corresponding to the unobserved ground predicate values. Let $\mathbf{X} = \{X_1, \ldots, X_n\}$ be an additional set of variables with known values in the domain $[0, 1]^n$, corresponding to observed ground predicate values. Let $\Phi = \{\phi_1, \ldots, \phi_k\}$ be a finite set of $k$ continuous potential functions of the form $\phi_j(\mathbf{X}, \mathbf{Y}) = (\max\{\ell_j(\mathbf{X}, \mathbf{Y}), 0\})^{p_j}$, where $\ell_j$ is a linear function of $\mathbf{X}$ and $\mathbf{Y}$, and $p_j \in \{1, 2\}$. We will soon see how these functions relate to the ground rules of the model. Given the above, for a set of non-negative free parameters $\lambda = \{\lambda_1, \ldots, \lambda_k\}$ (i.e., the equivalent of MLN rule weights), the HL-MRF density is defined as:
$$f(\mathbf{Y}) = \frac{1}{Z} \exp\bigg(-\sum_{j=1}^{k} \lambda_j \phi_j(\mathbf{X}, \mathbf{Y})\bigg), \quad (5)$$
where Z is a normalizing constant so that f is a proper probability density function. Our goal is to
infer the most probable explanation (MPE), which consists of the values of Y that maximize the
likelihood of our data². This is equivalent to solving the following convex problem:

    min_{Y ∈ [0,1]^m}  ∑_{j=1}^{k} λ_j φ_j(X, Y).        (6)
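To make equations 5 and 6 concrete, the MPE objective is just a weighted sum of hinge-loss potentials. The following is a minimal sketch; the rule encoding below is our own toy convention, not the actual PSL implementation:

```python
def hinge_potential(ell_value, p):
    """phi_j(X, Y) = (max{ell_j(X, Y), 0})^{p_j} with p_j in {1, 2}."""
    return max(ell_value, 0.0) ** p

def map_objective(Y, X, rules, weights):
    """MPE objective of equation 6: a weighted sum of hinge-loss
    potentials. Each rule is a pair (ell, p) where ell(X, Y) is a
    linear function of the observed X and unobserved Y."""
    return sum(w * hinge_potential(ell(X, Y), p)
               for w, (ell, p) in zip(weights, rules))
```

For instance, a single rule X ⇒ Y would be encoded with ell(X, Y) = X[0] − Y[0], which is exactly its distance to satisfiability under the Łukasiewicz semantics defined below.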
Each variable X_i or Y_i corresponds to a soft truth value (i.e., Y_i ∈ [0, 1]) of a ground predicate.
Each function ℓ_j corresponds to a measure of the distance to satisfiability of a logic rule. The set
of rules used is what characterizes a particular PSL model. The rules represent prior knowledge
we might have about the problem we are trying to solve. For our model, these rules were defined
in section 3.2. As mentioned above, variables are allowed to take values in the interval [0, 1]. We
thus need to define what we mean by the truth value of a rule and its distance to satisfiability. For
the logical operators AND (∧), OR (∨), NOT (¬), and IMPLIES (⇒), we use the definitions from
Łukasiewicz logic [Klir and Yuan, 1995]: P ∧ Q ≜ max{P + Q − 1, 0}, P ∨ Q ≜ min{P + Q, 1},
¬P ≜ 1 − P, and P ⇒ Q ≜ min{1 − P + Q, 1}. Note that these operators are a simple continuous
relaxation of the corresponding boolean operators, in that for boolean-valued variables, with 0
corresponding to FALSE and 1 to TRUE, they are equivalent. By writing all logic rules in the form
B_1 ∧ B_2 ∧ · · · ∧ B_s ⇒ H_1 ∨ H_2 ∨ · · · ∨ H_t, it is easy to observe that the distance to satisfiability
² As opposed to performing marginal inference, which aims to infer the marginal distribution of these values.
[Figure 2: a diagram over the labels Animal, Vertebrate, Invertebrate, Bird, Fish, Mammal, Arthropod, Mollusk, and Location, City, Country, River, Lake, with subsumption and mutual-exclusion constraints as described in the caption below.]
Figure 2: Illustration of the NELL-11 data set constraints. Each box represents a label, each blue
arrow represents a subsumption constraint, and each set of labels connected by a red dashed line
represents a mutually exclusive set of labels. For example, Animal subsumes Vertebrate and
Bird, Fish, and Mammal are mutually exclusive.
(i.e., 1 minus its truth value) of a rule evaluates to max{0, ∑_{i=1}^{s} B_i − ∑_{j=1}^{t} H_j + 1 − s}. Note
that any set of rules of first-order predicate logic can be represented in this form [Bröcheler et al.,
2010], and that minimizing this quantity amounts to making the rule "more satisfied".
In order to complete our system description we need to describe: (i) how to obtain a set of ground
rules and predicates from a set of logic rules of the form presented in section 3.2 and a set of
observed ground predicates, and define the objective function of equation 6, and (ii) how to solve
the optimization problem of that equation to obtain the most likely truth values for the unobserved
ground predicates. These two steps are described in the following two sections.
3.3.2 Grounding
Grounding is the process of computing all possible groundings of each logic rule to construct the
inference problem variables and the objective function. As already described in section 3.3.1, the
variables X and Y correspond to ground predicates and the functions `j correspond to ground rules.
The easiest way to ground a set of logic rules would be to go through each one and create a ground
rule instance of it, for each possible value of its arguments. However, if a rule depends on n variables
and each variable can take m possible values, then m^n ground rules would be generated. For example,
the mutual exclusion rule of equation 3 depends on d_1, d_2, j, and X, meaning that D^2 · N^{d_1} · |X|
ground rule instances would be generated, where |X| denotes the number of values that X can
take. The same applies to predicates; f̂_j^{d_1}(X) would result in D · N^{d_1} · |X| ground instances,
which would become variables in our optimization problem. This approach would thus result in a
huge optimization problem rendering it impractical when dealing with large scale problems such as
NELL. The key to scaling up the grounding procedure is to notice that many of the possible ground
rules are always satisfied (i.e., have distance to satisfiability equal to 0), irrespective of the values
of the unobserved ground predicates that they depend upon. These ground rules would therefore
not influence the optimization problem solution and can be safely ignored. Since in our model we
are only dealing with a small set of predefined logic rule forms, we devised a heuristic grounding
procedure that only generates those ground rules and predicates that may influence the optimization.
Our grounding algorithm is shown in the supplementary material and is based on the idea that a
ground rule is only useful if the function approximation predicate that appears in its body is observed.
It turns out that this approach is orders of magnitude faster than existing state-of-the-art solutions
such as the grounding solution used by Niu et al. [2011].
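A minimal sketch of this lazy-grounding idea for the mutual-exclusion rule follows; the data layout and helper name are hypothetical, not the actual algorithm from the supplementary material:

```python
def ground_mutual_exclusion(observed, mutex_labels):
    """Generate mutual-exclusion ground rules only where classifier
    outputs are actually observed.

    `observed` maps (label, noun_phrase) -> soft truth value of an
    observed function-approximation output. Following the text's
    heuristic, a ground rule is emitted only when the observed
    predicates in its body exist, so the optimization stays small.
    """
    ground_rules = []
    keys = sorted(observed)
    for i, (d1, x1) in enumerate(keys):
        for d2, x2 in keys[i + 1:]:
            # Pair two observed labels on the same noun phrase.
            if x1 == x2 and d1 != d2 and d1 in mutex_labels and d2 in mutex_labels:
                ground_rules.append((d1, d2, x1))
    return ground_rules
```

A full grounding would instead enumerate every label pair for every noun phrase, observed or not, which is exactly the blow-up described above.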
3.3.3 Solving the Optimization Problem
For large problems, the objective function of equation 6 will be a sum of potentially millions of
terms, each one of which only involving a small set of variables. In PSL, the method used to solve
this optimization problem is based on the consensus Alternating Directions Method of Multipliers
(ADMM). The approach consists of handling each term in that sum as a separate optimization
problem using copies of the corresponding variables, while adding the constraint that all copies of
each variable must be equal. This allows for solving the subproblems completely in parallel and
is thus scalable. The algorithm is summarized in the supplementary material. More details on this
algorithm and on its convergence properties can be found in the latest PSL paper [Bach et al., 2015].
We propose a stochastic variation of this consensus ADMM method that is even more scalable.
During each iteration, instead of solving all subproblems and aggregating their solutions in the
consensus variables, we sample K ≪ k subproblems to solve. The probability of sampling each
Table 1: Mean absolute deviation (MAD) of the error rate rankings and the error rate estimates (lower
MAD is better), and area under the curve (AUC) of the label estimates (higher AUC is better). The
best results for each experiment, across all methods, are shown in bolded text and the results for our
proposed method are highlighted in blue.
              NELL-7                              NELL-11
          MADerror rank  MADerror  AUCtarget   MADerror rank  MADerror  AUCtarget
MAJ           7.71        0.238     0.372          7.54        0.303     0.447
AR-2         12.0         0.261     0.378         10.8         0.350     0.455
AR           11.4         0.260     0.374         11.1         0.350     0.477
BEE           6.00        0.231     0.314          5.69        0.291     0.368
CBEE          6.00        0.232     0.314          5.69        0.291     0.368
HCBEE         5.03        0.229     0.452          5.14        0.324     0.462
LEE           3.71        0.152     0.508          4.77        0.180     0.615

              uNELL-All (values ×10⁻²)            uNELL-10% (values ×10⁻²)
          MADerror rank  MADerror  AUCtarget   MADerror rank  MADerror  AUCtarget
MAJ          23.3         0.47      99.9          33.3         0.54      87.7
GIBBS-SVM   102.0         2.05      28.6         101.7         2.15      28.2
GD-SVM       26.7         0.42      71.3          93.3         1.90      67.8
DS          170.0         7.08      12.1         180.0         6.96      12.3
AR-2         48.3         2.63      96.7          50.0         2.56      96.4
AR           48.3         2.60      96.7          48.3         2.52      96.4
BEE          40.0         0.60      99.8          31.7         0.64      79.5
CBEE         40.0         0.61      99.8         118.0        45.40      55.4
HCBEE        81.7         2.53      99.4          81.7         2.45      84.9
LEE          30.0         0.37      96.5          30.0         0.43      97.3

              uBRAIN-All (values ×10⁻¹)           uBRAIN-10% (values ×10⁻¹)
          MADerror rank  MADerror  AUCtarget   MADerror rank  MADerror  AUCtarget
MAJ           8.76        0.57      8.49          1.52         0.68      7.84
GIBBS-SVM     7.77        0.43      4.65          1.51         0.66      5.28
GD-SVM        7.60        0.44      5.24          1.50         0.68      8.56
DS            7.77        0.44      8.76          1.32         0.63      4.59
AR-2         16.40        0.87      9.71          2.28         0.97      9.89
BEE           7.98        0.40      9.32          1.38         0.63      9.35
CBEE         10.90        0.43      9.34          1.77         0.89      9.30
HCBEE        28.10        0.85      9.20          3.25         0.97      9.37
LEE           7.60        0.38      9.95          1.32         0.47      9.98
subproblem is proportional to the distance of its variable copies from the respective consensus
variables. The intuition and motivation behind this approach is that at the solution of the optimization
problem, all variable copies should be in agreement with the consensus variables. Therefore, prioritizing subproblems whose variables are in greater disagreement with the consensus variables might
facilitate faster convergence. Indeed, this modification to the inference algorithm allowed us to apply
our method to the NELL data set and obtain results within minutes instead of hours.
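The disagreement-proportional sampling step just described can be sketched as follows; this is a minimal illustration assuming variable copies and consensus values live in dictionaries, with a helper name of our choosing rather than the actual implementation:

```python
import random

def sample_subproblems(local_copies, consensus, K, rng=random):
    """Sample K subproblem indices with probability proportional to
    the distance between each subproblem's variable copies and the
    consensus variables (a small floor keeps every index reachable)."""
    weights = []
    for copies in local_copies:
        # Euclidean distance between this subproblem's copies and consensus.
        dist = sum((copies[v] - consensus[v]) ** 2 for v in copies) ** 0.5
        weights.append(dist + 1e-6)
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = set()
    while len(chosen) < min(K, len(local_copies)):
        r, acc = rng.random(), 0.0
        for j, p in enumerate(probs):
            acc += p
            if r <= acc:
                chosen.add(j)
                break
    return sorted(chosen)
```

Subproblems whose copies already agree with the consensus get vanishingly small weight, which matches the intuition that they contribute little to further progress.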
4 Experiments
Our implementation as well as the experiment data sets are available at https://github.com/eaplatanios/makina.
Data Sets.
First, we considered the following two data sets with logical constraints:
? NELL-7: Classify noun phrases (NPs) as belonging to a category or not (categories correspond
to domains in this case). The categories considered for this data set are Bird, Fish, Mammal,
City, Country, Lake, and River. The only constraint considered is that all these categories
are mutually exclusive.
? NELL-11: Perform the same task, but with the categories and constraints illustrated in figure 2.
For both of these data sets, we have a total of 553,940 NPs and 6 classifiers, which act as our function
approximations and are described in [Mitchell et al., 2015]. Not all of the classifiers provide a
response for every input NP. In order to show the applicability of our method in cases where there are no
logical constraints between the domains, we also replicated the experiments of Platanios et al. [2014]:
? uNELL: Same task as NELL-7, but without considering the constraints and using 15 categories, 4
classifiers, and about 20,000 NPs per category.
? uBRAIN: Classify which of two 40 second long story passages corresponds to an unlabeled 40
second time series of Functional Magnetic Resonance Imaging (fMRI) neural activity. 11 classifiers
were used and the domain in this case is defined by 11 different locations in the brain, for each of
which we have 924 examples. Additional details can be found in [Wehbe et al., 2014].
Methods. Some of the methods we compare against do not explicitly estimate error rates. Rather,
they combine the classifier outputs to produce a single label. For these methods, we produce an
estimate of the error rate using these labels and compare against this estimate.
1. Majority Vote (MV): This is the most intuitive method and it consists of taking the most common
output among the provided function approximation responses, as the combined output.
2. GIBBS-SVM/GD-SVM: Methods of Tian and Zhu [2015].
3. DS: Method of Dawid and Skene [1979].
4. Agreement Rates (AR): This is the method of Platanios et al. [2014]. It estimates error rates
but does not infer the combined label. To that end, we use a weighted majority vote, where the
classifiers' predictions are weighted according to their error rates in order to produce a single
output label. We also compare against a method denoted by AR-2 in our experiments, which is
the same method, except only pairwise function approximation agreements are considered.
5. BEE/CBEE/HCBEE: Methods of Platanios et al. [2016].
In the results, LEE stands for Logic Error Estimation and refers to the proposed method of this paper.
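The error-rate-weighted majority vote used for AR can be realized in several ways; the specific weighting below (weight proportional to how far a classifier is from chance) is a standard choice and an assumption on our part, since the text does not specify the weights:

```python
def weighted_majority_vote(predictions, error_rates):
    """Combine binary (0/1) predictions; classifiers that beat chance
    by more get a larger weight (error 0 -> +1, error 0.5 -> 0)."""
    score = 0.0
    for y, e in zip(predictions, error_rates):
        weight = 1.0 - 2.0 * e
        score += weight * (1.0 if y == 1 else -1.0)
    return 1 if score > 0 else 0
```

A classifier with error rate near 0.5 contributes essentially nothing, while a near-perfect one can outvote several mediocre ones.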
Evaluation. We compute the sample error rate estimates using the true target function labels (which
are always provided), and we then compute three metrics for each domain and average over domains:
? Error Rank MAD: We rank the function approximations by our estimates and by the sample
estimates to produce two vectors with the ranks. We then compute the mean absolute deviation
(MAD) between the two vectors, where by MAD we mean the ℓ1 norm of the vectors' difference.
? Error MAD: MAD between the vector of our estimates and the vector of the sample estimates,
where each vector is indexed by the function approximation index.
? Target AUC: Area under the precision-recall curve for the inferred target function values, relative
to the true function values that are observed.
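The first two metrics can be sketched as follows, following the text's convention that MAD here means the ℓ1 norm of the difference (the tie-breaking rule for ranks is our own assumption):

```python
def rank(values):
    """Rank positions (1 = smallest value); ties broken by index."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, i in enumerate(order):
        ranks[i] = position + 1
    return ranks

def mad(u, v):
    """The text's MAD: the l1 norm of the difference of two vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def error_rank_mad(estimated_errors, sample_errors):
    """Error Rank MAD: MAD between the two rank vectors."""
    return mad(rank(estimated_errors), rank(sample_errors))
```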
Results. First, note that the largest execution time of our method among all data sets was about 10
minutes, using a 2013 15-inch MacBook Pro. The second best performing method, HCBEE, required
about 100 minutes. This highlights the scalability of our approach. Results are shown in table 1.
1. NELL-7 and NELL-11 Data Sets: In this case we have logical constraints and thus, this set of
results is most relevant to the central research claims in this paper (our method was motivated by
the use of such logical constraints). It is clear that our method outperforms all existing methods,
including the state-of-the-art, by a significant margin. Both the MADs of the error rate estimation,
and the AUCs of the target function response estimation, are significantly better.
2. uNELL and uBRAIN Data Sets: In this case there exist no logical constraints between the domains.
Our method still almost always outperforms the competing methods and, more specifically, it
always does so in terms of error rate estimation MAD. This set of results makes it clear that our
method can also be used effectively in cases where there are no logical constraints.
Acknowledgements
We would like to thank Abulhair Saparov and Otilia Stretcu for the useful feedback they provided in
early versions of this paper. This research was performed during an internship at Microsoft Research,
and was also supported in part by NSF under award IIS1250956, and in part by a Presidential
Fellowship from Carnegie Mellon University.
References
S. H. Bach, B. Huang, B. London, and L. Getoor. Hinge-loss Markov Random Fields: Convex
Inference for Structured Prediction. In Conference on Uncertainty in Artificial Intelligence, 2013.
S. H. Bach, M. Broecheler, B. Huang, and L. Getoor. Hinge-loss Markov Random Fields and
Probabilistic Soft Logic. CoRR, abs/1505.04406, 2015. URL http://dblp.uni-trier.de/db/journals/corr/corr1505.html#BachBHG15.
M.-F. Balcan, A. Blum, and Y. Mansour. Exploiting Ontology Structures and Unlabeled Data for
Learning. International Conference on Machine Learning, pages 1112–1120, 2013.
Y. Bengio and N. Chapados. Extensions to Metric-Based Model Selection. Journal of Machine
Learning Research, 3:1209–1227, 2003.
M. Bröcheler, L. Mihalkova, and L. Getoor. Probabilistic Similarity Logic. In Conference on
Uncertainty in Artificial Intelligence, pages 73–82, 2010.
J. Collins and M. Huynh. Estimation of Diagnostic Test Accuracy Without Full Verification: A
Review of Latent Class Methods. Statistics in Medicine, 33(24):4141–4169, June 2014.
M. Collins and Y. Singer. Unsupervised Models for Named Entity Classification. In Joint Conference
on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.
S. Dasgupta, M. L. Littman, and D. McAllester. PAC Generalization Bounds for Co-training. In
Neural Information Processing Systems, pages 375–382, 2001.
A. P. Dawid and A. M. Skene. Maximum Likelihood Estimation of Observer Error-Rates Using the
EM Algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):20–28,
1979.
G. J. Klir and B. Yuan. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice-Hall, Inc.,
Upper Saddle River, NJ, USA, 1995. ISBN 0-13-101171-5.
O. Madani, D. Pennock, and G. Flake. Co-Validation: Using Model Disagreement on Unlabeled Data
to Validate Classification Algorithms. In Neural Information Processing Systems, 2004.
T. Mitchell, W. W. Cohen, E. Hruschka Jr, P. Pratim Talukdar, J. Betteridge, A. Carlson, B. Dalvi,
M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. A.
Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov,
M. Greaves, and J. Welling. Never-Ending Learning. In Association for the Advancement of
Artificial Intelligence, 2015.
P. G. Moreno, A. Artés-Rodríguez, Y. W. Teh, and F. Perez-Cruz. Bayesian Nonparametric Crowdsourcing. Journal of Machine Learning Research, 16, 2015.
F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up statistical inference in Markov logic networks
using an RDBMS. Proc. VLDB Endow., 4(6):373–384, Mar. 2011. ISSN 2150-8097. doi: 10.14778/1978665.1978669. URL http://dx.doi.org/10.14778/1978665.1978669.
F. Parisi, F. Strino, B. Nadler, and Y. Kluger. Ranking and combining multiple predictors without
labeled data. Proceedings of the National Academy of Sciences, 2014.
E. A. Platanios, A. Blum, and T. M. Mitchell. Estimating Accuracy from Unlabeled Data. In
Conference on Uncertainty in Artificial Intelligence, 2014.
E. A. Platanios, A. Dubey, and T. M. Mitchell. Estimating Accuracy from Unlabeled Data: A
Bayesian Approach. In International Conference on Machine Learning, pages 1416–1425, 2016.
M. Richardson and P. Domingos. Markov Logic Networks. Mach. Learn., 62(1-2):107–136, 2006.
D. Schuurmans, F. Southey, D. Wilkinson, and Y. Guo. Metric-Based Approaches for Semi-Supervised Regression and Classification. In Semi-Supervised Learning. 2006.
T. Tian and J. Zhu. Max-Margin Majority Voting for Learning from Crowds. In Neural Information
Processing Systems, 2015.
L. Wehbe, B. Murphy, P. Talukdar, A. Fyshe, A. Ramdas, and T. Mitchell. Predicting brain activity
during story processing. in review, 2014.
A Decomposition of Forecast Error in
Prediction Markets
Miroslav Dudík
Microsoft Research, New York, NY
[email protected]
Ryan Rogers
University of Pennsylvania, Philadelphia, PA
[email protected]
Sébastien Lahaie
Google, New York, NY
[email protected]
Jennifer Wortman Vaughan
Microsoft Research, New York, NY
[email protected]
Abstract
We analyze sources of error in prediction market forecasts in order to bound
the difference between a security's price and the ground truth it estimates. We
consider cost-function-based prediction markets in which an automated market
maker adjusts security prices according to the history of trade. We decompose the
forecasting error into three components: sampling error, arising because traders
only possess noisy estimates of ground truth; market-maker bias, resulting from
the use of a particular market maker (i.e., cost function) to facilitate trade; and
convergence error, arising because, at any point in time, market prices may still be
in flux. Our goal is to make explicit the tradeoffs between these error components,
influenced by design decisions such as the functional form of the cost function
and the amount of liquidity in the market. We consider a specific model in which
traders have exponential utility and exponential-family beliefs representing noisy
estimates of ground truth. In this setting, sampling error vanishes as the number
of traders grows, but there is a tradeoff between the other two components. We
provide both upper and lower bounds on market-maker bias and convergence error,
and demonstrate via numerical simulations that these bounds are tight. Our results
yield new insights into the question of how to set the market's liquidity parameter
and into the forecasting benefits of enforcing coherent prices across securities.
1 Introduction
A prediction market is a marketplace in which participants can trade securities with payoffs that
depend on the outcomes of future events [19]. Consider the simple setting in which we are interested
in predicting the outcome of a political election: whether the incumbent or challenger will win.
A prediction market might issue a security that pays out $1 per share if the incumbent wins, and
$0 otherwise. The market price p of this security should always lie between 0 and 1, and can be
construed as an event probability. If a trader believes that the likelihood of the incumbent winning is
greater than p, she will buy shares with the expectation of making a profit. Market prices increase
when there is more interest in buying and decrease when there is more interest in selling. By this
process, the market aggregates traders? information into a consensus forecast, represented by the
market price. With sufficient activity, prediction markets are competitive with alternative forecasting
methods such as polls [4], but while there is a mature literature on sources of error and bias in polls,
the impact of prediction market structure on forecast accuracy is still an active area of research [17].
We consider prediction markets in which all trades occur through a centralized entity known as a
market maker. Under this market structure, security prices are dictated by a fixed cost function and
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the current number of outstanding shares [6]. The basic conditions that a cost function should satisfy
to correctly elicit beliefs, while bounding the market maker's loss, are now well-understood, chief
among them being convexity [1]. Nonetheless, the class of allowable cost functions remains broad,
and the literature so far provides little formal guidance on the specific form of cost function to use in
order to achieve good forecast accuracy, including how to set the liquidity parameter which controls
price responsiveness to trade. In practice, the impact of the liquidity parameter is difficult to quantify
a priori, so implementations typically resort to calibrations based on market simulations [8, 18].
Prior work also suggests that maintaining coherence among prices of logically related securities has
informational advantages [8], but there has been little work aimed at understanding why.
This paper provides a framework to quantify the impact of the choice of cost function on forecast
accuracy. We introduce a decomposition of forecast error, in analogy with the bias-variance decomposition familiar from statistics or the approximation-estimation-optimization decomposition for
large-scale machine learning [5]. Our decomposition consists of three components. First, there is the
sampling error resulting from the fact that the market consists of a finite population of traders, each
holding a noisy estimate of ground truth. Second, there is a market-maker bias which stems from the
use of a cost function to provide liquidity and induce trade. Third, there is convergence error due to
the fact that the market prices may not have fully converged to their equilibrium point.
The central contribution of this paper is a theoretical characterization of the market-maker bias and
convergence error, the two components of this decomposition that depend on market structure as
defined by the form of the cost function and level of liquidity. We consider a tractable model of agent
behavior, originally studied by Abernethy et al. [2], in which traders have exponential utility functions
and beliefs drawn from an exponential family. Under this model it is possible to characterize
the market's equilibrium prices in terms of the traders' belief and risk aversion parameters, and
thereby quantify the discrepancy between current market prices and ground truth. To analyze market
convergence, we consider the trader dynamics introduced by Frongillo and Reid [9], under which
trading can be viewed as randomized block-coordinate descent on a suitable potential function.
Our analysis is local in that the bounds depend on the market equilibrium prices. This allows us to
exactly identify the main asymptotic terms of error. We demonstrate via numerical experiments that
these asymptotic bounds are accurate early on and therefore can be used to compare market designs.
We make the following specific contributions:
1. We precisely define the three components of the forecasting error.
2. We show that the market-maker bias equals cb ± O(b²) as b → 0, where b is the liquidity
parameter, and c is an explicit constant that depends on the cost function and trader beliefs.
3. We show that the convergence error decreases with the number of trades t as γ^t with γ = 1 − Θ(b).
We provide explicit upper and lower bounds on γ that depend on the cost function and trader
beliefs. In the process, we prove a new local convergence bound for block-coordinate descent.
4. We use our explicit formulas for bias and convergence error to compare two common cost
functions: independent markets (IND), under which security prices vary independently, and
the logarithmic market scoring rule (LMSR) [10], which enforces logical relationships between
security prices. We show that at the same value of the market-maker bias, IND requires at least
half-as-many and at most twice-as-many trades as LMSR to achieve the same convergence error.
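For concreteness, LMSR's cost and instantaneous prices follow the standard formulas C(q) = b log Σ_i exp(q_i/b) and p_i(q) = exp(q_i/b) / Σ_j exp(q_j/b); these are the standard LMSR definitions rather than something taken from this excerpt, and the sketch below also shows how the liquidity parameter b controls price responsiveness:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    computed in a numerically stable (log-sum-exp) way."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices p_i = exp(q_i / b) / sum_j exp(q_j / b);
    they are nonnegative and sum to 1, i.e. coherent probabilities."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [ei / s for ei in e]
```

The cost of a trade moving the outstanding share vector from q to q' is lmsr_cost(q', b) − lmsr_cost(q, b). A larger b makes prices less responsive to any fixed trade, which is one side of the bias/convergence tradeoff discussed above.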
We consider a specific utility model (exponential utility), but our bias and convergence analysis
immediately carry over if we assume that each trader is optimizing a risk measure (rather than an
exponential utility function) similar to the setup of Frongillo and Reid [9]. Exponential utility was
chosen because it was previously well studied and allowed us to focus on the analysis of the cost
function and liquidity. The role of the liquidity parameter in trading off the bias and convergence error
has been informally recognized in the literature [7, 10, 13], but our precise definition of market-maker
bias and explicit formulas for the bias and convergence error are novel. Abernethy et al. [2] provide
results that can be used to derive the bias for LMSR, but not for generic cost functions, so they do not
enable comparison of biases of different costs. Frongillo and Reid [9] observe that the convergence
error can be locally bounded as $\gamma^t$, but they only provide an upper bound and do not show how
$\gamma$ is related to the liquidity or cost function. Our analysis establishes both upper and lower bounds
on the convergence rate $\gamma$ and relates it explicitly to the liquidity and cost function. This is necessary for a
meaningful comparison of cost function families. Thus our framework provides the first meaningful
way to compare the error tradeoffs inherent in different choices of cost functions and liquidity levels.
2 Preliminaries
We use the notation $[N]$ to denote the set $\{1, \ldots, N\}$. Given a convex function $f : \mathbb{R}^d \to \mathbb{R} \cup \{\infty\}$,
its effective domain, denoted $\operatorname{dom} f$, is the set of points where $f$ is finite. Whenever $\operatorname{dom} f$ is
non-empty, the conjugate $f^* : \mathbb{R}^d \to \mathbb{R} \cup \{\infty\}$ is defined by $f^*(v) := \sup_{u \in \mathbb{R}^d} \big[v^\top u - f(u)\big]$. We
write $\|\cdot\|$ for the Euclidean norm. A centralized mathematical reference is provided in Appendix A.¹
Cost-function-based market makers We study cost-function-based prediction markets [1]. Let
$\Omega$ be a finite set of mutually exclusive and exhaustive states of the world. A market administrator,
known as the market maker, wishes to elicit information about the likelihood of various states $\omega \in \Omega$, and
to that end offers to buy and sell any number of shares of $K$ securities. Each security is associated
with a coordinate of a payoff function $\phi : \Omega \to \mathbb{R}^K$, where each share of the $k$th security is worth
$\phi_k(\omega)$ in the event that the true state of the world is $\omega \in \Omega$. Traders arrive in the market sequentially
and trade with the market maker. The market price is fully determined by a convex potential function
$C$ called the cost function. In particular, if the market maker has previously sold $s_k \in \mathbb{R}$ shares of
each security $k$ and a trader would like to purchase a bundle consisting of $\delta_k \in \mathbb{R}$ shares of each, the
trader is charged $C(s + \delta) - C(s)$. The instantaneous price of security $k$ is then $\partial C(s)/\partial s_k$. Note
that negative values of $\delta_k$ are allowed and correspond to the trader (short) selling security $k$.
Let $\mathcal{M} := \operatorname{conv}\{\phi(\omega) : \omega \in \Omega\}$ be the convex hull of the set of payoff vectors. It is exactly the set
of expectations $\mathbb{E}[\phi(\omega)]$ across all possible probability distributions over $\Omega$, which we call beliefs.
We refer to elements of $\mathcal{M}$ as coherent prices. Abernethy et al. [1] characterize the conditions that a
cost function must satisfy in order to guarantee important properties such as bounded loss for the
market maker and no possibility of arbitrage. To start, we assume only that $C : \mathbb{R}^K \to \mathbb{R}$ is convex
and differentiable and that $\mathcal{M} \subseteq \operatorname{dom} C^*$, which corresponds to the bounded loss property.
Example 2.1 (Logarithmic Market Scoring Rule: LMSR [10]). Consider a complete market with a
single security for each outcome worth \$1 if that outcome occurs and \$0 otherwise, i.e., $\Omega = [K]$ and
$\phi_k(\omega) = \mathbf{1}\{k = \omega\}$ for all $k$. The LMSR cost function and instantaneous security prices are given by
$$C(s) = \log\Big(\sum_{k=1}^K e^{s_k}\Big) \quad\text{and}\quad \frac{\partial C(s)}{\partial s_k} = \frac{e^{s_k}}{\sum_{\ell=1}^K e^{s_\ell}}, \quad \forall k \in [K]. \qquad (1)$$
Its conjugate is the entropy function, $C^*(\mu) = \sum_k \mu_k \log \mu_k + \mathbb{I}\{\mu \in \Delta_K\}$, where $\Delta_K$ is the
simplex in $\mathbb{R}^K$ and $\mathbb{I}\{\cdot\}$ is the convex indicator, equal to zero if its argument is true and infinity if
false. Thus, in this case $\mathcal{M} = \Delta_K = \operatorname{dom} C^*$.
Notice that the LMSR security prices are coherent because they always sum to one. This prevents
arbitrage opportunities for traders. Our second running example does not have this property.
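To make Eq. (1) concrete, here is a minimal Python sketch (ours, not code from the paper; function names are illustrative) of the LMSR cost and its instantaneous prices, checking that the prices are coherent:

```python
import math

def lmsr_cost(s):
    """LMSR cost C(s) = log(sum_k exp(s_k)), computed stably via log-sum-exp."""
    m = max(s)
    return m + math.log(sum(math.exp(x - m) for x in s))

def lmsr_prices(s):
    """Instantaneous prices dC/ds_k: a softmax of the outstanding shares s."""
    m = max(s)
    w = [math.exp(x - m) for x in s]
    z = sum(w)
    return [x / z for x in w]

s = [0.3, -0.1, 0.5]
prices = lmsr_prices(s)
assert abs(sum(prices) - 1.0) < 1e-12       # coherent: prices lie on the simplex

# Buying a bundle delta costs C(s + delta) - C(s); here one share of security 1:
cost = lmsr_cost([0.3 + 1.0, -0.1, 0.5]) - lmsr_cost(s)
assert 0.0 < cost < 1.0                     # a $1-payoff share costs between $0 and $1
```

The log-sum-exp shift by the maximum is a standard numerical-stability device and does not change the value of $C$.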
Example 2.2 (Sum of Independent LMSRs: IND). Let $\Omega = [K]$ and $\phi_k(\omega) = \mathbf{1}\{k = \omega\}$ for all $k$.
The cost function and instantaneous security prices for the sum of independent LMSRs are given by
$$C(s) = \sum_{k=1}^K \log\big(1 + e^{s_k}\big) \quad\text{and}\quad \frac{\partial C(s)}{\partial s_k} = \frac{e^{s_k}}{1 + e^{s_k}}, \quad \forall k \in [K]. \qquad (2)$$
Its conjugate is $C^*(\mu) = \sum_k \big[\mu_k \log \mu_k + (1 - \mu_k)\log(1 - \mu_k)\big] + \mathbb{I}\{\mu \in [0,1]^K\}$, $\mathcal{M} = \Delta_K$, and $\operatorname{dom} C^* = [0,1]^K$.
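Eq. (2) can be checked the same way; in this sketch (ours, not the paper's code) the IND prices are independent sigmoids and, unlike the LMSR's, need not sum to one:

```python
import math

def ind_cost(s):
    """IND cost: sum of K independent one-security LMSRs."""
    return sum(math.log1p(math.exp(x)) for x in s)

def ind_prices(s):
    """Instantaneous prices dC/ds_k: an independent sigmoid per security."""
    return [1.0 / (1.0 + math.exp(-x)) for x in s]

s = [0.3, -0.1, 0.5]
p = ind_prices(s)
assert all(0.0 < pk < 1.0 for pk in p)
# For a complete market these prices can be incoherent (they need not sum to 1):
assert abs(sum(p) - 1.6719) < 1e-3
```

The incoherence is exactly the arbitrage opportunity mentioned above: a trader can profit by trading the simplex constraint back into the prices.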
When choosing a cost function, one important consideration is liquidity, that is, how quickly prices
change in response to trades. Any cost function $C$ can be viewed as a member of a parametric family
of cost functions of the form $C_b(s) := bC(s/b)$ for all $b > 0$. With larger values of $b$, larger trades
are required to move market prices by some fixed amount, and the worst-case loss of the market
maker is larger; with smaller values, small purchases can result in big changes to the market price.
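The effect of the liquidity parameter can be seen directly in a small sketch (ours, using the LMSR of Example 2.1): the same one-share trade moves prices less as $b$ grows.

```python
import math

def lmsr_prices_b(s, b):
    """Prices of the liquidity-scaled LMSR C_b(s) = b * C(s / b): softmax of s / b."""
    m = max(x / b for x in s)
    w = [math.exp(x / b - m) for x in s]
    z = sum(w)
    return [x / z for x in w]

s0 = [0.0, 0.0]                  # two securities, symmetric start: both priced 0.5
moves = []
for b in (0.5, 1.0, 5.0):
    p_before = lmsr_prices_b(s0, b)[0]
    p_after = lmsr_prices_b([1.0, 0.0], b)[0]   # after buying one share of security 1
    moves.append(p_after - p_before)

assert moves[0] > moves[1] > moves[2] > 0.0     # larger b, smaller price impact
```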
Basic model In our analysis of error we assume that there exists an unknown true probability
distribution $p^{\mathrm{true}} \in \Delta_{|\Omega|}$ over the outcome set $\Omega$. The true expected payoffs of the $K$ market
securities are then given by the vector $\mu^{\mathrm{true}} := \mathbb{E}_{\omega \sim p^{\mathrm{true}}}[\phi(\omega)]$.
¹ A longer version of this paper containing the appendix is available on arXiv and the authors' websites.
We assume that there are $N$ traders and that each trader $i \in [N]$ has a private belief $\tilde{p}_i$ over
outcomes. We additionally assume that each trader $i$ has a utility function $u_i : \mathbb{R} \to \mathbb{R}$ for wealth
and would like to maximize expected utility subject to her beliefs. For now we assume that $u_i$
is differentiable and concave, meaning that each trader is risk averse, though later we focus on
exponential utility. The expected utility of trader $i$ owning a security bundle $r_i \in \mathbb{R}^K$ and cash $c_i$ is
$U_i(r_i, c_i) := \mathbb{E}_{\omega \sim \tilde{p}_i}\big[u_i\big(c_i + \phi(\omega) \cdot r_i\big)\big]$. We assume that each trader begins with zero cash. This
is without loss of generality because we could incorporate any initial cash holdings into $u_i$.
3 A Decomposition of Error
In this section, we decompose the market's forecast error into three major components. The first is
sampling error, which arises because traders have only noisy observations of the ground truth. The
second is market-maker bias, which arises because the shape of the cost function impacts the traders'
willingness to invest. Finally, convergence error arises due to the fact that at any particular point in
time the market prices may not have fully converged. To formalize our decomposition, we introduce
two new notions of equilibrium.
Our first notion of equilibrium, called a market-clearing equilibrium, does not assume the existence
of a market maker, but rather assumes that traders trade only among themselves, and so no additional
securities or cash are available beyond the traders' initial allocations. This equilibrium is described by
security prices $\bar{\mu} \in \mathbb{R}^K$ and allocations $(\bar{r}_i, \bar{c}_i)$ of security bundles and cash to each trader $i$ such
that, given her allocation, no trader wants to buy or sell any bundle of securities at those prices.
Trader bundles and cash are summarized as $\bar{r} = (\bar{r}_i)_{i \in [N]}$ and $\bar{c} = (\bar{c}_i)_{i \in [N]}$.

Definition 3.1 (Market-clearing equilibrium). A triple $(\bar{r}, \bar{c}, \bar{\mu})$ is a market-clearing equilibrium if
$\sum_{i=1}^N \bar{r}_i = 0$, $\sum_{i=1}^N \bar{c}_i = 0$, and for all $i \in [N]$, $0 \in \operatorname*{argmax}_{\delta \in \mathbb{R}^K} U_i\big(\bar{r}_i + \delta,\; \bar{c}_i - \delta^\top \bar{\mu}\big)$. We call
$\bar{\mu}$ market-clearing prices if there exist $\bar{r}$ and $\bar{c}$ such that $(\bar{r}, \bar{c}, \bar{\mu})$ is a market-clearing equilibrium.
Similarly, we call $\bar{r}$ a market-clearing allocation if there exists a corresponding equilibrium.

The requirements on $\sum_{i=1}^N \bar{r}_i$ and $\sum_{i=1}^N \bar{c}_i$ guarantee that no additional securities or cash have
been created. In other words, there exists some set of trades among traders that would lead to the
market-clearing allocation, although the definition says nothing about how the equilibrium is reached.
Since we rely on a market maker to orchestrate trade, our markets generally do not reach the market-clearing
equilibrium. Instead, we introduce the notion of market-maker equilibrium. This equilibrium
is again described by a set of security prices $\mu^*$ and trader allocations $(r^*_i, c^*_i)$, summarized as
$(r^*, c^*)$, such that no trader wants to trade at these prices given her allocation. The difference is that
we now require $r^*$ and $c^*$ to be reachable via some sequence of trade with the market maker instead
of via trade among only the traders, and $\mu^*$ must be the market prices after such a sequence of trade.

Definition 3.2 (Market-maker equilibrium). A triple $(r^*, c^*, \mu^*)$ is a market-maker equilibrium
for cost function $C_b$ if, for the market state $s^* = \sum_{i=1}^N r^*_i$, we have $\sum_{i=1}^N c^*_i = C_b(0) - C_b(s^*)$,
$\mu^* = \nabla C_b(s^*)$, and for all $i \in [N]$, $0 \in \operatorname*{argmax}_{\delta \in \mathbb{R}^K} U_i\big(r^*_i + \delta,\; c^*_i - C_b(s^* + \delta) + C_b(s^*)\big)$. We
call $\mu^*$ market-maker equilibrium prices if there exist $r^*$ and $c^*$ such that $(r^*, c^*, \mu^*)$ is a market-maker
equilibrium. Similarly, we call $r^*$ a market-maker equilibrium allocation if there exists a
corresponding equilibrium. We sometimes write $\mu^*(b; C)$ to show the dependence of $\mu^*$ on $C$ and $b$.

The market-clearing prices $\bar{\mu}$ and the market-maker equilibrium prices $\mu^*(b; C)$ are not unique in
general, but are unique for the specific utility functions that we study in this paper.
Using these notions of equilibrium, we can formally define our error components. Sampling error is
the difference between the true security values and the market-clearing equilibrium prices. The bias
is the difference between the market-clearing equilibrium prices and the market-maker equilibrium
prices. Finally, the convergence error is the difference between the market-maker equilibrium prices
and the market prices $\mu_t(b; C)$ at a particular round $t$. Putting this together, we have that
$$\mu^{\mathrm{true}} - \mu_t = \underbrace{\mu^{\mathrm{true}} - \bar{\mu}}_{\text{Sampling Error}} + \underbrace{\bar{\mu} - \mu^*(b; C)}_{\text{Bias}} + \underbrace{\mu^*(b; C) - \mu_t(b; C)}_{\text{Convergence Error}}. \qquad (3)$$
4 The Exponential Trader Model
For the remainder of the paper, we work with the exponential trader model introduced by Abernethy
et al. [2] in which traders have exponential utility functions and exponential-family beliefs. Under
this model, both the market-clearing prices and market-maker equilibrium prices are unique and can
be expressed cleanly in terms of potential functions [9], yielding a tractable analysis. The results of
this section are immediate consequences of prior work [2, 9], but our equilibrium concepts bring
them into a common framework.
We consider a specific exponential family [3] of probability distributions over $\Omega$ defined as $p(\omega; \theta) =
e^{\phi(\omega) \cdot \theta - T(\theta)}$, where $\theta \in \mathbb{R}^K$ is the natural parameter of the distribution, and $T$ is the log partition
function, $T(\theta) := \log \sum_{\omega \in \Omega} e^{\phi(\omega) \cdot \theta}$. The gradient $\nabla T(\theta)$ coincides with the expectation of $\phi$
under $p(\cdot\,; \theta)$, and $\operatorname{dom} T^* = \operatorname{conv}\{\phi(\omega) : \omega \in \Omega\} = \mathcal{M}$.

Following Abernethy et al. [2], we assume that each trader $i$ has exponential-family beliefs with
natural parameter $\tilde{\theta}_i$. From the perspective of trader $i$, the expected payoffs of the $K$ market securities
can then be expressed as the vector $\tilde{\mu}_i$ with $\tilde{\mu}_{i,k} := \sum_{\omega \in \Omega} \phi_k(\omega)\, p(\omega; \tilde{\theta}_i)$.
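For the complete market of Examples 2.1 and 2.2, the identity $\nabla T(\theta) = \mathbb{E}_\theta[\phi]$ specializes to a softmax, which we can verify numerically. This is a sketch under that complete-market assumption, not code from the paper:

```python
import math

def log_partition(theta):
    """T(theta) = log sum_w exp(phi(w) . theta) with phi_k(w) = 1{k = w}."""
    m = max(theta)
    return m + math.log(sum(math.exp(t - m) for t in theta))

def softmax(theta):
    m = max(theta)
    w = [math.exp(t - m) for t in theta]
    z = sum(w)
    return [x / z for x in w]

theta = [0.2, -0.5, 1.0]
eps = 1e-6
for k in range(3):
    tp = list(theta); tp[k] += eps
    tm = list(theta); tm[k] -= eps
    grad_k = (log_partition(tp) - log_partition(tm)) / (2 * eps)  # dT/dtheta_k
    assert abs(grad_k - softmax(theta)[k]) < 1e-8   # equals E_theta[phi_k]
```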
As in Abernethy et al. [2], we also assume that traders are risk averse with exponential utility for
wealth, so the utility of trader $i$ for wealth $W$ is $u_i(W) = -(1/a_i)\, e^{-a_i W}$, where $a_i$ is the trader's
risk aversion coefficient. We assume that the traders' risk aversion coefficients are fixed.
Using the definitions of the expected utility $U_i$, the exponential family distribution $p(\cdot\,; \tilde{\theta}_i)$, the log
partition function $T$, and the exponential utility $u_i$, it is straightforward to show [2] that
$$U_i(r_i, c_i) = -\frac{1}{a_i}\, e^{-T(\tilde{\theta}_i) - a_i c_i} \sum_{\omega \in \Omega} e^{\phi(\omega) \cdot (\tilde{\theta}_i - a_i r_i)} = -\frac{1}{a_i}\, e^{T(\tilde{\theta}_i - a_i r_i) - T(\tilde{\theta}_i) - a_i c_i}. \qquad (4)$$
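Eq. (4) is easy to verify numerically for a complete market. The following sketch uses made-up parameter values (ours); it just computes the expectation of exponential utility directly and compares it with the closed form:

```python
import math

def T(theta):
    m = max(theta)
    return m + math.log(sum(math.exp(t - m) for t in theta))

theta = [0.4, -0.2, 0.1]     # trader's natural parameter, with phi_k(w) = 1{k = w}
a, c = 2.0, 0.5              # risk aversion and cash
r = [0.3, -0.1, 0.2]         # security holdings

# Left-hand side: E_{w ~ p(.; theta)}[u(c + phi(w) . r)] computed directly.
p = [math.exp(t - T(theta)) for t in theta]
direct = sum(pw * (-1.0 / a) * math.exp(-a * (c + rw)) for pw, rw in zip(p, r))

# Right-hand side: the closed form -(1/a) * exp(T(theta - a r) - T(theta) - a c).
closed = (-1.0 / a) * math.exp(T([t - a * rw for t, rw in zip(theta, r)]) - T(theta) - a * c)

assert abs(direct - closed) < 1e-12
```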
Under this trader model, we can use the techniques of Frongillo and Reid [9] to construct potential
functions which yield alternative characterizations of the equilibria as solutions of minimization
problems. Consider first a market-clearing equilibrium. Define $F_i(s) := \frac{1}{a_i} T(\tilde{\theta}_i + a_i s)$ for each
trader $i$. From Eq. (4) we can observe that $-F_i(-r_i) + c_i$ is a monotone transformation of trader $i$'s
utility. Since each trader's utility is locally maximized at a market-clearing equilibrium, the sum
of traders' utilities is also locally maximized, as is $\sum_{i=1}^N \big(-F_i(-r_i) + c_i\big)$. Since the equilibrium
conditions require that $\sum_{i=1}^N c_i = 0$, the security allocation associated with any market-clearing
equilibrium must be a local minimum of $\sum_{i=1}^N F_i(-r_i)$. This idea is formalized in the following
theorem. The proof follows from an analysis of the KKT conditions of the equilibrium. (See the
appendix for all omitted proofs.)
Theorem 4.1. Under the exponential trader model, a market-clearing equilibrium always exists and
market-clearing prices are unique. Market-clearing allocations and prices are exactly the solutions
of the following optimization problems:
$$\bar{r} \in \operatorname*{argmin}_{r:\, \sum_{i=1}^N r_i = 0} \Big[\sum_{i=1}^N F_i(-r_i)\Big], \qquad \bar{\mu} = \operatorname*{argmin}_{\mu \in \mathbb{R}^K} \Big[\sum_{i=1}^N F_i^*(\mu)\Big]. \qquad (5)$$
Using a similar argument, we can show that the allocation associated with any market-maker equilibrium
is a local minimum of the function $F(r) := \sum_{i=1}^N F_i(-r_i) + C_b\big(\sum_{i=1}^N r_i\big)$.
Theorem 4.2. Under the exponential trader model, a market-maker equilibrium always exists and
equilibrium prices are unique. Market-maker equilibrium allocations and prices are exactly the
solutions of the following optimization problems:
$$r^* \in \operatorname*{argmin}_{r} F(r), \qquad \mu^* = \operatorname*{argmin}_{\mu \in \mathbb{R}^K} \Big[\sum_{i=1}^N F_i^*(\mu) + b\, C^*(\mu)\Big]. \qquad (6)$$
Sampling error We finish this section with an analysis of the first component of error identified in
Section 3: the sampling error. We begin by deriving a more explicit form of market-clearing prices:

Theorem 4.3. Under the exponential trader model, the unique market-clearing equilibrium prices
can be written as $\bar{\mu} = \mathbb{E}_{\bar{\theta}}[\phi(\omega)]$, where $\bar{\theta} := \big(\sum_{i=1}^N \tilde{\theta}_i / a_i\big) \big/ \big(\sum_{i=1}^N 1/a_i\big)$ is the risk-aversion-weighted
average belief and $\mathbb{E}_{\bar{\theta}}$ is the expectation under $p(\cdot\,; \bar{\theta})$.
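Theorem 4.3 is straightforward to evaluate numerically. The sketch below (ours, with illustrative beliefs for a complete market) shows that less risk-averse traders carry more weight in the market-clearing prices:

```python
import math

def softmax(theta):
    m = max(theta)
    w = [math.exp(t - m) for t in theta]
    z = sum(w)
    return [x / z for x in w]

thetas = [[0.5, -0.3, 0.1], [0.0, 0.2, -0.4]]   # two traders' natural parameters
a = [1.0, 4.0]                                   # risk-aversion coefficients
wts = [1.0 / ai for ai in a]
total = sum(wts)
theta_bar = [sum(w * th[k] for w, th in zip(wts, thetas)) / total for k in range(3)]
mu_bar = softmax(theta_bar)      # market-clearing prices for the complete market

assert abs(sum(mu_bar) - 1.0) < 1e-12
# Trader 1 (a = 1) gets weight 0.8, trader 2 (a = 4) gets weight 0.2:
assert abs(theta_bar[0] - (0.8 * 0.5 + 0.2 * 0.0)) < 1e-12
```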
The sampling error arises because the beliefs $\tilde{\theta}_i$ are only noisy signals of the ground truth. From
Theorem 4.3 we see that this error may be compounded by the weighting according to risk aversions,
which can skew the prices. To obtain a concrete bound on the error term $\|\mu^{\mathrm{true}} - \bar{\mu}\|$, we need to make
some assumptions about risk aversion coefficients, the true distribution of the outcome, and how this
distribution is related to trader beliefs. For instance, suppose risk aversion coefficients are bounded
both below and above, the true outcome is drawn from an exponential-family distribution with natural
parameter $\theta^{\mathrm{true}}$, and the beliefs $\tilde{\theta}_i$ are independent samples with mean $\theta^{\mathrm{true}}$ and a bounded covariance
matrix. Under these assumptions, one can show using standard concentration bounds that with high
probability, $\|\mu^{\mathrm{true}} - \bar{\mu}\| = O(\sqrt{1/N})$ as $N \to \infty$. In other words, market-clearing prices approach
the ground truth as the number of traders increases. In Appendix B.4 we make the dependence on risk
aversion and belief noise more explicit. The analysis of other information structures (e.g., biased or
correlated beliefs) is beyond the scope of this paper; instead, we focus on the two error components
that depend on the market design.
5 Market-maker Bias
We now analyze the market-maker bias, the difference between the market-maker equilibrium prices
$\mu^*$ and market-clearing prices $\bar{\mu}$. We first state a global bound that depends on the liquidity $b$ and cost
function $C$, but not on trader beliefs, and show that $\mu^* \to \bar{\mu}$ with the rate $O(b)$ as $b \to 0$. The proof
builds on Theorems 4.1 and 4.2 and uses the facts that $C^*$ is bounded on $\mathcal{M}$ (by our assumptions on
$C$), and conjugates $F_i^*$ are strongly convex on $\mathcal{M}$ (from properties of the log partition function).

Theorem 5.1 (Global Bias Bound). Under the exponential trader model, for any $C$, there exists a
constant $c$ such that $\|\mu^*(b; C) - \bar{\mu}\| \le cb$ for all $b \ge 0$.
This result makes use of strong convexity constants that are valid over the entire set $\mathcal{M}$, which can
be overly conservative when $\mu^*$ is close to $\bar{\mu}$. Furthermore, it gives us only an upper bound, which
cannot be used to compare different cost function families. In the rest of this section we pursue
a tighter local analysis, based on the properties of $F_i^*$ and $C^*$ at $\bar{\mu}$. Our local analysis requires
assumptions that go beyond convexity and differentiability of the cost function. We call the class of
functions that satisfy these assumptions convex+ functions. (See Appendix A.3 for their complete
treatment and a more general definition than provided here.) These functions are related to functions
of Legendre type (see Sec. 26 of Rockafellar [15]). Informally, they are smooth functions that are
strictly convex along directions in a certain space (the gradient space) and linear in orthogonal
directions. For cost functions, strict convexity means that prices change in response to arbitrarily
small trades, while the linear directions correspond to bundles with constant payoffs, whose prices
are therefore fixed.
Definition 5.2. Let $f : \mathbb{R}^d \to \mathbb{R}$ be differentiable and convex. Its gradient space is the linear space
parallel to the affine hull of its gradients, denoted as $G(f) := \operatorname{span}\{\nabla f(u) - \nabla f(u') : u, u' \in \mathbb{R}^d\}$.

Definition 5.3. We say that a convex function $f : \mathbb{R}^d \to \mathbb{R}$ is convex+ if it has continuous third
derivatives and $\operatorname{range}(\nabla^2 f(u)) = G(f)$ for all $u \in \mathbb{R}^d$.

It can be checked that if $P$ is a projection on $G(f)$ then there exists some $a$ such that $f(u) =
f(Pu) + a^\top u$, so $f$ is up to a linear term fully described by its values on $G(f)$. The condition on
the range of the Hessian ensures that $f$ is strictly convex over $G(f)$, so its gradient map is invertible
over $G(f)$. This means that the Hessian can be expressed as a function of the gradient, i.e., there
exists a matrix-valued function $H_f$ such that $\nabla^2 f(u) = H_f(\nabla f(u))$ (see Proposition A.8). The cost
functions $C$ for both the LMSR and the sum of independent LMSRs (IND) are convex+.
Example 5.4 (LMSR as a convex+ function). For LMSR, the gradient space of $C$ is parallel to
the simplex: $G(C) = \{u : \mathbf{1}^\top u = 0\}$. The gradients of $C$ are points in the relative interior of
the simplex. Given such a point $\mu = \nabla C(s)$, the corresponding Hessian is $\nabla^2 C(s) = H_C(\mu) =
\big(\operatorname{diag}_{k \in [K]} \mu_k\big) - \mu\mu^\top$, where $\operatorname{diag}_{k \in [K]} \mu_k$ denotes the diagonal matrix with values $\mu_k$ on the
diagonal. The null space of $H_C(\mu)$ is $\{c\mathbf{1} : c \in \mathbb{R}\}$, so $C$ is linear in the all-ones direction (buying
one share of each security always has cost one), but strictly convex in directions from $G(C)$.

Example 5.5 (IND as a convex+ function). For IND, the gradient space is $\mathbb{R}^K$ and the gradients are
the points in $(0, 1)^K$. In this case, $H_C(\mu) = \operatorname{diag}_k[\mu_k(1 - \mu_k)]$. This matrix has full rank.
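The closed-form Hessians in Examples 5.4 and 5.5 can be checked against finite differences of the price maps (a numerical sketch of ours, not the authors' code):

```python
import numpy as np

def lmsr_prices(s):
    w = np.exp(s - s.max())
    return w / w.sum()

def ind_prices(s):
    return 1.0 / (1.0 + np.exp(-s))

def numeric_hessian(price_fn, s, eps=1e-5):
    """Central-difference Jacobian of the price map, i.e. the cost's Hessian."""
    K = len(s)
    H = np.zeros((K, K))
    for j in range(K):
        e = np.zeros(K); e[j] = eps
        H[:, j] = (price_fn(s + e) - price_fn(s - e)) / (2 * eps)
    return H

s = np.array([0.3, -0.2, 0.1])
mu = lmsr_prices(s)
H_lmsr = np.diag(mu) - np.outer(mu, mu)          # Example 5.4: diag(mu) - mu mu^T
assert np.allclose(numeric_hessian(lmsr_prices, s), H_lmsr, atol=1e-8)

nu = ind_prices(s)
H_ind = np.diag(nu * (1.0 - nu))                 # Example 5.5: diag of mu_k(1 - mu_k)
assert np.allclose(numeric_hessian(ind_prices, s), H_ind, atol=1e-8)
```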
Our next theorem shows that for an appropriate vector $u$, which depends on $\bar{\mu}$ and $C$, we have
$\mu^*(b; C) = \bar{\mu} + bu + \varepsilon_b$, where $\|\varepsilon_b\| = O(b^2)$. Here, the $O(\cdot)$ is taken as $b \to 0$, so the error term
$\varepsilon_b$ goes to zero faster than the term $bu$, which we call the asymptotic bias. Our analysis is local in
the sense that the constants hiding within $O(\cdot)$ may depend on $\bar{\mu}$. This analysis fully uncovers the
main asymptotic term and therefore allows comparison of cost families. In our experiments, we show
that the asymptotic bias is an accurate estimate of the bias even for moderately large values of $b$.

Theorem 5.6 (Local Bias Bound). Assume that the cost function $C$ is convex+. Then
$$\mu^*(b; C) = \bar{\mu} - b(\bar{a}/N)\, H_T(\bar{\mu})\, \partial C^*(\bar{\mu}) + \varepsilon_b, \quad \text{where } \|\varepsilon_b\| = O(b^2).$$
In the expression above, $\bar{a} = N \big/ \big(\sum_{i=1}^N 1/a_i\big)$ is the harmonic mean of risk-aversion coefficients and
$H_T(\bar{\mu})\, \partial C^*(\bar{\mu})$ is guaranteed to consist of a single point even when $\partial C^*(\bar{\mu})$ is a set.
The theorem is proved by a careful application of Taylor's Theorem and crucially uses properties of
conjugates of convex+ functions, which we derive in Appendix A.3. It gives us a formula to calculate
the asymptotic bias for any cost function for a particular value of $\bar{\mu}$, or evaluate the worst-case bias
against some set of possible market-clearing prices. It also constitutes an important step in comparing
cost function families. To compare the convergence error of two costs $C$ and $C'$ in the next section,
we require that their liquidities $b$ and $b'$ be set so that they have (approximately) the same bias, i.e.,
$\|\mu^*(b'; C') - \bar{\mu}\| \approx \|\mu^*(b; C) - \bar{\mu}\|$. Theorem 5.6 tells us that this can be achieved by the linear
rule $b' = b/\rho$ where $\rho = \|H_T(\bar{\mu})\, \partial C'^*(\bar{\mu})\| \big/ \|H_T(\bar{\mu})\, \partial C^*(\bar{\mu})\|$. For $C = \text{LMSR}$ and $C' = \text{IND}$, we
prove that the corresponding $\rho \in [1, 2]$. Equivalently, this means that for the same value of $b$ the
asymptotic bias of IND is at least as large as that of LMSR, but no more than twice as large:

Theorem 5.7. For any $\bar{\mu}$, there exists $\rho \in [1, 2]$ such that for all $b$, $\|\mu^*(b/\rho; \text{IND}) - \bar{\mu}\| =
\|\mu^*(b; \text{LMSR}) - \bar{\mu}\| \pm O(b^2)$. For this same $\rho$, also $\|\mu^*(b; \text{IND}) - \bar{\mu}\| = \rho\|\mu^*(b; \text{LMSR}) - \bar{\mu}\| \pm O(b^2)$.
Theorem 5.6 also captures an intuitive relationship which can guide the market maker in adjusting the
market liquidity $b$ as the number of traders $N$ and their risk aversion coefficients $a_i$ vary. In particular,
holding $\bar{\mu}$ and the cost function fixed, we can maintain the same amount of bias by setting $b \propto N/\bar{a}$.
Note that $1/a_i$ plays the role of the budget of trader $i$ in the sense that at fixed prices, the trader
will spend an amount of cash proportional to $1/a_i$. Thus $N/\bar{a} = \sum_i (1/a_i)$ corresponds to the total
amount of available cash among the traders in the market. Similarly, the market maker's worst-case
loss, amounting to the market maker's cash, is proportional to $b$, so setting $b \propto \sum_i (1/a_i)$ is natural.
6 Convergence Error
We now study the convergence error, namely the difference between the prices $\mu_t$ at round $t$ and the
market-maker equilibrium prices $\mu^*$. To do so, we must posit a model of how the traders interact with
the market. Following Frongillo and Reid [9], we assume that in each round, a trader $i \in [N]$, chosen
uniformly at random, buys a bundle $\delta \in \mathbb{R}^K$ that optimizes her utility given the current market state $s$
and her existing security and cash allocations, $r_i$ and $c_i$. The resulting updates of the allocation vector
$r = (r_i)_{i=1}^N$ correspond to randomized block-coordinate descent on the potential function $F(r)$ with
blocks $r_i$ (see Appendix D.1 and Frongillo and Reid [9]). We refer to this model as the all-security
(trader) dynamics (ASD).² We apply and extend the analysis of block-coordinate descent to this setting.
We focus on convex+ functions and conduct local convergence analysis around the minimizer of $F$.
Our experiments demonstrate that the local analysis accurately estimates the convergence rate.

Let $r^*$ denote an arbitrary minimizer of $F$ and let $F^*$ be the minimum value of $F$. Also, let $r_t$ denote
the allocation vector and $\mu_t$ the market price vector after the $t$th trade. Instead of directly analyzing
the convergence error $\|\mu_t - \mu^*\|$, we bound the suboptimality $F(r_t) - F^*$ since $\|\mu_t - \mu^*\|^2 =
\Theta(F(r_t) - F^*)$ for convex+ costs $C$ under ASD (see Appendix D.7.1).
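The all-security dynamics can be simulated directly. The sketch below is ours, not the authors' code: it pairs the LMSR with exponential-utility traders, for which the block update has a closed form obtained from the first-order condition $\nabla C_b(s + \delta) = \nabla T(\tilde\theta_i - a_i(r_i + \delta))$ up to the all-ones direction (an assumption of this sketch, valid for the complete market). Each step is then an exact block minimization, so the potential $F$ never increases:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, b = 3, 5, 0.1
thetas = rng.normal(size=(N, K))      # traders' exponential-family beliefs
a = np.ones(N)                        # risk-aversion coefficients

def T(th):
    m = th.max()
    return m + np.log(np.exp(th - m).sum())

def potential(r):
    """F(r) = sum_i F_i(-r_i) + C_b(sum_i r_i) with the LMSR cost C_b(s) = b T(s/b)."""
    s = r.sum(axis=0)
    return sum(T(thetas[i] - a[i] * r[i]) / a[i] for i in range(N)) + b * T(s / b)

r = np.zeros((N, K))
vals = [potential(r)]
for _ in range(200):                  # ASD: a uniformly random trader best-responds
    i = rng.integers(N)
    s = r.sum(axis=0)
    # Closed-form best response (first-order condition, all-ones component set to 0):
    delta = (thetas[i] - a[i] * r[i] - s / b) / (a[i] + 1.0 / b)
    r[i] += delta
    vals.append(potential(r))

assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:]))  # F is non-increasing
assert vals[-1] < vals[0] - 1e-6                                # and strictly decreases
```

Plotting `vals` on a log scale (after subtracting the final value) makes the linear convergence rate of Theorem 6.2 visible, as in Fig. 1 (right).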
Convex+ functions are locally strongly convex and have a Lipschitz-continuous gradient, so the
standard analysis of block-coordinate descent [9, 11] implies linear convergence, i.e., $\mathbb{E}[F(r_t)] -
F^* \le O(\gamma^t)$ for some $\gamma < 1$, where the expectation is under the randomness of the algorithm. We
refine the standard analysis by (1) proving not only upper, but also lower bounds on the convergence
rate, and (2) proving an explicit dependence of $\gamma$ on the cost function $C$ and the liquidity $b$. These
two refinements are crucial for comparison of cost families, as we demonstrate with the comparison
of LMSR and IND. We begin by formally defining bounds on local convergence of any randomized
iterative algorithm that minimizes a function $F(r)$ via a sequence of iterates $r_t$.
² In Appendix D, we also analyze the single-security (trader) dynamics (SSD), in which a randomly chosen
trader randomly picks a single security to trade, corresponding to randomized coordinate descent on $F$.
Definition 6.1. We say that $\gamma_{\mathrm{high}}$ is an upper bound on the local convergence rate of an algorithm
if, with probability 1 under the randomness of the algorithm, the algorithm reaches an iteration $t_0$
such that for some $c > 0$ and all $t \ge t_0$, $\mathbb{E}\big[F(r_t) \,\big|\, r_{t_0}\big] - F^* \le c\, \gamma_{\mathrm{high}}^{t - t_0}$. We say that $\gamma_{\mathrm{low}}$ is a lower
bound on the local convergence rate if $\gamma_{\mathrm{low}} \le \gamma_{\mathrm{high}}$ holds for all upper bounds $\gamma_{\mathrm{high}}$.
To state explicit bounds, we use the notation $D := \operatorname{diag}_{i \in [N]} a_i$ and $P := I_N - \mathbf{1}\mathbf{1}^\top/N$, where $I_N$
is the $N \times N$ identity matrix and $\mathbf{1}$ is the all-ones vector. We write $M^+$ for the pseudoinverse of a
matrix $M$ and $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ for its smallest and largest positive eigenvalues.

Theorem 6.2 (Local Convergence Bound). Assume that $C$ is convex+. Let $H_T := H_T(\bar{\mu})$ and
$H_C := H_C(\bar{\mu})$. For the all-securities dynamics, the local convergence rate is bounded between
$$\gamma^{\mathrm{ASD}}_{\mathrm{high}} = 1 - \frac{2b}{N}\, \lambda_{\min}(PDP)\, \lambda_{\min}\big(H_T^{1/2} H_C^+ H_T^{1/2}\big) + O(b^2),$$
$$\gamma^{\mathrm{ASD}}_{\mathrm{low}} = 1 - \frac{2b}{N}\, \lambda_{\max}(PDP)\, \lambda_{\max}\big(H_T^{1/2} H_C^+ H_T^{1/2}\big) - O(b^2).$$
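For the LMSR, $H_C = H_T$ at any interior $\bar\mu$, so the matrix $H_T^{1/2} H_C^+ H_T^{1/2}$ in Theorem 6.2 is a projection and all of its positive eigenvalues equal 1. A quick numerical sketch (ours) confirms this:

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])                # an interior point of the simplex
H_T = np.diag(mu) - np.outer(mu, mu)          # Hessian of the log partition at mu

# Symmetric square root of H_T via an eigendecomposition.
w, V = np.linalg.eigh(H_T)
w = np.clip(w, 0.0, None)                     # clip tiny negative rounding errors
H_half = V @ np.diag(np.sqrt(w)) @ V.T

M = H_half @ np.linalg.pinv(H_T) @ H_half     # for LMSR, H_C = H_T
eigs = np.linalg.eigvalsh(M)
positive = eigs[eigs > 1e-8]
assert len(positive) == 2                     # rank K - 1 on the simplex
assert np.allclose(positive, 1.0, atol=1e-6)  # a projection: positive eigenvalues are 1
```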
In our proof, we first establish both lower and upper bounds on convergence of a generic block-coordinate
descent that extend the results of Nesterov [11]. We then analyze the behavior of the
algorithm for the specific structure of our objective to obtain explicit lower and upper bounds. Our
bounds prove linear convergence with the rate $\gamma = 1 - \Theta(b)$. Since the convergence gets worse as
$b \to 0$, there is a trade-off with the bias, which decreases as $b \to 0$.

Theorems 5.6 and 6.2 enable systematic quantitative comparisons of cost families. For simplicity,
assume that $N \ge 2$ and all risk aversions are $a$, so $\lambda_{\min}(PDP) = \lambda_{\max}(PDP) = a$. To compare
convergence rates of two costs $C$ and $C'$, we need to control for bias. As discussed after Theorem 5.6,
their biases are (asymptotically) equal if their liquidities are linearly related as $b' = b/\rho$ for a suitable
$\rho$. Theorem 6.2 then states that $C'_{b'}$ requires (asymptotically) at most a factor of $\kappa$ as many trades as $C_b$
to achieve the same convergence error, where $\kappa := \rho \cdot \lambda_{\max}\big(H_T^{1/2} H_C^+ H_T^{1/2}\big) \big/ \lambda_{\min}\big(H_T^{1/2} H_{C'}^+ H_T^{1/2}\big)$.
Similarly, $C_b$ requires at most a factor of $\kappa'$ as many trades as $C'_{b'}$, with $\kappa'$ defined symmetrically to $\kappa$.
For $C = \text{LMSR}$ and $C' = \text{IND}$, we can show that $\kappa \le 2$ and $\kappa' \le 2$, yielding the following result:
Theorem 6.3. Assume that $N \ge 2$ and all risk aversions are equal to $a$. Consider running LMSR with
liquidity $b$ and IND with liquidity $b' = b/\rho$ such that their asymptotic biases are equal. Denote the
iterates of the two runs of the market as $\mu_t^{\mathrm{LMSR}}$ and $\mu_t^{\mathrm{IND}}$ and the respective market-maker equilibria
as $\mu^*_{\mathrm{LMSR}}$ and $\mu^*_{\mathrm{IND}}$. Then, with probability 1, there exist $t_0$ and $t_1 \ge t_0$ such that for all $t \ge t_1$ and
sufficiently small $b$,
$$\mathbb{E}_{t_0}\Big[\big\|\mu^{\mathrm{LMSR}}_{2t(1+\varepsilon)} - \mu^*_{\mathrm{LMSR}}\big\|^2\Big] \;\le\; \mathbb{E}_{t_0}\Big[\big\|\mu_t^{\mathrm{IND}} - \mu^*_{\mathrm{IND}}\big\|^2\Big] \;\le\; \mathbb{E}_{t_0}\Big[\big\|\mu^{\mathrm{LMSR}}_{(t/2)(1-\varepsilon)} - \mu^*_{\mathrm{LMSR}}\big\|^2\Big],$$
where $\varepsilon = O(b)$ and $\mathbb{E}_{t_0}[\cdot] = \mathbb{E}[\cdot \mid r_{t_0}]$ conditions on the $t_0$th iterate of a given run.
This result means that LMSR and IND are roughly equivalent (up to a factor of two) in terms of the
number of trades required to achieve a given accuracy. This is somewhat surprising, as it implies
that maintaining price coherence does not offer strong informational advantages (at least when traders
are individually coherent, as assumed here). However, while there is little difference between the
two costs in terms of accuracy, there is a difference in terms of the worst-case loss. For $K$ securities,
the worst-case loss of LMSR with the liquidity $b$ is $b \log K$, and the worst-case loss of IND with the
liquidity $b'$ is $b'K \log 2$. If liquidities are chosen as in Theorem 6.3, so that $b'$ is up to a factor of two
smaller than $b$, then the worst-case loss of IND is at least $(bK/2) \log 2$, which is always worse than
the LMSR's loss of $b \log K$, and the ratio of the two losses increases as $K$ grows.

When all risk aversion coefficients are equal to some constant $a$, then the dependence of Theorem 6.2
on the number of traders $N$ and their risk aversion is similar to the dependence in Theorem 5.6. For
instance, to guarantee that $\gamma$ stays below a certain level for varying $N$ and $a$ requires $b = \Theta(N/a)$.
7 Numerical Experiments
We evaluate the tightness of our theoretical bounds via numerical simulation. We consider a complete
market over K = 5 securities and simulate N = 10 traders with risk aversion coefficients equal to 1.
These values of N and K are large enough to demonstrate the tightness of our results, but small
enough that simulations are tractable. While our theory comprehensively covers heterogeneous risk
Figure 1: (Left) The tradeoff between market-maker bias and convergence. Solid lines are for LMSR,
dashed for IND, the color indicates the number of trades. (Center) Market-maker bias as a function
of b. (Right) Convergence in the objective. Shading indicates 95% confidence based on 20 trading
sequences.
aversions and the dependence on the number of traders and securities, we have chosen to keep these
values fixed and more cleanly explore the impact of liquidity and number of trades. We consider
the two most commonly studied cost functions: LMSR and IND. We fix the ground-truth natural
parameter $\theta^{\mathrm{true}}$ and independently sample the belief $\tilde{\theta}_i$ of each trader from $\mathrm{Normal}(\theta^{\mathrm{true}}, \sigma^2 I_K)$, with
$\sigma = 5$. We consider a single-peaked ground truth distribution with $\theta_1^{\mathrm{true}} = \log(1 - \beta(K-1))$ and
$\theta_k^{\mathrm{true}} = \log \beta$ for $k \ne 1$, with $\beta = 0.02$. Trading is simulated according to the all-security dynamics
(ASD) as described at the start of Section 6. In Appendix E, we show qualitatively similar results
using a uniform ground truth distribution and single-security dynamics (SSD).
We first examine the tradeoff that arises between market-maker bias and convergence error as the
liquidity parameter is adjusted. Fig. 1 (left) shows the combined bias and convergence error, $\|\mu_t - \bar{\mu}\|$,
as a function of liquidity and the number of trades $t$ (indicated by the color of the line) for the two
cost functions, averaged over twenty random trading sequences. The minimum point on each curve
tells us the optimal value of the liquidity parameter $b$ for the particular cost function and particular
number of trades. When the market is run for a short time, larger values of $b$ lead to lower error. On
the other hand, smaller values of $b$ are preferable as the number of trades grows, with the combined
error approaching 0 for small $b$.
In Fig. 1 (center) we plot the bias $\|\mu^*(b; C) - \bar{\mu}\|$ as a function of $b$ for both LMSR and IND. We
compare this with the theoretical approximation $\|\mu^*(b; C) - \bar{\mu}\| \approx b(\bar{a}/N)\|H_T(\bar{\mu})\, \partial C^*(\bar{\mu})\|$ from
Theorem 5.6. Although Theorem 5.6 only gives an asymptotic guarantee as $b \to 0$, the approximation
is fairly accurate even for moderate values of $b$. In agreement with Theorem 5.7, the bias of IND is
higher than that of LMSR at any fixed value of $b$, but by no more than a factor of two.
In Fig. 1 (right) we plot the log of $\mathbb{E}[\hat{F}(r_t)] - F^*$ as a function of the number of trades $t$ for our two
cost functions and several liquidity levels. Even for small $t$ the curves are close to linear, showing
that the local linear convergence rate kicks in essentially from the start of trade in our simulations.
In other words, there exist some $\hat{c}$ and $\hat{\gamma}$ such that, empirically, we have $\mathbb{E}[\hat{F}(r_t)] - F^* \approx \hat{c}\, \hat{\gamma}^t$, or
equivalently, $\log\big(\mathbb{E}[\hat{F}(r_t)] - F^*\big) \approx \log \hat{c} + t \log \hat{\gamma}$. Plugging the belief values into Theorem 6.2, the
slope of the curve for LMSR should be $\log_{10} \hat{\gamma} \approx -0.087b$ for sufficiently small $b$, and the slope for
IND should be between $-0.088b$ and $-0.164b$. In Appendix E, we verify that this is the case.
8 Conclusion
Our theoretical framework provides a meaningful way to quantitatively evaluate the error tradeoffs
inherent in different choices of cost functions and liquidity levels. We find, for example, that to
maintain a fixed amount of bias, one should set the liquidity parameter b proportional to a measure of
the amount of cash that traders are willing to spend. We also find that, although the LMSR maintains
coherent prices while IND does not, the two are equivalent up to a factor of two in terms of the
number of trades required to reach any fixed accuracy, though LMSR has lower worst-case loss.
We have assumed that traders' beliefs are individually coherent. Experimental evidence suggests that
LMSR might have additional informational advantages over IND when traders' beliefs are incoherent
or each trader is informed about only a subset of events [12]. We touch on this in Appendix C.2, but
leave a full exploration of the impact of different assumptions on trader beliefs to future work.
References
[1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via
convex optimization, and a connection to online learning. ACM Transactions on Economics
and Computation, 1(2):Article 12, 2013.
[2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in
exponential family markets. In Proceedings of the 15th ACM Conference on Economics and
Computation (EC), 2014.
[3] Ole Barndorff-Nielsen. Exponential Families. Wiley Online Library, 1982.
[4] Joyce Berg, Robert Forsythe, Forrest Nelson, and Thomas Rietz. Results from a dozen years of
election futures markets research. Handbook of experimental economics results, 1:742–751,
2008.
[5] Olivier Bousquet and Léon Bottou. The tradeoffs of large scale learning. In Advances in Neural
Information Processing Systems (NIPS), 2008.
[6] Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In
Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.
[7] Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via
no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC),
2010.
[9] Miroslav Dudík, Sébastien Lahaie, David M. Pennock, and David Rothschild. A combinatorial
prediction market for the US elections. In Proceedings of the 14th ACM Conference on
Electronic Commerce (EC), 2013.
[9] Rafael Frongillo and Mark D. Reid. Convergence analysis of prediction markets via randomized
subspace descent. In Advances in Neural Information Processing Systems (NIPS), 2015.
[10] Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):
105–119, 2003.
[11] Yu. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems.
SIAM Journal on Optimization, 22(2):341–362, 2012.
[12] Kenneth C. Olson, Charles R. Twardy, and Kathryn B. Laskey. Accuracy of simulated flat,
combinatorial, and penalized prediction markets. Presented at Collective Intelligence, 2015.
[13] Abraham Othman, David M Pennock, Daniel M Reeves, and Tuomas Sandholm. A practical
liquidity-sensitive automated market maker. ACM Transactions on Economics and Computation,
1(3):14, 2013.
[14] Kaare Brandt Petersen and Michael Syskind Pedersen. The matrix cookbook. Technical Report,
Technical University of Denmark, Nov 2012.
[15] R. Tyrrell Rockafellar. Convex analysis. Princeton University Press, 1970.
[16] R. Tyrrell Rockafellar and Roger J-B Wets. Variational analysis. Springer-Verlag, 2009.
[17] David Rothschild. Forecasting elections: comparing prediction markets, polls, and their biases.
Public Opinion Quarterly, 73(5):895–916, 2009.
[18] Christian Slamka, Bernd Skiera, and Martin Spann. Prediction market performance and market
liquidity: A comparison of automated market makers. IEEE Transactions on Engineering
Management, 60(1):169–185, 2013.
[19] Justin Wolfers and Eric Zitzewitz. Prediction markets. The Journal of Economic Perspectives,
18(2):107–126, 2004.
Safe Adaptive Importance Sampling
Sebastian U. Stich
EPFL
Anant Raj
Max Planck Institute for Intelligent Systems
[email protected]
[email protected]
Martin Jaggi
EPFL
[email protected]
Abstract
Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants, which use importance values defined by the complete gradient information that changes during optimization, enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the best sampling with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling and (iii) can efficiently be computed, in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The proven efficiency of the proposed sampling is verified by extensive numerical testing.
1 Introduction
Modern machine learning applications operate on massive datasets. The algorithms that are used
for data analysis face the difficult challenge to cope with the enormous amount of data or the vast
dimensionality of the problems. A simple and well established strategy to reduce the computational
costs is to split the data and to operate only on a small part of it, as for instance in coordinate
descent (CD) methods and stochastic gradient (SGD) methods. These kinds of methods are state of the art for a wide selection of machine learning, deep learning and signal processing applications [9, 11, 35, 27]. The application of these schemes is not only motivated by their practical performance, but also well justified by theory [18, 19, 2].
Deterministic strategies are seldom used for the data selection?examples are steepest coordinate
descent [4, 34, 20] or screening algorithms [14, 15]. Instead, randomized selection has become
ubiquitous, most prominently uniform sampling [27, 29, 7, 8, 28] but also non-uniform sampling based
on a fixed distribution, commonly referred to as importance sampling [18, 19, 2, 33, 16, 6, 25, 24].
While these sampling strategies typically depend on the input data, they do not adapt to the information
of the current parameters during optimization. In contrast, adaptive importance sampling strategies
constantly re-evaluate the relative importance of each data point during training and thereby often
surpass the performance of static algorithms [22, 5, 26, 10, 21, 23]. Common strategies are gradient-based sampling [22, 36, 37] (mostly for SGD) and duality gap-based sampling for CD [5, 23].
The drawbacks of adaptive strategies are twofold: often the provable theoretical guarantees can be
worse than the complexity estimates for uniform sampling [23, 3] and often it is computationally
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
inadmissible to compute the optimal adaptive sampling distribution. For instance, gradient-based
sampling requires the computation of the full gradient in each iteration [22, 36, 37]. Therefore one
has to rely on approximations based on upper bounds [36, 37], or stale values [22, 1]. But in general
these approximations can again be worse than uniform sampling.
This makes it necessary to develop adaptive strategies that can efficiently be computed in every
iteration and that come with theoretical guarantees that show their advantage over fixed sampling.
Our contributions. In this paper we propose an efficient approximation of the gradient-based
sampling in the sense that (i) it can efficiently be computed in every iteration, (ii) is provably better
than uniform or fixed importance sampling and (iii) recovers the gradient-based sampling in the full-information setting. The scheme is completely generic and can easily be added as an improvement to
both CD and SGD type methods.
As our key contributions, we
(1) show that gradient-based sampling in CD methods is theoretically better than the classical fixed
sampling; the speed-up can reach a factor of the dimension n (Section 2);
(2) propose a generic and efficient adaptive importance sampling strategy that can be applied in CD
and SGD methods and enjoys favorable properties?such as mentioned above (Section 3);
(3) demonstrate how the novel scheme can efficiently be integrated in CD and SGD on an important
class of structured optimization problems (Section 4);
(4) supply numerical evidence that the novel sampling performs well on real data (Section 5).
Notation. For $x \in \mathbb{R}^n$ define $[x]_i := \langle x, e_i \rangle$ with $e_i$ the standard unit vectors in $\mathbb{R}^n$. We abbreviate $\nabla_i f := [\nabla f]_i$. A convex function $f \colon \mathbb{R}^n \to \mathbb{R}$ with $L$-Lipschitz continuous gradient satisfies

$$f(x + \eta u) \le f(x) + \eta \langle u, \nabla f(x) \rangle + \tfrac{\eta^2 L_u}{2} \|u\|^2 \qquad \forall x \in \mathbb{R}^n, \forall \eta \in \mathbb{R}, \qquad (1)$$

for every direction $u \in \mathbb{R}^n$ and $L_u = L$. A function with coordinate-wise $L_i$-Lipschitz continuous gradients¹ for constants $L_i > 0$, $i \in [n] := \{1, \dots, n\}$, satisfies (1) just along coordinate directions, i.e. $u = e_i$, $L_{e_i} = L_i$ for every $i \in [n]$. A function is coordinate-wise $L$-smooth if $L_i \le L$ for $i = 1, \dots, n$. For convenience we introduce the vector $l = (L_1, \dots, L_n)^\top$ and the matrix $L = \operatorname{diag}(l)$. A probability vector $p \in \Delta^n := \{x \in \mathbb{R}^n_{\ge 0} : \|x\|_1 = 1\}$ defines a probability distribution $P$ over $[n]$ and we denote by $i \sim p$ a sample drawn from $P$.
2 Adaptive Importance Sampling with Full Information
In this section we argue that adaptive sampling strategies are theoretically well justified, as they
can lead to significant improvements over static strategies. In our exposition we focus first on CD methods, as we also propose a novel stepsize strategy for CD in this contribution. Then we revisit the results regarding stochastic gradient descent (SGD) already present in the literature.
2.1 Coordinate Descent with Adaptive Importance Sampling

We address general minimization problems $\min_x f(x)$. Let the objective $f \colon \mathbb{R}^n \to \mathbb{R}$ be convex with coordinate-wise $L_i$-Lipschitz continuous gradients. Coordinate descent methods generate sequences $\{x_k\}_{k \ge 0}$ of iterates that satisfy the relation

$$x_{k+1} = x_k - \gamma_k \nabla_{i_k} f(x_k)\, e_{i_k}. \qquad (2)$$

Here, the direction $i_k$ is either chosen deterministically (cyclic descent, steepest descent), or randomly picked according to a probability vector $p_k \in \Delta^n$. In the classical literature, the stepsize is often chosen such as to minimize the quadratic upper bound (1), i.e. $\gamma_k = L_{i_k}^{-1}$. In this work we propose to set $\gamma_k = \alpha_k [p_k]_{i_k}^{-1}$, where $\alpha_k$ does not depend on the chosen direction $i_k$. This leads to directionally-unbiased updates, as is common among SGD-type methods. It holds

$$\mathbb{E}_{i_k \sim p_k}[f(x_{k+1}) \mid x_k] \overset{(1)}{\le} \mathbb{E}_{i_k \sim p_k}\Big[f(x_k) - \tfrac{\alpha_k}{[p_k]_{i_k}} (\nabla_{i_k} f(x_k))^2 + \tfrac{L_{i_k} \alpha_k^2}{2 [p_k]_{i_k}^2} (\nabla_{i_k} f(x_k))^2 \,\Big|\, x_k\Big] = f(x_k) - \alpha_k \|\nabla f(x_k)\|_2^2 + \sum_{i=1}^n \tfrac{L_i \alpha_k^2}{2 [p_k]_i} (\nabla_i f(x_k))^2. \qquad (3)$$

In adaptive strategies we have the freedom to choose both variables $\alpha_k$ and $p_k$ as we like. We therefore propose to choose them in such a way that they minimize the upper bound (3) in order to maximize the expected progress. The optimal $p_k$ in (3) is independent of $\alpha_k$, but the optimal $\alpha_k$ depends on $p_k$. We can state the following useful observation.

Lemma 2.1. If $\alpha_k = \alpha_k(p_k)$ is the minimizer of (3), then $x_{k+1} := x_k - \frac{\alpha_k}{[p_k]_{i_k}} \nabla_{i_k} f(x_k)\, e_{i_k}$ satisfies

$$\mathbb{E}_{i_k \sim p_k}[f(x_{k+1}) \mid x_k] \le f(x_k) - \frac{\alpha_k(p_k)}{2} \|\nabla f(x_k)\|_2^2. \qquad (4)$$

¹ $|\nabla_i f(x + \eta e_i) - \nabla_i f(x)| \le L_i |\eta|, \quad \forall x \in \mathbb{R}^n, \forall \eta \in \mathbb{R}.$
Consider two examples. In the first one we pick a sub-optimal, but very common [18] distribution:

Example 2.2 ($L_i$-based sampling). Let $p^L \in \Delta^n$ be defined as $[p^L]_i = \frac{L_i}{\operatorname{Tr}[L]}$ for $i \in [n]$, where $L = \operatorname{diag}(L_1, \dots, L_n)$. Then $\alpha_k(p^L) = \frac{1}{\operatorname{Tr}[L]}$.

The distribution $p^L$ is often referred to as (fixed) importance sampling. In the special case when $L_i = L$ for all $i \in [n]$, this boils down to uniform sampling.

Example 2.3 (Optimal sampling²). Equation (3) is minimized for the probabilities $[p_k^\star]_i = \frac{\sqrt{L_i}\,|\nabla_i f(x_k)|}{\|\sqrt{L}\,\nabla f(x_k)\|_1}$ and $\alpha_k(p_k^\star) = \frac{\|\nabla f(x_k)\|_2^2}{\|\sqrt{L}\,\nabla f(x_k)\|_1^2}$. Observe $\frac{1}{\operatorname{Tr}[L]} \le \alpha_k(p_k^\star) \le \frac{1}{L_{\min}}$, where $L_{\min} := \min_{i \in [n]} L_i$.
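As a quick numerical sanity check of Example 2.3, the optimal distribution and stepsize can be computed directly from a gradient vector and the coordinate-wise smoothness constants. This is a minimal sketch in our own notation, not code from the paper:

```python
import numpy as np

def optimal_cd_sampling(grad, L):
    """Example 2.3: [p*]_i ∝ sqrt(L_i)*|∇_i f(x)|,
    alpha(p*) = ||∇f(x)||_2^2 / ||sqrt(L) ∇f(x)||_1^2."""
    w = np.sqrt(L) * np.abs(grad)
    p = w / w.sum()
    alpha = (grad ** 2).sum() / w.sum() ** 2
    return p, alpha

rng = np.random.default_rng(0)
n = 10
grad = rng.standard_normal(n)
L = rng.uniform(0.5, 2.0, size=n)
p, alpha = optimal_cd_sampling(grad, L)
```

The sandwich bound $1/\operatorname{Tr}[L] \le \alpha_k(p_k^\star) \le 1/L_{\min}$ follows from Cauchy-Schwarz and can be asserted on any random instance.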
To prove this result, we rely on the following lemma; its proof, as well as the proofs of the claims above, is deferred to Section A.1 of the appendix. Here $|\cdot|$ is applied entry-wise.

Lemma 2.4. Define $V(p, x) := \sum_{i=1}^n \frac{L_i [x]_i^2}{[p]_i}$. Then $\arg\min_{p \in \Delta^n} V(p, x) = \frac{|\sqrt{L}\,x|}{\|\sqrt{L}\,x\|_1}$.
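Lemma 2.4 can be checked numerically: for random x and L, the closed-form minimizer achieves the value $\|\sqrt{L}\,x\|_1^2$ and beats any randomly drawn probability vector. A small self-contained check (our own helper names):

```python
import numpy as np

def V(p, x, L):
    """V(p, x) = sum_i L_i * x_i^2 / p_i  (Lemma 2.4)."""
    return np.sum(L * x ** 2 / p)

rng = np.random.default_rng(3)
n = 8
x = rng.standard_normal(n)
L = rng.uniform(0.5, 2.0, size=n)

w = np.sqrt(L) * np.abs(x)
p_star = w / w.sum()          # claimed minimizer |sqrt(L)x| / ||sqrt(L)x||_1
v_star = V(p_star, x, L)      # its value equals ||sqrt(L)x||_1^2
```

The optimality follows from Cauchy-Schwarz, $\|\sqrt L x\|_1^2 = (\sum_i w_i/\sqrt{p_i} \cdot \sqrt{p_i})^2 \le V(p,x) \cdot \|p\|_1$.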
The ideal adaptive algorithm. We propose to choose the stepsize and the sampling distribution for CD as in Example 2.3. One iteration of the resulting CD method is illustrated in Algorithm 1. Our bounds on the expected one-step progress can be used to derive convergence rates of this algorithm with the standard techniques. This is exemplified in Appendix A.1. In the next Section 3 we develop a practical variant of the ideal algorithm.

Efficiency gain. By comparing the estimates provided in the examples above, we see that the expected progress of the proposed method is always at least as good as for the fixed sampling. For instance in the special case where $L_i = L$ for all $i \in [n]$, the $L_i$-based sampling is just uniform sampling with $\alpha_k(p^{\mathrm{unif}}) = \frac{1}{Ln}$. On the other hand $\alpha_k(p_k^\star) = \frac{\|\nabla f(x_k)\|_2^2}{L\|\nabla f(x_k)\|_1^2}$, which can be $n$ times larger than $\alpha_k(p^{\mathrm{unif}})$. The expected one-step progress in this extreme case coincides with the one-step progress of steepest coordinate descent [20].
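The n-fold gap is easy to exhibit: with $L_i = L$ and a one-hot gradient, the optimal stepsize equals $1/L$ while uniform sampling only allows $1/(Ln)$. A small numeric check (our own variable names, repeating the formulas from Example 2.3):

```python
import numpy as np

L_const, n = 2.0, 100
L = np.full(n, L_const)
grad = np.zeros(n)
grad[3] = 5.0  # one-hot gradient: a single non-zero coordinate

# alpha(p*) = ||g||_2^2 / ||sqrt(L) g||_1^2 ;  alpha(p_unif) = 1/(L*n)
w = np.sqrt(L) * np.abs(grad)
alpha_opt = (grad ** 2).sum() / w.sum() ** 2
alpha_unif = 1.0 / (L_const * n)
# alpha_opt equals 1/L here, so alpha_opt / alpha_unif equals n
```

For dense gradients the ratio interpolates between 1 and n, depending on how concentrated the gradient is.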
2.2 SGD with Adaptive Sampling

SGD methods are applicable to objective functions which decompose as a sum

$$f(x) = \tfrac{1}{n} \sum_{i=1}^n f_i(x) \qquad (5)$$

with each $f_i \colon \mathbb{R}^d \to \mathbb{R}$ convex. In previous work [22, 36, 37] it has been argued that the gradient-based sampling $[\hat p_k^\star]_i = \frac{\|\nabla f_i(x_k)\|_2}{\sum_{i=1}^n \|\nabla f_i(x_k)\|_2}$ is optimal in the sense that it maximizes the expected progress (3). Zhao and Zhang [36] derive complexity estimates for composite functions. For non-composite functions it becomes easier to derive the complexity estimate. For completeness, we add this simpler proof in Appendix A.2.
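For least-squares components $f_i(x) = \frac{1}{2}(a_i^\top x - b_i)^2$ the gradient norms needed for this sampling are cheap, since $\|\nabla f_i(x)\| = |a_i^\top x - b_i| \cdot \|a_i\|$. The sketch below (our own notation, not code from the paper) builds $\hat p^\star$ and checks that the usual $1/(n p_i)$ reweighting keeps the stochastic gradient unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 20
A = rng.standard_normal((d, n))   # columns a_i
b = rng.standard_normal(n)
x = rng.standard_normal(d)

# Per-component gradients of f_i(x) = 0.5*(a_i^T x - b_i)^2 and their norms.
residuals = A.T @ x - b
grads = A * residuals                                  # column i is r_i * a_i
norms = np.abs(residuals) * np.linalg.norm(A, axis=0)  # ||∇f_i(x)||_2
p_hat = norms / norms.sum()                            # gradient-based sampling

# Drawing i ~ p_hat and stepping along ∇f_i(x) / (n * p_hat[i]) is unbiased:
full_grad = grads.mean(axis=1)
expected_step = (p_hat * (grads / (n * p_hat))).sum(axis=1)
i = rng.choice(n, p=p_hat)        # one sampled component index
```

`expected_step` equals the full gradient by construction, which is exactly the unbiasedness that makes the reweighted SGD step valid for any positive sampling distribution.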
² Here "optimal" refers to the fact that $p_k^\star$ is optimal with respect to the given model (1) of the objective function. If the model is not accurate, there might exist a sampling that yields larger expected progress on $f$.
Algorithm 1 Optimal sampling
  (compute full gradient)  Compute $\nabla f(x_k)$
  (define optimal sampling)  Define $(p_k^\star, \alpha_k^\star)$ as in Example 2.3
  $i_k \sim p_k^\star$
  $x_{k+1} := x_k - \frac{\alpha_k^\star}{[p_k^\star]_{i_k}} \nabla_{i_k} f(x_k)\, e_{i_k}$

Algorithm 2 Proposed safe sampling
  (update l.- and u.-bounds)  Update $\ell, u$
  (compute safe sampling)  Define $(\hat p_k, \hat\alpha_k)$ as in (7)
  $i_k \sim \hat p_k$
  Compute $\nabla_{i_k} f(x_k)$
  $x_{k+1} := x_k - \frac{\hat\alpha_k}{[\hat p_k]_{i_k}} \nabla_{i_k} f(x_k)\, e_{i_k}$

Algorithm 3 Fixed sampling
  (define fixed sampling)  Define $(p^L, \bar\alpha)$ as in Example 2.2
  $i_k \sim p^L$
  Compute $\nabla_{i_k} f(x_k)$
  $x_{k+1} := x_k - \frac{\bar\alpha}{[p^L]_{i_k}} \nabla_{i_k} f(x_k)\, e_{i_k}$

Figure 1: CD with different sampling strategies. Whilst Alg. 1 requires computing the full gradient, the compute operation in Alg. 2 is as cheap as for fixed importance sampling, Alg. 3. Defining the safe sampling $\hat p_k$ requires $O(n \log n)$ time.
3 Safe Adaptive Importance Sampling with Limited Information
In the previous section we have seen that gradient-based sampling (Example 2.3) can yield a massive speed-up compared to a static sampling distribution (Example 2.2). However, sampling according to $p_k^\star$ in CD requires knowledge of the full gradient $\nabla f(x_k)$ in each iteration. And likewise, sampling from $\hat p_k^\star$ in SGD requires knowledge of the gradient norms of all components; both these operations are in general inadmissible, i.e. the compute cost would void all computational benefits of the iterative (stochastic) methods over full gradient methods.

However, it is often possible to efficiently compute approximations of $p_k^\star$ or $\hat p_k^\star$ instead. In contrast to previous contributions, we here propose a safe way to compute such approximations. By this we mean that our approximate sampling is provably never worse than static sampling, and moreover, we show that our solution is the best possible with respect to the limited information at hand.
3.1 An Optimization Formulation for Sampling
Formally, we assume that we have in each iteration access to two vectors $\ell_k, u_k \in \mathbb{R}^n_{\ge 0}$ that provide safe upper and lower bounds on either the absolute values of the gradient entries ($[\ell_k]_i \le |\nabla_i f(x_k)| \le [u_k]_i$) for CD, or on the gradient norms in SGD ($[\ell_k]_i \le \|\nabla f_i(x_k)\|_2 \le [u_k]_i$). We postpone the discussion of this assumption to Section 4, where we give concrete examples.

The minimization of the upper bound (3) amounts to the equivalent problem³

$$\min_{\alpha_k} \min_{p_k \in \Delta^n} \Big({-\alpha_k} \|c_k\|_2^2 + \tfrac{\alpha_k^2}{2} V(p_k, c_k)\Big) \;\Leftrightarrow\; \min_{p_k \in \Delta^n} \frac{V(p_k, c_k)}{\|c_k\|_2^2} \qquad (6)$$

where $c_k \in \mathbb{R}^n$ represents the unknown true gradient. That is, with respect to the bounds $\ell_k, u_k$, we can write $c_k \in C_k := \{x \in \mathbb{R}^n : [\ell_k]_i \le [x]_i \le [u_k]_i,\ i \in [n]\}$. In Example 2.3 we derived the optimal solution for a fixed $c_k \in C_k$. However, this is not sufficient to find the optimal solution for an arbitrary $c_k \in C_k$. Just computing the optimal solution for an arbitrary (but fixed) $c_k \in C_k$ is unlikely to yield a good solution. For instance both extreme cases $c_k = \ell_k$ and $c_k = u_k$ (the latter choice is quite common, cf. [36, 23]) might be poor. This is demonstrated in the next example.

Example 3.1. Let $\ell = (1, 2)^\top$, $u = (2, 3)^\top$, $c = (2, 2)^\top$ and $L_1 = L_2 = 1$. Then $V\big(\tfrac{\ell}{\|\ell\|_1}, c\big) = \tfrac{9}{4} \|c\|_2^2$ and $V\big(\tfrac{u}{\|u\|_1}, c\big) = \tfrac{25}{12} \|c\|_2^2$, whereas for uniform sampling $V\big(\tfrac{c}{\|c\|_1}, c\big) = 2 \|c\|_2^2$.
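The numbers in Example 3.1 are easy to verify numerically from the definition $V(p, c) = \sum_i L_i c_i^2 / p_i$ (our own helper):

```python
import numpy as np

def V(p, c, L):
    """V(p, c) = sum_i L_i * c_i^2 / p_i."""
    return np.sum(L * c ** 2 / p)

ell = np.array([1.0, 2.0])
u = np.array([2.0, 3.0])
c = np.array([2.0, 2.0])
L = np.ones(2)
c_sq = np.sum(c ** 2)

ratio_ell = V(ell / ell.sum(), c, L) / c_sq   # 9/4 : proportional to lower bounds
ratio_u = V(u / u.sum(), c, L) / c_sq         # 25/12 : proportional to upper bounds
ratio_unif = V(c / c.sum(), c, L) / c_sq      # 2 : uniform (here also optimal)
```

Both extreme choices ($9/4 = 2.25$ and $25/12 \approx 2.083$) are strictly worse than uniform sampling (2), which is the point of the example.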
The proposed sampling. As a consequence of these observations, we propose to solve the following optimization problem to find the best sampling distribution with respect to $C_k$:

$$v_k := \min_{p \in \Delta^n} \max_{c \in C_k} \frac{V(p, c)}{\|c\|_2^2}, \qquad \text{and to set} \qquad (\alpha_k, p_k) := \big(\tfrac{1}{v_k}, \hat p_k\big), \qquad (7)$$

where $\hat p_k$ denotes a solution of (7). The resulting algorithm for CD is summarized in Alg. 2.

In the remainder of this section we discuss the properties of the solution $\hat p_k$ (Theorem 3.2) and how such a solution can be computed efficiently (Theorem 3.4, Algorithm 4).
³ Although only shown here for CD, an equivalent optimization problem arises for SGD methods, cf. [36].

3.2 Proposed Sampling and its Properties
Theorem 3.2. Let $(\hat p, \hat c) \in \Delta^n \times \mathbb{R}^n_{\ge 0}$ denote a solution of (7). Then $L_{\min} \le v_k \le \operatorname{Tr}[L]$ and

(i) $\max_{c \in C_k} \frac{V(\hat p, c)}{\|c\|_2^2} \le \max_{c \in C_k} \frac{V(p, c)}{\|c\|_2^2}$, $\forall p \in \Delta^n$; ($\hat p$ has the best worst-case guarantee)

(ii) $V(\hat p, c) \le \operatorname{Tr}[L] \cdot \|c\|_2^2$, $\forall c \in C_k$. ($\hat p$ is always better than $L_i$-based sampling)

Remark 3.3. In the special case $L_i = L$ for all $i \in [n]$, the $L_i$-based sampling boils down to uniform sampling (Example 2.2) and $\hat p$ is better than uniform sampling: $V(\hat p, c) \le Ln \cdot \|c\|_2^2$, $\forall c \in C_k$.

Proof. Property (i) is an immediate consequence of (7). Moreover, observe that the $L_i$-based sampling $p^L$ is a feasible solution in (7) with value $\frac{V(p^L, c)}{\|c\|_2^2} \equiv \operatorname{Tr}[L]$ for all $c \in C_k$. Hence

$$L_{\min} \overset{2.4}{\le} \frac{\|\sqrt{L}\,c\|_1^2}{\|c\|_2^2} = \min_{p \in \Delta^n} \frac{V(p, c)}{\|c\|_2^2} \le \frac{V(\hat p, c)}{\|c\|_2^2} \overset{(\star)}{\le} \frac{V(\hat p, \hat c)}{\|\hat c\|_2^2} \overset{(7)}{\le} \max_{c \in C_k} \frac{V(p^L, c)}{\|c\|_2^2} = \operatorname{Tr}[L], \qquad (8)$$

for all $c \in C_k$; thus $v_k \in [L_{\min}, \operatorname{Tr}[L]]$ and (ii) follows. We prove inequality $(\star)$ in the appendix, by showing that min and max can be interchanged in (7).
A geometric interpretation. We show in Appendix B that the optimization problem (7) can equivalently be written as $v_k = \max_{c \in C_k} \frac{\|\sqrt{L}\,c\|_1^2}{\|c\|_2^2} = \max_{c \in C_k} \frac{\langle l, c \rangle^2}{\|c\|_2^2}$, where $[l]_i = \sqrt{L_i}$ for $i \in [n]$. The maximum is thus attained for vectors $c \in C_k$ that minimize the angle with the vector $l$.
Theorem 3.4. Let $c \in C_k$, $p = \frac{\sqrt{L}\,c}{\|\sqrt{L}\,c\|_1}$ and denote $m = \|c\|_2^2 \cdot \|\sqrt{L}\,c\|_1^{-1}$. If

$$[c]_i = \begin{cases} [u_k]_i & \text{if } [u_k]_i \le \sqrt{L_i}\, m, \\ [\ell_k]_i & \text{if } [\ell_k]_i \ge \sqrt{L_i}\, m, \\ \sqrt{L_i}\, m & \text{otherwise}, \end{cases} \qquad \forall i \in [n], \qquad (9)$$

then $(p, c)$ is a solution to (7). Moreover, such a solution can be computed in time $O(n \log n)$.

Proof. This can be proven by examining the optimality conditions of problem (7). This is deferred to Section B.1 of the appendix. A procedure that computes such a solution is depicted in Algorithm 4. The algorithm makes extensive use of (9). For simplicity, assume first $L = I_n$. In each iteration $t$, a potential solution vector $c_t$ is proposed, and it is verified whether this vector satisfies all optimality conditions. In Algorithm 4, $c_t$ is just implicit, with $[c_t]_i = [c]_i$ for decided indices $i \in D$ and $[c_t]_i = [\sqrt{L}\,m]_i$ for undecided indices $i \notin D$. After at most $n$ iterations a valid solution is found. By sorting the components of $\sqrt{L}^{-1}\ell_k$ and $\sqrt{L}^{-1}u_k$ by their magnitude, at most a linear number of inequality checks in (9) have to be performed in total. Hence the running time is dominated by the $O(n \log n)$ complexity of the sorting algorithm. A formal proof is given in the appendix.
Algorithm 4 Computing the Safe Sampling for Gradient Information $\ell$, $u$

1: Input: $0_n \le \ell \le u$, $L$. Initialize: $c = 0_n$, pointers $u^{\mathrm{ptr}} = 1$, $\ell^{\mathrm{ptr}} = n$, $D = \emptyset$.
2: $\ell^{\mathrm{sort}} := \mathrm{sort\_asc}(\sqrt{L}^{-1}\ell)$, $u^{\mathrm{sort}} := \mathrm{sort\_asc}(\sqrt{L}^{-1}u)$, $m = \max(\ell^{\mathrm{sort}})$
3: while $u^{\mathrm{ptr}} \le \ell^{\mathrm{ptr}}$ do
4:   if $[\ell^{\mathrm{sort}}]_{\ell^{\mathrm{ptr}}} > m$ then   (largest undecided lower bound is violated)
5:     Set the corresponding $[c]_{\mathrm{index}} := [\sqrt{L}\,\ell^{\mathrm{sort}}]_{\ell^{\mathrm{ptr}}}$; $\ell^{\mathrm{ptr}} := \ell^{\mathrm{ptr}} - 1$; $D := D \cup \{\mathrm{index}\}$
6:   else if $[u^{\mathrm{sort}}]_{u^{\mathrm{ptr}}} < m$ then   (smallest undecided upper bound is violated)
7:     Set the corresponding $[c]_{\mathrm{index}} := [\sqrt{L}\,u^{\mathrm{sort}}]_{u^{\mathrm{ptr}}}$; $u^{\mathrm{ptr}} := u^{\mathrm{ptr}} + 1$; $D := D \cup \{\mathrm{index}\}$
8:   else
9:     break   (no constraints are violated)
10:  end if
11:  $m := \|c\|_2^2 \cdot \|\sqrt{L}\,c\|_1^{-1}$   (update $m$ as in (9))
12: end while
13: Set $[c]_i := \sqrt{L_i}\, m$ for all $i \notin D$ and Return $c$, $p = \frac{\sqrt{L}\,c}{\|\sqrt{L}\,c\|_1}$, $v = \frac{\|\sqrt{L}\,c\|_1^2}{\|c\|_2^2}$
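Condition (9) admits a compact alternative implementation: the solution has the clipped form $c(m)_i = \min(\max(\sqrt{L_i}\,m, \ell_i), u_i)$ at the fixed point $m = \|c\|_2^2 / \|\sqrt{L}\,c\|_1$. The sketch below is our own reformulation, not the paper's Algorithm 4; it finds the fixed point by bisection (O(n) work per bisection step) and returns a solution satisfying (9). It assumes strictly positive lower bounds:

```python
import numpy as np

def safe_sampling(ell, u, L, iters=100):
    """Solve (7) via the fixed-point characterization (9):
    c_i(m) = clip(sqrt(L_i)*m, ell_i, u_i),  m = ||c||_2^2 / ||sqrt(L)c||_1.
    Bisection on h(m) = g(m) - m, where g(m) = ||c(m)||_2^2 / ||sqrt(L)c(m)||_1.
    Assumes 0 < ell <= u componentwise (our own reformulation, not Alg. 4)."""
    sqL = np.sqrt(L)

    def c_of(m):
        return np.clip(sqL * m, ell, u)

    def g(m):
        c = c_of(m)
        return np.sum(c ** 2) / np.sum(sqL * c)

    m_max = np.max(u / sqL)          # beyond this point c(m) = u, g is constant
    lo, hi = 0.0, max(m_max, g(m_max)) + 1.0   # h(lo) > 0 and h(hi) < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > mid:
            lo = mid
        else:
            hi = mid
    m = 0.5 * (lo + hi)
    c = c_of(m)
    p = sqL * c / np.sum(sqL * c)
    v = np.sum(sqL * c) ** 2 / np.sum(c ** 2)
    return p, c, v

# Example 3.1: the safe solution is c = (2, 2), i.e. uniform sampling.
p, c, v = safe_sampling(np.array([1.0, 2.0]), np.array([2.0, 3.0]), np.ones(2))
```

On Example 3.1 this returns $p = (1/2, 1/2)$ with worst-case value $v = 2$, matching the theoretical optimum of the example.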
Competitive Ratio. We now compare the proposed sampling distribution $\hat p_k$ with the optimal sampling solution in hindsight. We know that if the true (gradient) vector $\bar c \in C_k$ were given to us, then the corresponding optimal probability distribution would be $p^\star(\bar c) = \frac{\sqrt{L}\,\bar c}{\|\sqrt{L}\,\bar c\|_1}$ (Example 2.3). Thus, for this $\bar c$ we can now analyze the ratio $\frac{V(\hat p_k, \bar c)}{V(p^\star(\bar c), \bar c)}$. As we are interested in the worst-case ratio among all possible candidates $\bar c \in C_k$, we define

$$\rho_k := \max_{c \in C_k} \frac{V(\hat p, c)}{V(p^\star(c), c)}. \qquad (10)$$

Lemma 3.5. Let $w_k := \min_{c \in C_k} \frac{\|\sqrt{L}\,c\|_1^2}{\|c\|_2^2}$. Then $L_{\min} \le w_k \le v_k$, and $\rho_k \le \frac{v_k}{w_k} \big({\le} \frac{v_k}{L_{\min}}\big)$.

Lemma 3.6. Let $\gamma \ge 1$. If $[C_k]_i \cap \gamma [C_k]_i \ne \emptyset$ and $\gamma^{-1} [C_k]_i \cap [C_k]_i \ne \emptyset$ for all $i \in [n]$ (here $[C_k]_i$ denotes the projection on the $i$-th coordinate), then $\rho_k \le \gamma^4$.

These two lemmas provide bounds on the competitive ratio. Whilst Lemma 3.6 relies on a relative accuracy condition, Lemma 3.5 can always be applied. However, the corresponding minimization problem is non-convex. Note that knowledge of $\rho_k$ is not needed to run the algorithm.
4 Example Safe Gradient Bounds
In this section, we argue that for a large class of objective functions of interest in machine learning,
suitable safe upper and lower bounds `, u on the gradient along every coordinate direction can be
estimated and maintained efficiently during optimization. A similar argument can be given for the
efficient approximation of component wise gradient norms in finite sum objective based stochastic
gradient optimization.
As the guiding example, we will here showcase the training of generalized linear models (GLMs) as
e.g. in regression, classification and feature selection. These models are formulated in terms of a
given data matrix A ? Rd?n with columns ai ? Rd for i ? [n].
Coordinate Descent - GLMs with Arbitrary Regularizers. Consider general objectives of the
form f(x) := h(Ax) + Σ_{i=1}^n ψ_i([x]_i) with an arbitrary convex separable regularizer term given
by the ψ_i : R → R for i ∈ [n]. A key example is when h : R^d → R describes the least-squares
regression objective h(Ax) = ½‖Ax − b‖₂² for a b ∈ R^d. Using that this h is twice differentiable
with ∇²h(Ax) = I_d, it is easy to see that we can track the evolution of all gradient entries, when
performing CD steps, as follows:

    ∇_i f(x_{k+1}) − ∇_i f(x_k) = γ_k ⟨a_i, a_{i_k}⟩,   ∀ i ≠ i_k,   (11)
for i_k being the coordinate changed in step k (here we also used the separability of the regularizer).
Therefore, all gradient changes can be tracked exactly if the inner products of all datapoints are
available, or approximately if those inner products can be upper and lower bounded. For computational efficiency, in our experiments we simply use the Cauchy-Schwarz bound |⟨a_i, a_{i_k}⟩| ≤ ‖a_i‖·‖a_{i_k}‖. This
results in safe upper and lower bounds [ℓ_{k+1}]_i ≤ ∇_i f(x_{k+1}) ≤ [u_{k+1}]_i for all inactive coordinates
i ≠ i_k. (For the active coordinate i_k itself one observes the true value without uncertainty.) These
bounds can be updated in linear time O(n) in every iteration.
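As a concrete illustration of the tracking rule (11) combined with the Cauchy-Schwarz estimate, the following NumPy sketch maintains safe per-coordinate gradient intervals during CD on a least-squares objective. It is an illustrative simplification, not the authors' reference code: it tracks signed gradients (initialized to (−∞, ∞) rather than magnitude bounds) and samples coordinates uniformly as a stand-in for the adaptive sampler.

```python
import numpy as np

def cd_with_safe_bounds(A, b, steps=200, seed=0):
    """Coordinate descent on f(x) = 0.5*||Ax - b||^2 while maintaining safe
    per-coordinate gradient bounds l <= grad_i f(x) <= u via rule (11) and
    the Cauchy-Schwarz estimate |<a_i, a_{i_k}>| <= ||a_i|| * ||a_{i_k}||."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    x = np.zeros(n)
    col_norms = np.linalg.norm(A, axis=0)   # ||a_i||, computed once
    L = col_norms ** 2                      # coordinate-wise Lipschitz constants
    l = np.full(n, -np.inf)                 # no gradient information initially
    u = np.full(n, np.inf)
    for _ in range(steps):
        i = rng.integers(n)                 # uniform stand-in for the adaptive sampler
        g_i = A[:, i] @ (A @ x - b)         # exact gradient of the active coordinate
        gamma = -g_i / L[i]                 # exact coordinate minimization step
        x[i] += gamma
        # (11): grad_j changes by exactly gamma*<a_j, a_i> for j != i,
        # which lies in [-delta_j, +delta_j] by Cauchy-Schwarz.
        delta = abs(gamma) * col_norms * col_norms[i]   # O(n) work per iteration
        l = l - delta
        u = u + delta
        l[i] = u[i] = 0.0                   # the exact step zeroes the active gradient
    return x, l, u

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x, l, u = cd_with_safe_bounds(A, b)
```

By construction the intervals remain valid at every iterate, since each inactive gradient moves by exactly γ⟨a_j, a_i⟩ for the quadratic objective.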
For general smooth h (again with arbitrary separable regularizers ψ_i), (11) can readily be extended to
hold [32, Lemma 4.1], the inner product change term becoming ⟨a_i, ∇²h(Ax̃)a_{i_k}⟩ instead, when
assuming h is twice-differentiable. Here x̃ will be an element of the line segment [x_k, x_{k+1}].
Stochastic Gradient Descent - GLMs. We now present a similar result for finite sum problems (5)
for the use in SGD based optimization, that is f(x) := (1/n) Σ_{i=1}^n f_i(x) = (1/n) Σ_{i=1}^n h_i(a_i^⊤ x).
Lemma 4.1. Consider f : R^d → R as above, with twice differentiable h_i : R → R. Let x_k, x_{k+1} ∈
R^d denote two successive iterates of SGD, i.e. x_{k+1} := x_k − γ_k a_{i_k} ∇h_{i_k}(a_{i_k}^⊤ x_k) =: x_k + η_k a_{i_k}.
Then there exists x̃ ∈ R^d on the line segment between x_k and x_{k+1}, x̃ ∈ [x_k, x_{k+1}], with

    ∇f_i(x_{k+1}) − ∇f_i(x_k) = η_k ∇²h_i(a_i^⊤ x̃) ⟨a_i, a_{i_k}⟩ a_i,   ∀ i ≠ i_k.   (12)
This leads to safe upper and lower bounds for the norms of the partial gradients, [ℓ_k]_i ≤ ‖∇f_i(x_k)‖₂ ≤
[u_k]_i, that can be updated in linear time O(n), analogous to the coordinate case discussed above.4
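For the SGD case, a sketch under simplifying assumptions (square loss, so h_i'' ≡ 1 and the change term in (12) is exact; uniform sampling as a stand-in for the adaptive sampler) can maintain the norm bounds [ℓ_k]_i ≤ ‖∇f_i(x_k)‖₂ ≤ [u_k]_i by tracking a safe interval for each margin a_i^⊤x − b_i:

```python
import numpy as np

def sgd_with_norm_bounds(A, b, lr=0.02, steps=200, seed=0):
    """SGD on the least-squares sum f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.
    Maintains safe bounds [l]_i <= ||grad f_i(x)||_2 <= [u]_i: for square loss
    h_i'' = 1, so by (12) each margin a_i^T x - b_i moves by exactly
    eta * <a_i, a_{i_k}>, which Cauchy-Schwarz bounds in [-delta_i, +delta_i]."""
    rng = np.random.default_rng(seed)
    d, n = A.shape                      # columns a_i of A are the n datapoints
    x = np.zeros(d)
    norms = np.linalg.norm(A, axis=0)   # ||a_i||
    m = A.T @ x - b                     # exact margins at x_0
    m_lo, m_hi = m.copy(), m.copy()     # safe interval per margin
    for _ in range(steps):
        i = rng.integers(n)             # uniform stand-in for the adaptive sampler
        eta = -lr * (A[:, i] @ x - b[i])    # x_{k+1} = x_k + eta * a_i
        x += eta * A[:, i]
        delta = abs(eta) * norms * norms[i]
        m_lo, m_hi = m_lo - delta, m_hi + delta
        m_lo[i] = m_hi[i] = A[:, i] @ x - b[i]   # active margin observed exactly
    # ||grad f_i(x)||_2 = |margin_i| * ||a_i||; bound |margin| over [m_lo, m_hi].
    straddles = (m_lo <= 0) & (m_hi >= 0)
    abs_lo = np.where(straddles, 0.0, np.minimum(np.abs(m_lo), np.abs(m_hi)))
    abs_hi = np.maximum(np.abs(m_lo), np.abs(m_hi))
    return x, abs_lo * norms, abs_hi * norms

rng = np.random.default_rng(3)
A = rng.standard_normal((12, 6))
b = rng.standard_normal(6)
x, l, u = sgd_with_norm_bounds(A, b)
```

The interval arithmetic mirrors the coordinate case: every inactive margin moves by an amount bounded via Cauchy-Schwarz, while the sampled datapoint's margin is observed exactly.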
We note that there are many other ways to track safe gradient bounds for relevant machine learning problems, including possibly tighter ones. We here only illustrate the simplest variants,
highlighting the fact that our new sampling procedure works for any safe bounds ℓ, u.
Computational Complexity. In this section, we have demonstrated how safe upper and lower
bounds ℓ, u on the gradient information can be obtained for GLMs, and argued that these bounds can
be updated in time O(n) per iteration of CD and SGD. The computation of the proposed sampling
takes O(n log n) time (Theorem 3.4). Hence, the introduced overhead in Algorithm 2 compared
to fixed sampling (Algorithm 3) is of the order O(n log n) in every iteration. The computation of
one coordinate of the gradient, ∇_{i_k} f(x_k), takes time Θ(d) for general data matrices. Hence, when
d = Ω(n), the introduced overhead reduces to O(log n) per iteration.
5 Empirical Evaluation
In this section we evaluate the empirical performance of our proposed adaptive sampling scheme on
relevant machine learning tasks. In particular, we illustrate performance on generalized linear models
with L1 and L2 regularization, of the form (5),

    min_{x∈R^d} (1/n) Σ_{i=1}^n h_i(a_i^⊤ x) + λ·r(x).   (13)
We use square loss, squared hinge loss as well as logistic loss for the data fitting terms h_i, and
‖x‖₁ and ‖x‖₂² for the regularizer r(x). The datasets used in the evaluation are rcv1, real-sim and
news20.5 The rcv1 dataset consists of 20,242 samples with 47,236 features, real-sim contains 72,309
datapoints and 20,958 features and news20 contains 19,996 datapoints and 1,355,191 features. For
all datasets we use unnormalized features with all the non-zero entries set to 1 (bag-of-words features).
By real-sim' and rcv1' we denote a subset of the data chosen by randomly selecting 10,000 features
and 10,000 datapoints. By news20' we denote a subset of the data chosen by randomly selecting
15% of the features and 15% of the datapoints. A regularization parameter λ = 0.1 is used for all
experiments.
Our results show the evolution of the optimization objective over time or number of epochs (an epoch
corresponding to n individual updates). To compute safe lower and upper bounds we use the methods
presented in Section 4 with no special initialization, i.e. ℓ_0 = 0_n, u_0 = ∞_n.

Coordinate Descent. In Figure 2 we compare the effect of the fixed stepsize γ_k = 1/(Ln) (denoted
as "small") vs. the time varying optimal stepsize (denoted as "big") as discussed in Section 2.
Results are shown for optimal sampling p*_k (with optimal stepsize γ_k(p*_k), cf. Example 2.3), our
proposed sampling p̂_k (with optimal stepsize γ_k(p̂_k) = v_k⁻¹, cf. (7)) and uniform sampling (with
optimal stepsize γ_k(p_L) = 1/(Ln), as here L = L·I_n, cf. Example 2.2). As the experiment aligns
with theory, confirming the advantage of the varying "big" stepsizes, we only show the results for
Algorithms 1-3 in the remaining plots.
Performance for squared hinge loss, as well as logistic regression with L1 and L2 regularization is
presented in Figure 3 and Figure 4 respectively. In Figures 5 and 6 we report the iteration complexity
vs. accuracy as well as timing vs. accuracy results on the full dataset for coordinate descent with
square loss and L1 (Lasso) and L2 regularization (Ridge).

Theoretical Sampling Quality. As part of the CD performance results in Figures 2-6 we include
an additional evolution plot on the bottom of each figure to illustrate the values v_k which determine
the stepsize (γ̂_k = v_k⁻¹) for the proposed Algorithm 2 (blue) and the optimal stepsizes of Algorithm 1
(black) which rely on the full gradient information. The plots show the normalized values v_k/Tr[L], i.e.
the relative improvement over L_i-based importance sampling. The results show that despite only
relying on very loose safe gradient bounds, the proposed adaptive sampling is able to strongly benefit
from the additional information.
4 Here we use the efficient representation ∇f_i(x) = ξ(x)·a_i for ξ(x) ∈ R.
5 All data are available at www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
[Figure 2: (CD, square loss) Fixed vs. adaptive sampling strategies, and dependence on stepsizes, with "big" γ_k = v_k⁻¹ and "small" γ_k = 1/Tr[L]. Curves: Uniform, Proposed and Optimal, each with big and small steps; top rows show f(x_k) vs. epochs, bottom rows the values v_k. Panels: (a) rcv1', L1 reg.; (b) rcv1', L2 reg.]

[Figure 3: (CD, squared hinge loss) Function value vs. number of iterations for optimal stepsize γ_k = v_k⁻¹. Panels: (a) rcv1', L1 reg.; (b) real-sim', L2 reg.]

[Figure 4: (CD, logistic loss) Function value vs. number of iterations for different sampling strategies (Uniform, Proposed, Optimal). Bottom: evolution of the value v_k which determines the optimal stepsize (γ̂_k = v_k⁻¹); the plots show the normalized values v_k/Tr[L], i.e. the relative improvement over L_i-based importance sampling. Panels: (a) rcv1', L1 reg.; (b) rcv1', L2 reg.; (c) real-sim', L1 reg.; (d) real-sim', L2 reg.]

[Figure 5: (CD, square loss) Function value vs. number of iterations on the full datasets.]

[Figure 6: (CD, square loss) Function value vs. clock time on the full datasets. (Data for the optimal sampling omitted, as this strategy is not competitive time-wise.)]

[Figure 7: (SGD, square loss) Function value vs. number of iterations. Panels: (a) rcv1, L1 reg.; (b) real-sim, L1 reg.; (c) real-sim', L1 reg.; (d) real-sim', L2 reg.]

[Figure 8: (SGD, square loss) Function value vs. number of iterations. Panel: (a) news20', L1 reg.]

[Figure 9: (SGD, square loss) Function value vs. clock time. Panel: (a) news20', L1 reg.]
Stochastic Gradient Descent. Finally, we also evaluate the performance of our approach when
used within SGD with L1 and L2 regularization and square loss. In Figures 7-8 we report the
iteration complexity vs. accuracy results and in Figure 9 the timing vs. accuracy results. The time
units in Figures 6 and 9 are not directly comparable, as the experiments were conducted on different
machines.
We observe that on all three datasets SGD with the optimal sampling performs only slightly better than
uniform sampling. This is in contrast with the observations for CD, where the optimal sampling yields
a significant improvement. Consequently, the effect of the proposed sampling is less pronounced in
the three SGD experiments.
Summary. The main findings of our experimental study can be summarized as follows:
• Adaptive importance sampling significantly outperforms fixed importance sampling
in iterations and time. The results show that (i) convergence in terms of iterations is almost
as good as for the optimal (but not efficiently computable) gradient-based sampling and
(ii) the introduced computational overhead is small enough to outperform fixed importance
sampling in terms of total computation time.
• Adaptive sampling requires adaptive stepsizes. The adaptive stepsize strategies of Algorithms 1 and 2 allow for much faster convergence than conservative fixed-stepsize strategies.
In the experiments, the measured value v_k was always significantly below the worst case
estimate, in alignment with the observed convergence.
• Very loose safe gradient bounds are sufficient. Even the bounds derived from the very
naïve gradient information obtained by estimating scalar products resulted in significantly
better sampling than using no gradient information at all. Further, no initialization of the
gradient estimates is needed (at the beginning of the optimization process the proposed
adaptive method performs close to the fixed sampling but accelerates after just one epoch).
6 Conclusion
In this paper we propose a safe adaptive importance sampling scheme for CD and SGD algorithms.
We argue that optimal gradient-based sampling is theoretically well justified. To make the computation
of the adaptive sampling distribution computationally tractable, we rely on safe lower and upper
bounds on the gradient. However, in contrast to previous approaches, we use these bounds in a novel
way: in each iteration, we formulate the problem of picking the optimal sampling distribution as a
convex optimization problem and present an efficient algorithm to compute the solution. The novel
sampling provably performs better than any fixed importance sampling, a guarantee which could
not be established for previous samplings that were also derived from safe lower and upper bounds.
The computational cost of the proposed scheme is of the order O(n log n) per iteration; this is on
many problems comparable with the cost to evaluate a single component (coordinate, sum-structure)
of the gradient, and the scheme can thus be implemented at no extra computational cost. This is
verified by timing experiments on real datasets.
We discussed one simple method to track the gradient information in GLMs during optimization.
However, we feel that the machine learning community could profit from further research in that
direction, for instance by investigating how such safe bounds can efficiently be maintained on more
complex models. Our approach can immediately be applied when the tracking of the gradient is
delegated to other machines in a distributed setting, like for instance in [1].
References
[1] Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, and Yoshua Bengio. Variance
Reduction in SGD by Distributed Importance Sampling. arXiv.org, February 2015.
[2] Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, and Yang Yuan. Even Faster Accelerated Coordinate
Descent Using Non-Uniform Sampling. In ICML 2017 - Proceedings of the 34th International Conference
on Machine Learning, pages 1110-1119. June 2016.
[3] Atsushi Shibagaki and Ichiro Takeuchi. Stochastic Primal Dual Coordinate Method with Non-Uniform
Sampling Based on Optimality Violations. arXiv.org, October 2017.
[4] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[5] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic Dual Coordinate Ascent with Adaptive
Probabilities. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning,
February 2015.
[6] Dominik Csiba and Peter Richtárik. Importance Sampling for Minibatches. arXiv.org, February 2016.
[7] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise coordinate optimization.
The Annals of Applied Statistics, 1(2):302-332, December 2007.
[8] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization Paths for Generalized Linear
Models via Coordinate Descent. Journal of Statistical Software, 33(1):1-22, 2010.
[9] Wenjiang J. Fu. Penalized regressions: The bridge versus the lasso. Journal of Computational and
Graphical Statistics, 7(3):397-416, 1998.
[10] Xi He and Martin Takáč. Dual Free Adaptive Mini-batch SDCA for Empirical Risk Minimization.
arXiv.org, October 2015.
[11] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S Sathiya Keerthi, and S Sundararajan. A Dual Coordinate
Descent Method for Large-scale Linear SVM. In ICML 2008 - the 25th International Conference on
Machine Learning, pages 408-415, New York, USA, 2008. ACM Press.
[12] Hidetoshi Komiya. Elementary proof for Sion's minimax theorem. Kodai Math. J., 11(1):5-7, 1988.
[13] Simon Lacoste-Julien, Mark Schmidt, and Francis Bach. A simpler approach to obtaining an O(1/t)
convergence rate for projected stochastic subgradient descent. arXiv.org, December 2012.
[14] Jun Liu, Zheng Zhao, Jie Wang, and Jieping Ye. Safe Screening with Variational Inequalities and Its
Application to Lasso. In ICML 2014 - Proceedings of the 31st International Conference on Machine
Learning, pages 289-297, 2014.
[15] Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, and Joseph Salmon. Gap Safe screening rules for
sparsity enforcing penalties. JMLR, 2017.
[16] Deanna Needell, Rachel Ward, and Nathan Srebro. Stochastic Gradient Descent, Weighted Sampling, and
the Randomized Kaczmarz algorithm. In NIPS 2014 - Advances in Neural Information Processing Systems
27, pages 1017-1025, 2014.
[17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to
stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[18] Yurii Nesterov. Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems. SIAM
Journal on Optimization, 22(2):341-362, 2012.
[19] Yurii Nesterov and Sebastian U. Stich. Efficiency of the accelerated coordinate descent method on
structured optimization problems. SIAM Journal on Optimization, 27(1):110-123, 2017.
[20] Julie Nutini, Mark W Schmidt, Issam H Laradji, Michael P Friedlander, and Hoyt A Koepke. Coordinate
Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection. In ICML, pages
1632-1641, 2015.
[21] Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K. Dokania, and Simon Lacoste-Julien.
Minding the gaps for block Frank-Wolfe optimization of structured SVMs. In Proceedings of the 33rd
International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages
593-602. JMLR.org, 2016.
[22] Guillaume Papa, Pascal Bianchi, and Stéphan Clémençon. Adaptive Sampling for Incremental Optimization
Using Stochastic Gradient Descent. ALT 2015 - 26th International Conference on Algorithmic Learning
Theory, pages 317-331, 2015.
[23] Dmytro Perekrestenko, Volkan Cevher, and Martin Jaggi. Faster Coordinate Descent via Adaptive
Importance Sampling. In AISTATS 2017 - Proceedings of the 20th International Conference on Artificial
Intelligence and Statistics, volume 54, pages 869-877. PMLR, 20-22 Apr 2017.
[24] Zheng Qu, Peter Richtárik, and Tong Zhang. Randomized Dual Coordinate Ascent with Arbitrary Sampling.
arXiv.org, November 2014.
[25] Peter Richtárik and Martin Takáč. On optimal probabilities in stochastic coordinate descent methods.
Optimization Letters, 10(6):1233-1243, 2016.
[26] Mark Schmidt, Reza Babanezhad, Mohamed Ahmed, Aaron Defazio, Ann Clifton, and Anoop Sarkar.
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields. In AISTATS
2015 - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics,
volume 38, pages 819-828. PMLR, 09-12 May 2015.
[27] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal Estimated
Sub-Gradient Solver for SVM. Mathematical Programming, 127(1):3-30, October 2010.
[28] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic Methods for l1-regularized Loss Minimization. JMLR,
12:1865-1892, June 2011.
[29] Shai Shalev-Shwartz and Tong Zhang. Stochastic Dual Coordinate Ascent Methods for Regularized Loss
Minimization. JMLR, 14:567-599, February 2013.
[30] Maurice Sion. On general minimax theorems. Pacific Journal of Mathematics, 8(1):171-176, 1958.
[31] S. U. Stich, C. L. Müller, and B. Gärtner. Variable metric random pursuit. Mathematical Programming,
156(1):549-579, Mar 2016.
[32] Sebastian U. Stich, Anant Raj, and Martin Jaggi. Approximate steepest coordinate descent. In Doina
Precup and Yee Whye Teh, editors, ICML 2017 - Proceedings of the 34th International Conference on
Machine Learning, volume 70, pages 3251-3259. PMLR, 06-11 Aug 2017.
[33] Thomas Strohmer and Roman Vershynin. A randomized Kaczmarz algorithm with exponential convergence.
Journal of Fourier Analysis and Applications, 15(2):262, 2008.
[34] Paul Tseng and Sangwoon Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1):387-423, 2009.
[35] Stephen J Wright. Coordinate descent algorithms. Mathematical Programming, 151(1):3-34, 2015.
[36] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss
minimization. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning,
volume 37, pages 1-9. PMLR, 07-09 Jul 2015.
[37] Rong Zhu. Gradient-based sampling: An adaptive importance sampling for least-squares. In NIPS -
Advances in Neural Information Processing Systems 29, pages 406-414. 2016.
Variational Walkback: Learning a Transition
Operator as a Stochastic Recurrent Net
Anirudh Goyal
MILA, Université de Montréal
[email protected]
Surya Ganguli
Stanford University
[email protected]
Nan Rosemary Ke
MILA, École Polytechnique de Montréal
[email protected]
Yoshua Bengio
MILA, Université de Montréal
[email protected]
Abstract
We propose a novel method to directly learn a stochastic transition operator whose
repeated application provides generated samples. Traditional undirected graphical
models approach this problem indirectly by learning a Markov chain model whose
stationary distribution obeys detailed balance with respect to a parameterized energy
function. The energy function is then modified so the model and data distributions
match, with no guarantee on the number of steps required for the Markov chain to
converge. Moreover, the detailed balance condition is highly restrictive: energy
based models corresponding to neural networks must have symmetric weights,
unlike biological neural circuits. In contrast, we develop a method for directly
learning arbitrarily parameterized transition operators capable of expressing nonequilibrium stationary distributions that violate detailed balance, thereby enabling
us to learn more biologically plausible asymmetric neural networks and more general non-energy based dynamical systems. The proposed training objective, which
we derive via principled variational methods, encourages the transition operator to
"walk back" (prefer to revert its steps) in multi-step trajectories that start at datapoints, as quickly as possible back to the original data points. We present a series
of experimental results illustrating the soundness of the proposed approach, Variational Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets,
demonstrating superior samples compared to earlier attempts to learn a transition
operator. We also show that although each rapid training trajectory is limited to a
finite but variable number of steps, our transition operator continues to generate
good samples well past the length of such trajectories, thereby demonstrating the
match of its non-equilibrium stationary distribution to the data distribution. Source
Code: http://github.com/anirudh9119/walkback_nips17
1
Introduction
A fundamental goal of unsupervised learning involves training generative models that can understand
sensory data and employ this understanding to generate, or sample new data and make new inferences.
In machine learning, the vast majority of probabilistic generative models that can learn complex probability distributions over data fall into one of two classes: (1) directed graphical models, corresponding
to a finite time feedforward generative process (e.g. variants of the Helmholtz machine (Dayan
et al., 1995) like the Variational Auto-Encoder (VAE) (Kingma and Welling, 2013; Rezende et al.,
2014)), or (2) energy function based undirected graphical models, corresponding to sampling from a
stochastic process whose equilibrium stationary distribution obeys detailed balance with respect to the
energy function (e.g. various Boltzmann machines (Salakhutdinov and Hinton, 2009)). This detailed
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
balance condition is highly restrictive: for example, energy-based undirected models corresponding
to neural networks require symmetric weight matrices and very specific computations which may not
match well with what biological neurons or analog hardware could compute.
In contrast, biological neural circuits are capable of powerful generative dynamics enabling us to
model the world and imagine new futures. Cortical computation is highly recurrent and therefore its
generative dynamics cannot simply map to the purely feed-forward, finite time generative process of
a directed model. Moreover, the recurrent connectivity of biological circuits is not symmetric, and so
their generative dynamics cannot correspond to sampling from an energy-based undirected model.
Thus, the asymmetric biological neural circuits of our brain instantiate a type of stochastic dynamics
arising from the repeated application of a transition operator, whose stationary distribution over
neural activity patterns is a non-equilibrium distribution that does not obey detailed balance with
respect to any energy function. Despite these fundamental properties of brain dynamics, machine
learning approaches to training generative models currently lack effective methods to model complex
data distributions through the repeated application of a transition operator that is not indirectly specified
through an energy function, but rather is directly parameterized in ways that are inconsistent with the
existence of any energy function. Indeed the lack of such methods constitutes a glaring gap in the
pantheon of machine learning methods for training probabilistic generative models.
The fundamental goal of this paper is to provide a step to filling such a gap by proposing a novel
method to learn such directly parameterized transition operators, thereby providing an empirical
method to control the stationary distributions of non-equilibrium stochastic processes that do not
obey detailed balance, and match these distributions to data. The basic idea underlying our training
approach is to start from a training example, and iteratively apply the transition operator while
gradually increasing the amount of noise being injected (i.e., temperature). This heating process
yields a trajectory that starts from the data manifold and walks away from the data due to the heating
and to the mismatch between the model and the data distribution. Similarly to the update of a
denoising autoencoder, we then modify the parameters of the transition operator so as to make the
reverse of this heated trajectory more likely under a reverse cooling schedule. This encourages the
transition operator to generate stochastic trajectories that evolve towards the data distribution, by
learning to walk back the heated trajectories starting at data points. This walkback idea had been
introduced for generative stochastic networks (GSNs) and denoising autoencoders (Bengio et al.,
2013b) as a heuristic, and without temperature annealing. Here, we derive the specific objective
function for learning the parameters through a principled variational lower bound, hence we call our
training method variational walkback (VW). Despite the fact that the training procedure involves
walking back a set of trajectories that last a finite, but variable number of time-steps, we find
empirically that this yields a transition operator that continues to generate sensible samples for many
more time-steps than are used to train, demonstrating that our finite time training procedure can sculpt
the non-equilibrium stationary distribution of the transition operator to match the data distribution.
We show how VW emerges naturally from a variational derivation, with the need for annealing
arising out of the objective of making the variational bound as tight as possible. We then describe
experimental results illustrating the soundness of the proposed approach on the MNIST, CIFAR-10,
SVHN and CelebA datasets. Intriguingly, we find that our finite time VW training process involves
modifications of variational methods for training directed graphical models, while our potentially
asymptotically infinite generative sampling process corresponds to non-equilibrium generalizations
of energy based undirected models. Thus VW goes beyond the two disparate model classes of
undirected and directed graphical models, while simultaneously incorporating good ideas from each.
2 The Variational Walkback Training Process
Our goal is to learn a stochastic transition operator $p_T(s'|s)$ such that its repeated application yields
samples from the data manifold. Here T reflects an underlying temperature, which we will modify
during the training process. The transition operator is further specified by other parameters which
must be learned from data. When K steps are chosen to generate a sample, the generative process
has joint probability $p(s_0^K) = p(s_K) \prod_{t=1}^{K} p_{T_t}(s_{t-1}|s_t)$, where $T_t$ is the temperature at step t. We
first give an intuitive description of our learning algorithm before deriving it via variational methods
in the next section. The basic idea, as illustrated in Fig. 1 and Algorithm 1, is to follow a walkback
* A transition operator maps the previous-state distribution to a next-state distribution, and is implemented by
a stochastic transformation which, from the previous state of a Markov chain, generates the next state.
Figure 1: Variational WalkBack framework. The generative process is represented in the blue arrows
with the sequence of $p_{T_t}(s_{t-1}|s_t)$ transitions. The destructive forward process starts at a datapoint
(from $q_{T_0}(s_0)$) and gradually heats it through applications of $q_{T_t}(s_t|s_{t-1})$. Larger temperatures on
the right correspond to a flatter distribution, so the whole destructive forward process maps the data
distribution to a Gaussian and the creation process operates in reverse.
strategy similar to that introduced in Alain and Bengio (2014). In particular, imagine a destructive
process $q_{T_{t+1}}(s_{t+1}|s_t)$ (red arrows in Fig. 1), which starts from a data point $s_0 = x$ and evolves it
stochastically to obtain a trajectory $s_0, \ldots, s_K \equiv s_0^K$, i.e., $q(s_0^K) = q(s_0) \prod_{t=1}^{K} q_{T_t}(s_t|s_{t-1})$, where
$q(s_0)$ is the data distribution. Note that the p and q chains will share the same parameters for the
transition operator (one going backwards and one forward) but they start from different priors for
their first step: $q(s_0)$ is the data distribution while $p(s_K)$ is a flat factorized prior (e.g. Gaussian).
The training procedure trains the transition operator pT to make reverse transitions of the destructive
process more likely. For this reason we index time so the destructive process operates forward in time,
while the reverse generative process operates backwards in time, with the data distribution occurring
at t = 0. In particular, we need only train the transition operator to reverse time by 1-step at each step,
making it unnecessary to solve a deep credit assignment problem by performing backpropagation
through time across multiple walk-back steps. Overall, the destructive process generates trajectories
that walk away from the data manifold, and the transition operator pT learns to walkback these
trajectories to sculpt the stationary distribution of pT at T = 1 to match the data distribution.
Because we choose qT to have the same parameters as pT , they have the same transition operator but
not the same joint over the whole sequence because of differing initial distributions for each trajectory.
We also choose to increase temperature with time in the destructive process, following a temperature
schedule $T_1 \le \cdots \le T_K$. Thus the forward destructive (reverse generative) process corresponds to a
heating (cooling) protocol. This training procedure is similar in spirit to DAEs (Vincent et al., 2008)
or NET (Sohl-Dickstein et al., 2015) but with one major difference: the destructive process in these
works corresponds to the addition of random noise which knows nothing about the current generative
process during training. To understand why tying together destruction and creation may be a good
idea, consider the special case in which pT corresponds to a stochastic process whose stationary
distribution obeys detailed balance with respect to the energy function of an undirected graphical
model. Learning any such model involves two fundamental goals: the model must place probability
mass (i.e. lower the energy function) where the data is located, and remove probability mass (i.e.
raise the energy function) elsewhere. Probability modes where there is no data are known as spurious
modes, and a fundamental goal of learning is to hunt down these spurious modes and remove them.
Making the destructive process identical to the transition operator to be learned is motivated by the
notion that the destructive process should then efficiently explore the spurious modes of the current
transition operator. The walkback training will then destroy these modes. In contrast, in DAEs and
NETs, since the destructive process corresponds to the addition of unstructured noise that knows
nothing about the generative process, it is not clear that such an agnostic destructive process will
efficiently seek out the spurious modes of the reverse, generative process.
We chose the annealing schedule empirically to minimize training time. The generative process
starts by sampling a state $s_K$ from a broad Gaussian $p^*(s_K)$, whose variance is initially equal to
the total data variance $\sigma^2_{\max}$ (but can be later adapted to match the final samples from the inference
trajectories). Then we sample from $p_{T_{\max}}(s_{K-1}|s_K)$, where $T_{\max}$ is a high enough temperature
so that the resultant injected noise can move the state across the whole domain of the data. The
injected noise used to simulate the effects of finite temperature has variance linearly proportional to
temperature. Thus if $\sigma^2$ is the equivalent noise injected by the transition operator $p_T$ at $T = 1$, we
choose $T_{\max} = \sigma^2_{\max}/\sigma^2$ to achieve the goal of the first sample $s_{K-1}$ being able to move across the entire
range of the data distribution. Then we successively cool the temperature as we sample 'previous'
states $s_{t-1}$ according to $p_T(s_{t-1}|s_t)$, with T reduced by a factor of 2 at each step, followed by n
steps at temperature 1. This cooling protocol requires the number of steps to be
$K = \log_2 T_{\max} + n, \qquad (1)$

in order to go from $T = T_{\max}$ to $T = 1$ in K steps. We choose K from a random distribution.
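A minimal sketch of this temperature schedule (the helper name and interface are hypothetical; it mirrors Eq. (1) and the doubling rule used during the heating process of Algorithm 1):

```python
import math

def temperature_schedule(sigma2_max, sigma2, n):
    """Build the VW heating schedule (hypothetical helper, following Eq. (1)).

    T_max = sigma2_max / sigma2 and K = ceil(log2(T_max)) + n steps:
    the first steps run at temperature 1, then temperature doubles each
    step until it reaches T_max at the hot end of the trajectory.
    """
    T_max = sigma2_max / sigma2
    K = math.ceil(math.log2(T_max)) + n
    temps = []
    T = 1.0
    for t in range(1, K + 1):
        temps.append(T)
        if t > n:          # after the n temperature-1 steps, heat by factor 2
            T *= 2.0
    return K, temps
```

For example, with total data variance 16 and unit operator noise, `temperature_schedule(16.0, 1.0, n=2)` yields K = 6 steps with temperatures [1, 1, 1, 2, 4, 8], finishing with T doubled up to $T_{\max} = 16$.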
Thus the training procedure trains pT to rapidly transition from a simple Gaussian distribution to
the data distribution in a finite but variable number of steps. Ideally, this training procedure should
then indirectly create a transition operator pT at T = 1 whose repeated iteration samples the data
distribution with a relatively rapid mixing time. Interestingly, this intuitive learning algorithm for a
recurrent dynamical system, formalized in Algorithm 1, can be derived in a principled manner from
variational methods that are usually applied to directed graphical models, as we see next.
Algorithm 1 VariationalWalkback($\theta$)
Train a generative model associated with a transition operator $p_T(s|s')$ at temperature T (temperature
1 for sampling from the actual model), parameterized by $\theta$. This transition operator injects noise of
variance $T\sigma^2$ at each step, where $\sigma^2$ is the noise level at temperature 1.
Require: Transition operator $p_T(s|s')$ from which one can both sample and compute the gradient
of $\log p_T(s|s')$ with respect to parameters $\theta$, given $s$ and $s'$.
Require: Precomputed $\sigma^2_{\max}$, initially data variance (or squared diameter).
Require: $N_1 > 1$, the number of initial temperature-1 steps of a q trajectory (or ending a p trajectory).
repeat
  Set $p^*$ to be a Gaussian with mean and variance of the data.
  $T_{\max} \leftarrow \sigma^2_{\max}/\sigma^2$
  Sample $n$ as a uniform integer between 0 and $N_1$
  $K \leftarrow \lceil \log_2 T_{\max} \rceil + n$
  Sample $x \sim$ data (or equivalently sample a minibatch to parallelize computation and process
  each element of the minibatch independently)
  Let $s_0 = x$ and initial temperature $T = 1$; initialize $\mathcal{L} = 0$
  for $t = 1$ to $K$ do
    Sample $s_t \sim p_T(s|s_{t-1})$
    Increment $\mathcal{L} \leftarrow \mathcal{L} + \log p_T(s_{t-1}|s_t)$
    Update parameters with log likelihood gradient $\partial \log p_T(s_{t-1}|s_t)/\partial\theta$
    If $t > n$, increase temperature with $T \leftarrow 2T$
  end for
  Increment $\mathcal{L} \leftarrow \mathcal{L} + \log p^*(s_K)$
  Update mean and variance of $p^*$ to match the accumulated 1st and 2nd moment statistics of the
  samples of $s_K$
until convergence (monitoring $\mathcal{L}$ on a validation set and doing early stopping)
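A toy, self-contained rendering of Algorithm 1's heating loop (an identity-mean Gaussian operator stands in for the learned $p_T$, and no parameter update is performed, so this only illustrates the trajectory and bound bookkeeping; all names are hypothetical):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def heated_trajectory(x, sigma2, sigma2_max, n):
    """Run the Algorithm-1 heating loop on one datapoint (toy sketch).

    The 'operator' here is an identity-mean Gaussian injecting noise of
    variance T*sigma2 at temperature T; a real model would parameterize
    the mean and update its parameters from the accumulated gradient.
    """
    T_max = sigma2_max / sigma2
    K = math.ceil(math.log2(T_max)) + n
    s, T, L = x, 1.0, 0.0
    for t in range(1, K + 1):
        s_next = s + rng.normal(0.0, math.sqrt(T * sigma2), size=s.shape)
        # accumulate log p_T(s_{t-1} | s_t) for the walkback (cooling) direction
        L += -0.5 * np.sum((s - s_next) ** 2) / (T * sigma2) \
             - 0.5 * s.size * math.log(2 * math.pi * T * sigma2)
        s = s_next
        if t > n:
            T *= 2.0
    return s, L, K
```

Running `heated_trajectory(np.zeros(3), 1.0, 16.0, 2)` walks one 3-dimensional point through K = 6 heated steps and returns the hot endpoint together with the accumulated walkback log-likelihood.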
3 Variational Derivation of Walkback
The marginal probability of a data point $s_0$ at the end of the K-step generative cooling process is

$p(s_0) = \sum_{s_1^K} p_{T_0}(s_0|s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1}|s_t) \right) p^*(s_K) \qquad (2)$

where $s_1^K = (s_1, s_2, \ldots, s_K)$ and $v = s_0$ is a visible variable in our generative process, while the
cooling trajectory that led to it can be thought of as a latent, hidden variable $h = s_1^K$. Recall the
decomposition of the marginal log-likelihood via a variational lower bound,

$\ln p(v) = \ln \sum_h p(v|h)p(h) = \underbrace{\sum_h q(h|v) \ln \frac{p(v,h)}{q(h|v)}}_{\mathcal{L}} + D_{KL}[q(h|v)\,\|\,p(h|v)]. \qquad (3)$
Here $\mathcal{L}$ is the variational lower bound which motivates the proposed training procedure, and $q(h|v)$ is
a variational approximation to $p(h|v)$. Applying this decomposition to $v = s_0$ and $h = s_1^K$, we find

$\ln p(s_0) = \sum_{s_1^K} q(s_1^K|s_0) \ln \frac{p(s_0|s_1^K)\,p(s_1^K)}{q(s_1^K|s_0)} + D_{KL}[q(s_1^K|s_0)\,\|\,p(s_1^K|s_0)]. \qquad (4)$
Similarly to the EM algorithm, we aim to approximately maximize the log-likelihood with a 2-step
procedure. Let $\theta_p$ be the parameters of the generative model p and $\theta_q$ be the parameters of the
approximate inference procedure q. Before seeing the next example we have $\theta_q = \theta_p$. Then in the
first step we update $\theta_p$ towards maximizing the variational bound $\mathcal{L}$, for example by a stochastic
gradient descent step. In the second step, we update $\theta_q$ by setting $\theta_q \leftarrow \theta_p$, with the objective to
reduce the KL term in the above decomposition. See Sec. 3.1 below regarding conditions for the
tightness of the bound, which may not be perfect, yielding a possibly biased gradient when we force
the constraint $\theta_p = \theta_q$. We continue iterating this procedure, with training examples $s_0$. We can
obtain an unbiased Monte-Carlo estimator of $\mathcal{L}$ as follows from a single trajectory:
$\mathcal{L}(s_0) \approx \sum_{t=1}^{K} \ln \frac{p_{T_t}(s_{t-1}|s_t)}{q_{T_t}(s_t|s_{t-1})} + \ln p^*(s_K) \qquad (5)$

with respect to $\theta_p$, where $s_0$ is sampled from the data distribution $q_{T_0}(s_0)$, and the single sequence $s_1^K$
is sampled from the heating process $q(s_1^K|s_0)$. We are making the reverse of heated trajectories more
likely under the cooling process, leading to Algorithm 1. Such variational bounds have been used
successfully in many learning algorithms in the past, such as the VAE (Kingma and Welling, 2013),
except that they use an explicitly different set of parameters for p and q. Some VAE variants (Sønderby
et al., 2016; Kingma et al., 2016) however mix the p-parameters implicitly in forming q, by using the
likelihood gradient to iteratively form the approximate posterior.
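The single-trajectory estimator of Eq. (5) can be sketched as follows (function names and the callable interfaces are hypothetical; the per-step log-probability functions are assumed given):

```python
def walkback_bound(logp, logq, logp_star, traj, temps):
    """Monte-Carlo estimate of the variational bound L(s_0) from Eq. (5).

    traj  : [s_0, s_1, ..., s_K], one heated trajectory sampled from q
    temps : [T_1, ..., T_K], the temperature used at each step
    logp(s_prev, s_next, T) -> log p_T(s_prev | s_next)  (cooling direction)
    logq(s_next, s_prev, T) -> log q_T(s_next | s_prev)  (heating direction)
    logp_star(s)            -> log p*(s), the flat prior at the hot end
    """
    L = 0.0
    for t in range(1, len(traj)):
        L += logp(traj[t - 1], traj[t], temps[t - 1])
        L -= logq(traj[t], traj[t - 1], temps[t - 1])
    return L + logp_star(traj[-1])
```

With constant dummy log-probabilities (log p = 0.5, log q = 0.25 per step, log p* = -1) and a K = 2 trajectory, the bound evaluates to 2 × 0.25 − 1 = −0.5, which makes the bookkeeping easy to check by hand.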
3.1 Tightness of the variational lower bound
As seen in (4), the gap between $\mathcal{L}(s_0)$ and $\ln p(s_0)$ is controlled by $D_{KL}[q(s_1^K|s_0)\,\|\,p(s_1^K|s_0)]$, and is
therefore tight when the distribution of the heated trajectory, starting from a point $s_0$, matches the
posterior distribution of the cooled trajectory ending at $s_0$. Explicitly, this KL divergence is given by

$D_{KL} = \sum_{s_1^K} q(s_1^K|s_0) \ln \left[ \frac{p(s_0)}{p^*(s_K)} \prod_{t=1}^{K} \frac{q_{T_t}(s_t|s_{t-1})}{p_{T_t}(s_{t-1}|s_t)} \right]. \qquad (6)$
As the heating process q unfolds forward in time, while the cooling process p unfolds backwards in
time, we introduce the time reversal of the transition operator $p_T$, denoted by $p^R_T$, as follows. Under
repeated application of the transition operator $p_T$, state s settles into a stationary distribution $\pi_T(s)$
at temperature T. The probability of observing a transition $s_t \to s_{t-1}$ under $p_T$ in its stationary state
is then $p_T(s_{t-1}|s_t)\,\pi_T(s_t)$. The time-reversal $p^R_T$ is the transition operator that makes the reverse
transition equally likely for all state pairs, and therefore obeys

$p_T(s_{t-1}|s_t)\,\pi_T(s_t) = p^R_T(s_t|s_{t-1})\,\pi_T(s_{t-1}) \qquad (7)$

for all pairs of states $s_{t-1}$ and $s_t$. It is well known that $p^R_T$ is a valid stochastic transition operator and
has the same stationary distribution $\pi_T(s)$ as $p_T$. Furthermore, the process $p_T$ obeys detailed balance
if and only if it is invariant under time-reversal, so that $p_T = p^R_T$.
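As a quick numerical illustration of Eq. (7) (not from the paper), one can construct the time-reversal of a small Markov chain and check that it is a valid operator with the same stationary distribution:

```python
import numpy as np

# Column-stochastic transition matrix: P[i, j] = p(next = i | current = j)
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])

# Stationary distribution: the eigenvector of P with eigenvalue 1
w, V = np.linalg.eig(P)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Time reversal via Eq. (7): P_R[j, i] = P[i, j] * pi[j] / pi[i]
P_R = (P * pi[np.newaxis, :]).T / pi[np.newaxis, :]

assert np.allclose(P_R.sum(axis=0), 1.0)   # valid stochastic operator
assert np.allclose(P_R @ pi, pi)           # same stationary distribution
```

Every two-state chain obeys detailed balance, so in this tiny example the chain is invariant under time-reversal (`P_R` equals `P`); an asymmetric multi-state chain would generally give `P_R != P`.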
To better understand the KL divergence in (6), at each temperature $T_t$, we use relation (7) to replace
the cooling process $p_{T_t}$, which occurs backwards in time, with its time-reversal, unfolding forward in
time, at the expense of introducing ratios of stationary probabilities. We also exploit the fact that q
and p are the same transition operator. With these substitutions in (6), we find
$D_{KL} = \sum_{s_1^K} q(s_1^K|s_0) \ln \prod_{t=1}^{K} \frac{p_{T_t}(s_t|s_{t-1})}{p^R_{T_t}(s_t|s_{t-1})} + \sum_{s_1^K} q(s_1^K|s_0) \ln \left[ \frac{p(s_0)}{p^*(s_K)} \prod_{t=1}^{K} \frac{\pi_{T_t}(s_t)}{\pi_{T_t}(s_{t-1})} \right]. \qquad (8)$
The first term in (8) is simply the KL divergence between the distribution over heated trajectories and
the time reversal of the cooled trajectories. Since the heating (q) and cooling (p) processes are tied,
this KL divergence is 0 if and only if $p_{T_t} = p^R_{T_t}$ for all t. This time-reversal invariance requirement
for vanishing KL divergence is equivalent to the transition operator $p_T$ obeying detailed balance at all
temperatures.
Now intuitively, the second term can be made small in the limit where K is large and the temperature
sequence is annealed slowly. To see why, note we can write the ratio of probabilities in this term as

$\frac{p(s_0)}{\pi_{T_1}(s_0)} \cdot \frac{\pi_{T_1}(s_1)}{\pi_{T_2}(s_1)} \cdots \frac{\pi_{T_{K-1}}(s_{K-1})}{\pi_{T_K}(s_{K-1})} \cdot \frac{\pi_{T_K}(s_K)}{p^*(s_K)}, \qquad (9)$

which is similar in shape (but arising in a different context) to the product of probability ratios
computed for annealed importance sampling (Neal, 2001) and reverse annealed importance sampling (Burda et al., 2014). Here it is manifest that, under slow incremental annealing schedules, we
are comparing probabilities of the same state under slightly different distributions, so all ratios are
close to 1. For example, under many steps, with slow annealing, the generative process approximately
reaches its stationary distribution, $p(s_0) \approx \pi_{T_1}(s_0)$.
This slow annealing to go from $p^*(s_K)$ to $p(s_0)$ corresponds to the quasistatic limit in statistical
physics, where the work required to perform the transformation is equal to the free energy difference
between states. To go faster, one must perform excess work, above and beyond the free energy difference, and this excess work is dissipated as heat into the surrounding environment. By writing the distributions in terms of energies and free energies, $\pi_{T_t}(s_t) \propto e^{-E(s_t)/T_t}$, $p^*(s_K) = e^{-[E_K(s_K) - F_K]}$,
and $p(s_0) = e^{-[E_0(s_0) - F_0]}$, one can see that the second term in the KL divergence is closely related
to average heat dissipation in a finite time heating process (see e.g. (Crooks, 2000)).
This intriguing connection between the size of the gap in a variational lower bound, and the excess
heat dissipation in a finite time heating process opens the door to exploiting a wealth of work in
statistical physics for finding optimal thermodynamic paths that minimize heat dissipation (Schmiedl
and Seifert, 2007; Sivak and Crooks, 2012; Gingrich et al., 2016), which may provide new ideas
to improve variational inference. In summary, tightness of the variational bound can be achieved
if: (1) The transition operator of p approximately obeys detailed balance, and (2) the temperature
annealing is done slowly over many steps. And intriguingly, the magnitude of the looseness of the
bound is related to two physical quantities: (1) the degree of irreversibility of the transition operator p,
as measured by the KL divergence between p and its time reversal pR , and (2) the excess physical
work, or equivalently, excess heat dissipated, in performing the heating trajectory.
To check, post-hoc, potential looseness of the variational lower bound, we can measure the degree of
irreversibility of $p_T$ by estimating the KL divergence $D_{KL}(p_T(s'|s)\,\pi_T(s) \,\|\, p_T(s|s')\,\pi_T(s'))$, which
is 0 if and only if $p_T$ obeys detailed balance and is therefore time-reversal invariant. This quantity
can be estimated by $\frac{1}{K}\sum_{t=1}^{K} \ln \frac{p_T(s_{t+1}|s_t)}{p_T(s_t|s_{t+1})}$, where $s_1^K$ is a long sequence sampled by repeatedly
applying transition operator $p_T$ from a draw $s_1 \sim \pi_T$. If this quantity is strongly positive (negative)
then forward transitions are more (less) likely than reverse transitions, and the process $p_T$ is not
time-reversal invariant. This estimated KL divergence can be normalized by the corresponding
entropy to get a relative value (with 3.6% measured on a trained model, as detailed in Appendix).
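A toy version of this estimator (illustrative only, not from the paper; the two-state chain below obeys detailed balance, so the estimate should hover near zero):

```python
import numpy as np

rng = np.random.default_rng(0)

# Column-stochastic 2-state operator; all 2-state chains obey detailed balance.
P = np.array([[0.8, 0.4],
              [0.2, 0.6]])

# Sample a long trajectory s_1..s_K by repeated application of the operator.
K = 200_000
s = np.empty(K, dtype=int)
s[0] = 0
for t in range(1, K):
    s[t] = rng.random() < P[1, s[t - 1]]  # probability of moving to state 1

# (1/K) * sum_t ln [ P(s_{t+1}|s_t) / P(s_t|s_{t+1}) ]
logratio = np.log(P[s[1:], s[:-1]]) - np.log(P[s[:-1], s[1:]])
estimate = logratio.mean()
assert abs(estimate) < 0.01  # near zero for a detailed-balance chain
```

For a genuinely irreversible operator (possible only with three or more states) the same estimator would drift away from zero, signaling looseness of the bound.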
3.2 Estimating log likelihood via importance sampling
We can derive an importance sampling estimate of the negative log-likelihood by the following
procedure. For each training example x, we sample a large number of destructive paths (as in
Algorithm 1). We then use the following formulation to estimate the log-likelihood $\log p(x)$ via

$\log p(x) \approx \log \mathbb{E}_{s_1^K \sim q(s_1^K|s_0=x)} \left[ \frac{p_{T_0}(s_0 = x|s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1}|s_t) \right) p^*(s_K)}{q_{T_1}(s_1|s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t|s_{t-1})} \right] \qquad (10)$

where the expectation is estimated by averaging the bracketed importance weights over the sampled
destructive paths.
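In practice this expectation is computed over many sampled paths in log space; a minimal numerically-stable sketch (the helper name is hypothetical):

```python
import numpy as np

def is_log_likelihood(log_w):
    """log E[w] from per-path log importance weights, as in Eq. (10).

    Each entry of log_w is log[p-chain joint / q-chain joint] for one sampled
    destructive path; the max-shift (log-sum-exp trick) keeps the
    exponentials from overflowing or underflowing.
    """
    log_w = np.asarray(log_w, dtype=float)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

For instance, two paths with weights 1 and 3 give log((1 + 3)/2) = log 2, and extremely negative log-weights (e.g. two paths at -1000) are averaged without underflowing to -inf.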
3.3 VW transition operators and their convergence
The VW approach allows considerable freedom in choosing transition operators, obviating the need
for specifying them indirectly through an energy function. Here we consider Bernoulli and isotropic
Gaussian transition operators for binary and real-valued data respectively. The form of the stochastic
state update imitates a discretized version of the Langevin differential equation. The Bernoulli
transition operator computes the element-wise probability as $\rho = \mathrm{sigmoid}\!\left(\frac{(1-\alpha) \odot s_{t-1} + \alpha \odot F_\rho(s_{t-1})}{T_t}\right)$.
The Gaussian operator computes a conditional mean and standard deviation via $\mu = (1-\alpha) \odot s_{t-1} +
\alpha \odot F_\mu(s_{t-1})$ and $\sigma = T_t \log(1 + e^{F_\sigma(s_{t-1})})$. Here the F functions can be arbitrary parametrized
functions, such as a neural net, and $T_t$ is the temperature at time step t.
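A numpy sketch of these two operators (illustrative assumptions: $\alpha$ defaults to 0.5 and the F networks are replaced by identity maps just to make the snippet runnable; in the paper they are learned parametrized functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli_step(s_prev, T, alpha=0.5, F_rho=lambda s: s):
    """One Bernoulli transition: rho = sigmoid(((1-a)*s + a*F_rho(s)) / T)."""
    rho = sigmoid(((1 - alpha) * s_prev + alpha * F_rho(s_prev)) / T)
    return (rng.random(s_prev.shape) < rho).astype(float)

def gaussian_step(s_prev, T, alpha=0.5, F_mu=lambda s: s, F_sigma=lambda s: s):
    """One Gaussian transition with mu and sigma as in Sec. 3.3."""
    mu = (1 - alpha) * s_prev + alpha * F_mu(s_prev)
    sigma = T * np.log1p(np.exp(F_sigma(s_prev)))  # softplus scaled by T
    return mu + sigma * rng.normal(size=s_prev.shape)
```

Repeatedly composing either step, with the temperature schedule described above, gives the heated (or cooled) trajectories used during training and generation.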
A natural question is when will the finite time VW training process learn a transition operator whose
stationary distribution matches the data distribution, so that repeated sampling far beyond the training
time continues to yield data samples. To partially address this, we prove the following theorem:
Proposition 1. If p has enough capacity, training data and training time, with slow enough annealing
and a small departure from reversibility so p can match q, then at convergence of VW training, the
transition operator pT at T = 1 has the data generating distribution as its stationary distribution.
A proof can be found in the Appendix, but the essential intuition is that if the finite time generative
process converges to the data distribution at multiple different VW walkback time-steps, then it
remains on the data distribution for all future time at T = 1. We cannot always guarantee the
preconditions of this theorem but we find experimentally that its essential outcome holds in practice.
4 Related Work
A variety of learning algorithms can be cast in the framework of Fig. 1. For example, for directed
graphical models like VAEs (Kingma and Welling, 2013; Rezende et al., 2014), DBNs (Hinton et al.,
2006), and Helmholtz machines in general, q corresponds to a recognition model, transforming data
to a latent space, while p corresponds to a generative model that goes from latent to visible data in
a finite number of steps. None of these directed models are designed to learn transition operators
that can be iterated ad infinitum, as we do. Moreover, learning such models involves a complex,
deep credit assignment problem, limiting the number of unobserved latent layers that can be used to
generate data. Similar issues of limited trainable depth in a finite time feedforward generative process
apply to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which also further
eschew the goal of specifically assigning probabilities to data points. Our method circumvents this
deep credit assignment problem by providing training targets at each time-step; in essence each past
time-step of the heated trajectory constitutes a training target for the future output of the generative
operator pT , thereby obviating the need for backpropagation across multiple steps. Similarly, unlike
VW, Generative Stochastic Networks (GSN) (Bengio et al., 2014) and the DRAW (Gregor et al.,
2015) also require training iterative operators by backpropagating across multiple computational
steps.
VW is similar in spirit to DAE (Bengio et al., 2013b), and NET approaches (Sohl-Dickstein et al.,
2015) but it retains two crucial differences. First, in each of these frameworks, q corresponds to
a very simple destruction process in which unstructured Gaussian noise is injected into the data.
This agnostic destruction process has no knowledge of underlying generative process p that is to
be learned, and therefore cannot be expected to efficiently explore spurious modes, or regions of
space, unoccupied by data, to which p assigns high probability. VW has the advantage of using a
high-temperature version of the model p itself as part of the destructive process, and so should be
better than random noise injection at finding these spurious modes. A second crucial difference is
that VW ties weights of the transition operator across time-steps, thereby enabling us to learn a bona
fide transition operator that can be iterated well beyond the training time, unlike DAEs and NET.
There is also a related recent approach to learning a transition operator with a denoising cost,
developed in parallel, called Infusion training (Bordes et al., 2017), which tries to reconstruct the
target data in the chain, instead of the previous step in the destructive chain.
5 Experiments
VW is evaluated on four datasets: MNIST, CIFAR10 (Krizhevsky and Hinton, 2009), SVHN (Netzer
et al., 2011) and CelebA (Liu et al., 2015). The MNIST, SVHN and CIFAR10 datasets were used
as is except for uniform noise added to MNIST and CIFAR10, as per Theis et al. (2016), and the
aligned and cropped version of CelebA was scaled from 218 x 178 pixels to 78 x 64 pixels and
center-cropped at 64 x 64 pixels (Liu et al., 2015). We used the Adam optimizer (Kingma and Ba,
2014) and the Theano framework (Al-Rfou et al., 2016). More details are in Appendix and code for
training and generation is at http://github.com/anirudh9119/walkback_nips17.
Table 1 compares with published NET results on CIFAR.
Image Generation. Figures 3, 5, 6, 7 and 8 (see supplementary section) show VW samples on each of
the datasets. For MNIST, real-valued views of the data are modeled. Image Inpainting. We clamped
the bottom part of CelebA test images (for each step during sampling), and ran it through the model.
Figure 1 (see Supplementary section) shows the generated conditional samples.
Model                                      bits/dim
NET (Sohl-Dickstein et al., 2015)          5.40
VW (20 steps)                              5.20
Deep VAE                                   < 4.54
VW (30 steps)                              4.40
DRAW (Gregor et al., 2015)                 < 4.13
ResNet VAE with IAF (Kingma et al., 2016)  3.11

Table 1: Comparisons on CIFAR10, test set average number of bits/data dimension (lower is better)
6 Discussion
6.1 Summary of results
Our main advance involves using variational inference to learn recurrent transition operators that
can rapidly approach the data distribution and then be iterated much longer than the training time
while still remaining on the data manifold. Our innovations enabling us to achieve this involved: (a)
tying weights across time, (b) tying the destruction and generation process together to efficiently
destroy spurious modes, (c) using the past of the destructive process to train the future of the creation
process, thereby circumventing issues with deep credit assignment (like NET), (d) introducing an
aggressive temperature annealing schedule to rapidly approach the data distribution (e.g. NET takes
1000 steps while VW only takes 30 steps to do so), and (e) introducing variable trajectory lengths
during training to encourage the generator to stay on the data manifold for times longer than the
training sequence length.
Indeed, it is often difficult to sample from recurrent neural networks for many more time steps than
the duration of their training sequences, especially non-symmetric networks that could exhibit chaotic
activity. Transition operators learned by VW can be stably sampled for exceedingly long times; for
example, in experiments (see supplementary section) we trained our model on CelebA for 30 steps,
while at test time we sampled for 100000 time-steps. Overall, our method of learning a transition
operator outperforms previous attempts at learning transition operators (i.e. VAE, GSN and NET)
using a local learning rule.
Overall, we introduced a new approach to learning non-energy-based transition operators which
inherits advantages from several previous generative models, including a training objective that
requires rapidly generating the data in a finite number of steps (as in directed models), re-using the
same parameters for each step (as in undirected models), directly parametrizing the generator (as in
GANs and DAEs), and using the model itself to quickly find its own spurious modes (the walk-back
idea). We also anchor the algorithm in a variational bound and show how its analysis suggests to use
the same transition operator for the destruction or inference process, and the creation or generation
process, and to use a cooling schedule during generation, and a reverse heating schedule during
inference.
6.2 New bridges between variational inference and non-equilibrium statistical physics
We connected the variational gap to physical notions like reversibility and heat dissipation. This novel
bridge between variational inference and concepts like excess heat dissipation in non-equilibrium
statistical physics, could potentially open the door to improving variational inference by exploiting a
wealth of work in statistical physics. For example, physical methods for finding optimal thermodynamic paths that minimize heat dissipation (Schmiedl and Seifert, 2007; Sivak and Crooks, 2012;
Gingrich et al., 2016), could potentially be exploited to tighten lowerbounds in variational inference.
Moreover, motivated by the relation between the variational gap and reversibility, we verified empirically that the model converges towards an approximately reversible chain (see Appendix) making the
variational bound tighter.
6.3 Neural weight asymmetry
A fundamental aspect of our approach is that we can train stochastic processes that need not exactly
obey detailed balance, yielding access to a larger and potentially more powerful space of models. In
particular, this enables us to relax the weight symmetry constraint of undirected graphical models
corresponding to neural networks, yielding a more brain like iterative computation characteristic
of asymmetric biological neural circuits. Our approach thus avoids the biologically implausible
requirement of weight transport (Lillicrap et al., 2014) which arises as a consequence of imposing
weight symmetry as a hard constraint. With VW, this hard constraint is removed, although the
training procedure itself may converge towards more symmetry. Such approach towards symmetry is
consistent with both empirical observations (Vincent et al., 2010) and theoretical analysis (Arora et al.,
2015) of auto-encoders, for which symmetric weights are associated with minimizing reconstruction
error.
6.4 A connection to the neurobiology of dreams
The learning rule underlying VW, when applied to an asymmetric stochastic neural network, yields a
speculative, but intriguing connection to the neurobiology of dreams. As discussed in Bengio et al.
(2015), spike-timing dependent plasticity (STDP), a plasticity rule found in the brain (Markram
and Sakmann, 1995), corresponds to increasing the probability of configurations towards which the
network intrinsically likes to go (i.e., remembering observed configurations), while reverse-STDP
corresponds to forgetting or unlearning the states towards which the network goes (which potentially
may occur during sleep).
In the VW update applied to a neural network, the resultant learning rule does indeed strengthen
synapses for which a presynaptic neuron is active before a postsynaptic neuron in the generative
cooling process (STDP), and it weakens synapses in which a postsynaptic neuron is active before a
presynaptic neuron in the heated destructive process (reverse STDP). If, as suggested, the neurobiological function of sleep involves re-organizing memories and in particular unlearning spurious modes
through reverse-STDP, then the heating destructive process may map to sleep states, in which the
brain is hunting down and destroying spurious modes. In contrast, the cooling generative dynamics
of VW may map to awake states in which STDP reinforces neural trajectories moving towards
observed sensory data. Under this mapping, the relative incoherence of dreams compared to reality
is qualitatively consistent with the heated destructive dynamics of VW, compared to the cooled
transition operator in place during awake states.
6.5 Future work
Many questions remain open in terms of analyzing and extending VW. Of particular interest is the
incorporation of latent layers. The state at each step would now include both visible x and latent
h components. Essentially the same procedure can be run, except for the chain initialization, with
s0 = (x, h0 ), where h0 is a sample from the posterior distribution of h given x.
Another interesting direction is to replace the log-likelihood objective at each step by a GAN-like
objective, thereby avoiding the need to inject noise independently on each of the pixels, during
each transition step, and allowing latent variable sampling to inject the required high-level decisions
associated with the transition. Based on the earlier results from (Bengio et al., 2013a), sampling in
the latent space rather than in the pixel space should allow for better generative models and even
better mixing between modes (Bengio et al., 2013a).
Overall, our work takes a step toward filling a relatively open niche in the machine learning literature on
directly training non-energy-based iterative stochastic operators, and we hope that the many possible
extensions of this approach could lead to a rich new class of more powerful brain-like machine
learning models.
Acknowledgments
The authors would like to thank Benjamin Scellier, Ben Poole, Tim Cooijmans, Philemon Brakel,
Gaëtan Marceau Caron, and Alex Lamb for their helpful feedback and discussions, as well as
NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and
Compute Canada for computing resources. S.G. would like to thank the Simons, McKnight, James S.
McDonnell, and Burroughs Wellcome Foundations and the Office of Naval Research for support. Y.B
would also like to thank Geoff Hinton for an analogy which is used in this work, while discussing
contrastive divergence (personal communication). The authors would also like to express a debt of
gratitude towards those who contributed to theano over the years (as it is no longer maintained),
making it such a great tool.
References
Al-Rfou, R., Alain, G., Almahairi, A., et al. (2016). Theano: A Python framework for fast
computation of mathematical expressions. CoRR, abs/1605.02688.
Alain, G. and Bengio, Y. (2014). What regularized auto-encoders learn from the data-generating
distribution. J. Mach. Learn. Res., 15(1):3563–3593.
Arora, S., Liang, Y., and Ma, T. (2015). Why are deep nets reversible: a simple theory, with
implications for training. Technical report, arXiv:1511.05653.
Bengio, Y., Mesnard, T., Fischer, A., Zhang, S., and Wu, Y. (2015). An objective function for STDP.
CoRR, abs/1509.05936.
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013a). Better mixing via deep representations.
Bengio, Y., Thibodeau-Laufer, É., Alain, G., and Yosinski, J. (2014). Deep generative stochastic networks trainable by backprop. In Proceedings of the 31st International Conference on Machine Learning - Volume 32, ICML'14, pages II-226–II-234. JMLR.org.
Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013b). Generalized denoising auto-encoders as
generative models. In NIPS'2013, arXiv:1305.6663.
Bordes, F., Honari, S., and Vincent, P. (2017). Learning to generate samples from noise through
infusion training. CoRR, abs/1703.06975.
Burda, Y., Grosse, R. B., and Salakhutdinov, R. (2014). Accurate and conservative estimates of MRF
log-likelihood using reverse annealing. CoRR, abs/1412.8566.
Crooks, G. E. (2000). Path-ensemble averages in systems driven far from equilibrium. Physical
review E, 61(3):2361.
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The helmholtz machine. Neural
Comput., 7(5):889–904.
Gingrich, T. R., Rotskoff, G. M., Crooks, G. E., and Geissler, P. L. (2016). Near-optimal protocols in
complex nonequilibrium transformations. Proceedings of the National Academy of Sciences, page
201606273.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and
Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing
Systems, pages 2672–2680.
Gregor, K., Danihelka, I., Graves, A., and Wierstra, D. (2015). Draw: A recurrent neural network for
image generation. arXiv preprint arXiv:1502.04623.
Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets.
Neural Comput., 18(7):1527–1554.
Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980.
Kingma, D. P., Salimans, T., and Welling, M. (2016). Improving variational inference with inverse
autoregressive flow. CoRR, abs/1606.04934.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images.
Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. (2014). Random feedback weights
support learning in deep neural networks. arXiv:1411.0247.
Liu, Z., Luo, P., Wang, X., and Tang, X. (2015). Deep learning face attributes in the wild. In
Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738.
Markram, H. and Sakmann, B. (1995). Action potentials propagating back into dendrites trigger
changes in efficacy. Soc. Neurosci. Abs, 21.
Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2):125–139.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural
images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised
feature learning, volume 2011, page 5.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate
inference in deep generative models. arXiv preprint arXiv:1401.4082.
Salakhutdinov, R. and Hinton, G. (2009). Deep boltzmann machines. In Artificial Intelligence and
Statistics.
Schmiedl, T. and Seifert, U. (2007). Optimal finite-time processes in stochastic thermodynamics.
Physical review letters, 98(10):108301.
Sivak, D. A. and Crooks, G. E. (2012). Thermodynamic metrics and optimal paths. Physical review
letters, 108(19):190602.
Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. (2015). Deep unsupervised
learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585.
Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. (2016). Ladder variational
autoencoders. In Advances in Neural Information Processing Systems 29: Annual Conference
on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages
3738–3746.
Theis, L., van den Oord, A., and Bethge, M. (2016). A note on the evaluation of generative models.
In International Conference on Learning Representations.
Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. (2008). Extracting and composing robust
features with denoising autoencoders. In Proceedings of the 25th international conference on
Machine learning, pages 1096–1103. ACM.
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising
autoencoders: Learning useful representations in a deep network with a local denoising criterion.
J. Machine Learning Res., 11.
Polynomial Codes: an Optimal Design for
High-Dimensional Coded Matrix Multiplication
Qian Yu*, Mohammad Ali Maddah-Ali†, and A. Salman Avestimehr*
*Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA
†Nokia Bell Labs, Holmdel, NJ, USA
Abstract
We consider a large-scale matrix multiplication problem where the computation
is carried out using a distributed system with a master node and multiple worker
nodes, where each worker can store parts of the input matrices. We propose a
computation strategy that leverages ideas from coding theory to design intermediate
computations at the worker nodes, in order to optimally deal with straggling
workers. The proposed strategy, named as polynomial codes, achieves the optimum
recovery threshold, defined as the minimum number of workers that the master
needs to wait for in order to compute the output. This is the first code that
achieves the optimal utilization of redundancy for tolerating stragglers or failures
in distributed matrix multiplication. Furthermore, by leveraging the algebraic
structure of polynomial codes, we can map the reconstruction problem of the final
output to a polynomial interpolation problem, which can be solved efficiently.
Polynomial codes provide order-wise improvement over the state of the art in
terms of recovery threshold, and are also optimal in terms of several other metrics
including computation latency and communication load. Moreover, we extend this
code to distributed convolution and show its order-wise optimality.
1 Introduction
Matrix multiplication is one of the key building blocks underlying many data analytics and machine
learning algorithms. Many such applications require massive computation and storage power to
process large-scale datasets. As a result, distributed computing frameworks such as Hadoop MapReduce [1] and Spark [2] have gained significant traction, as they enable processing of data sizes at the
order of tens of terabytes and more.
As we scale out computations across many distributed nodes, a major performance bottleneck is the
latency in waiting for the slowest nodes, or "stragglers", to finish their tasks [3]. The current approaches
to mitigate the impact of stragglers involve creation of some form of "computation redundancy".
For example, replicating the straggling task on another available node is a common approach to
deal with stragglers (e.g., [4]). However, there have been recent results demonstrating that coding
can play a transformational role for creating and exploiting computation redundancy to effectively
alleviate the impact of stragglers [5, 6, 7, 8, 9]. Our main result in this paper is the development
of optimal codes, named polynomial codes, to deal with stragglers in distributed high-dimensional
matrix multiplication, which also provides order-wise improvement over the state of the art.
More specifically, we consider a distributed matrix multiplication problem where we aim to compute
C = A^T B from input matrices A and B. As shown in Fig. 1, the computation is carried out using
a distributed system with a master node and N worker nodes that can each store a 1/m fraction of A
and a 1/n fraction of B, for some parameters m, n ∈ N+. We denote the stored submatrices at each
worker i ∈ {0, . . . , N − 1} by Ã_i and B̃_i, which can be designed as arbitrary functions of A and B
respectively. Each worker i then computes the product Ã_i^T B̃_i and returns the result to the master.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Overview of the distributed matrix multiplication framework. Coded data are initially stored distributedly at N workers according to data assignment. Each worker computes the product of the two stored matrices
and returns it to the master. By carefully designing the computation strategy, the master can decode given the
computing results from a subset of workers, without having to wait for the stragglers (worker 1 in this example).
By carefully designing the computation strategy at each worker (i.e., designing Ã_i and B̃_i), the master
only needs to wait for the fastest subset of workers before recovering output C, hence mitigating the
impact of stragglers. Given a computation strategy, we define its recovery threshold as the minimum
number of workers that the master needs to wait for in order to compute C. In other words, if any
subset of the workers with size no smaller than the recovery threshold finish their jobs, the master is
able to compute C. Given this formulation, we are interested in the following main problem.
What is the minimum possible recovery threshold for distributed matrix multiplication? Can
we find an optimal computation strategy that achieves the minimum recovery threshold, while
allowing efficient decoding of the final output at the master node?
There have been two computing schemes proposed earlier for this problem that leverage ideas from
coding theory. The first one, introduced in [5] and extended in [10], injects redundancy in only one
of the input matrices using maximum distance separable (MDS) codes [11].1 We illustrate this
approach, referred to as one dimensional MDS code (1D MDS code), using the example shown in
Fig. 2a, where we aim to compute C = A^T B using 3 workers that can each store half of A and the
entire B. The 1D MDS code evenly divides A along the column into two submatrices denoted by A0
and A1 , encodes them into 3 coded matrices A0 , A1 , and A0 + A1 , and then assigns them to the 3
workers. This design allows the master to recover the final output given the results from any 2 of
the 3 workers, hence achieving a recovery threshold of 2. More generally, one can show that the 1D
MDS code achieves a recovery threshold of
K_1D-MDS ≜ N − N/n + m = Θ(N).    (1)
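As a concrete illustration of the 3-worker example above (a sketch, not code from the paper; the matrix sizes and values are arbitrary), the 1D MDS encoding and its 2-out-of-3 recovery can be simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
s, r, t = 4, 6, 5                      # A is s x r, B is s x t
A = rng.integers(0, 10, (s, r))
B = rng.integers(0, 10, (s, t))

# Split A along its columns into halves A0, A1 and encode them as in the 1D MDS code.
A0, A1 = A[:, : r // 2], A[:, r // 2 :]
stored = [A0, A1, A0 + A1]             # worker i stores stored[i] and the entire B

results = {i: stored[i].T @ B for i in range(3)}   # product computed by each worker

def decode(received):
    """Recover A^T B from the products of any 2 of the 3 workers."""
    got = dict(received)
    if 0 not in got:                   # A0^T B = (A0 + A1)^T B - A1^T B
        got[0] = got[2] - got[1]
    if 1 not in got:                   # A1^T B = (A0 + A1)^T B - A0^T B
        got[1] = got[2] - got[0]
    return np.vstack([got[0], got[1]])

# Any 2 of the 3 workers suffice, matching the recovery threshold of 2.
for pair in [(0, 1), (0, 2), (1, 2)]:
    C = decode({i: results[i] for i in pair})
    assert np.array_equal(C, A.T @ B)
```

Here n = 1 (each worker stores all of B) and m = 2, so equation (1) gives N − N/n + m = 3 − 3 + 2 = 2, consistent with the simulation.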
An alternative computing scheme was recently proposed in [10] for the case of m = n, referred to as
the product code, which instead injects redundancy in both input matrices. This coding technique has
also been proposed earlier in the context of Fault Tolerant Computing in [12, 13]. As demonstrated in
Fig. 2b, the product code aligns workers in a √N-by-√N layout. A is divided along the columns into
m submatrices, encoded using a (√N, m) MDS code into √N coded matrices, and then assigned to the
√N columns of workers. Similarly, √N coded matrices of B are created and assigned to the
√N rows. Given the property of MDS codes, the master can decode an entire row after obtaining any
m results in that row; likewise for the columns. Consequently, the master can recover the final output
using a peeling algorithm, iteratively decoding the MDS codes on rows and columns until the output
C is completely available. For example, if the 5 computing results A_1^T B_0, A_1^T B_1, (A_0 + A_1)^T B_1,
A_0^T (B_0 + B_1), and A_1^T (B_0 + B_1) are received as demonstrated in Fig. 2b, the master can recover the
1
An (n, k) MDS code is a linear code which transforms k raw inputs to n coded outputs, such that from any
subset of size k of the outputs, the original k inputs can be recovered.
needed results by computing A_0^T B_1 = (A_0 + A_1)^T B_1 − A_1^T B_1 and then A_0^T B_0 = A_0^T (B_0 + B_1) − A_0^T B_1.
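These two subtraction (peeling) steps can be checked numerically. The following sketch (illustrative only, with arbitrary small matrices; not the paper's code) reproduces the recovery of the two missing blocks from the received products:

```python
import numpy as np

rng = np.random.default_rng(1)
s = 4
A0, A1 = rng.integers(0, 10, (s, 3)), rng.integers(0, 10, (s, 3))
B0, B1 = rng.integers(0, 10, (s, 2)), rng.integers(0, 10, (s, 2))

# The five received products from the Fig. 2b example (coded rows/columns use
# A0, A1, A0+A1 for A and B0, B1, B0+B1 for B).
r_A1B0 = A1.T @ B0
r_A1B1 = A1.T @ B1
r_sumA_B1 = (A0 + A1).T @ B1
r_A0_sumB = A0.T @ (B0 + B1)
# (A0 + A1)^T (B0 + B1) is also received but not needed for these two steps.

# Peeling: first decode along the column, then along the row.
A0B1 = r_sumA_B1 - r_A1B1        # A0^T B1 = (A0+A1)^T B1 - A1^T B1
A0B0 = r_A0_sumB - A0B1          # A0^T B0 = A0^T (B0+B1) - A0^T B1

assert np.array_equal(A0B1, A0.T @ B1)
assert np.array_equal(A0B0, A0.T @ B0)
```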
In general, one can show that the product code achieves a recovery threshold of
K_product ≜ 2(m − 1)√N − (m − 1)^2 + 1 = Θ(√N),    (2)
which significantly improves over K1D-MDS .
(a) 1D MDS-code [5] in an example with 3 workers
that can each store half of A and the entire B.
(b) Product code [10] in an example with 9 workers
that can each store half of A and half of B.
Figure 2: Illustration of (a) 1D MDS code, and (b) product code.
In this paper, we show that quite interestingly, the optimum recovery threshold can be far less
than what the above two schemes achieve. In fact, we show that the minimum recovery threshold
does not scale with the number of workers (i.e. Θ(1)). We prove this fact by designing a novel
coded computing strategy, referred to as the polynomial code, which achieves the optimum recovery
threshold of mn, and significantly improves the state of the art. Hence, our main result is as follows.
For a general matrix multiplication task C = A^T B using N workers, where each worker can
store a 1/m fraction of A and a 1/n fraction of B, we propose polynomial codes that achieve the
optimum recovery threshold of
K_poly ≜ mn = Θ(1).    (3)
Furthermore, the polynomial code only requires a decoding complexity that is almost linear in
the input size.
The main novelty and advantage of the proposed polynomial code is that, by carefully designing the
algebraic structure of the encoded submatrices, we ensure that any mn intermediate computations at
the workers are sufficient for recovering the final matrix multiplication product at the master. This
in a sense creates an MDS structure on the intermediate computations, instead of only the encoded
matrices as in prior works. Furthermore, by leveraging the algebraic structure of polynomial codes, we
can then map the reconstruction problem of the final output at the master to a polynomial interpolation
problem (or equivalently Reed-Solomon decoding [14]), which can be solved efficiently [15]. This
mapping also bridges the rich theory of algebraic coding and distributed matrix multiplication.
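To make the interpolation idea concrete, here is a small numerical sketch (not the paper's code: it hard-codes m = n = 2 and works over the reals with NumPy rather than a finite field; the general construction appears later in the paper). Worker i evaluates Ã_i = A_0 + A_1·x_i and B̃_i = B_0 + B_1·x_i^2, so its product is a degree-3 polynomial in x_i whose coefficients are exactly the four blocks A_j^T B_k, recoverable from any mn = 4 of the N workers by polynomial interpolation:

```python
import numpy as np

rng = np.random.default_rng(2)
s = 4
A_blocks = [rng.integers(0, 5, (s, 3)) for _ in range(2)]   # A split column-wise: A0, A1 (m = 2)
B_blocks = [rng.integers(0, 5, (s, 2)) for _ in range(2)]   # B split column-wise: B0, B1 (n = 2)

N, mn = 6, 4
xs = np.arange(1, N + 1)         # one distinct evaluation point per worker

def worker(i):
    # Encoded submatrices evaluated at x_i; the product is
    # sum_{j,k} A_j^T B_k * x_i^(j + 2k), a degree-3 polynomial in x_i.
    At = A_blocks[0] + A_blocks[1] * xs[i]
    Bt = B_blocks[0] + B_blocks[1] * xs[i] ** 2
    return At.T @ Bt

products = [worker(i) for i in range(N)]

# Any mn = 4 workers suffice: interpolating the 4 polynomial coefficients
# recovers the 4 blocks of C = A^T B (a Vandermonde system in the x_i).
survivors = [0, 2, 3, 5]
V = np.vander(xs[survivors].astype(float), mn, increasing=True)
stacked = np.stack([products[i] for i in survivors]).reshape(mn, -1)
coeffs = np.linalg.solve(V, stacked).round().astype(int).reshape(mn, 3, 2)

for j in range(2):
    for k in range(2):
        assert np.array_equal(coeffs[j + 2 * k], A_blocks[j].T @ B_blocks[k])
```

Note how the choice of exponents (x_i^j for A and x_i^(2k) for B) makes every cross-term A_j^T B_k land on a distinct power of x_i, which is what turns decoding into plain interpolation.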
We prove the optimality of polynomial code by showing that it achieves the information theoretic
lower bound on the recovery threshold, obtained by cut-set arguments (i.e., we need at least mn matrix
blocks returned from workers to recover the final output, which exactly have size mn blocks). Hence,
the proposed polynomial code essentially enables a specific computing strategy such that, from any
subset of workers that give the minimum amount of information needed to recover C, the master can
successfully decode the final output. As a by-product, we also prove the optimality of polynomial
code under several other performance metrics considered in previous literature: computation latency
[5, 10], probability of failure given a deadline [9], and communication load [16, 17, 18].
We extend the polynomial code to the problem of distributed convolution [9]. We show that by simply
reducing the convolution problem to matrix multiplication and applying the polynomial code, we
strictly and unboundedly improve the state of the art. Furthermore, by exploiting the computing
structure of convolution, we propose a variation of the polynomial code, which strictly reduces the
recovery threshold even further, and achieves the optimum recovery threshold within a factor of 2.
Finally, we implement and benchmark the polynomial code on an Amazon EC2 cluster. We measure
the computation latency and empirically demonstrate its performance gain under straggler effects.
2 System Model, Problem Formulation, and Main Result
We consider a problem of matrix multiplication with two input matrices A ∈ F_q^{s × r} and B ∈ F_q^{s × t},
for some integers r, s, t and a sufficiently large finite field F_q. We are interested in computing the
product C ≜ A^T B in a distributed computing environment with a master node and N worker nodes,
where each worker can store a 1/m fraction of A and a 1/n fraction of B, for some parameters m, n ∈ N+
(see Fig. 1). We assume at least one of the two input matrices A and B is tall (i.e., s ≥ r or s ≥ t),
because otherwise the output matrix C would be rank deficient and the problem degenerates.
Specifically, each worker i can store two matrices Ã_i ∈ F_q^{s × (r/m)} and B̃_i ∈ F_q^{s × (t/n)}, computed based
on arbitrary functions of A and B respectively. Each worker can compute the product C̃_i ≜ Ã_i^T B̃_i,
and return it to the master. The master waits only for the results from a subset of workers, before
proceeding to recover the final output C given these products using certain decoding functions.2
2.1 Problem Formulation
Given the above system model, we formulate the distributed matrix multiplication problem based on
the following terminology: We define the computation strategy as the 2N functions, denoted by
f = (f_0, f_1, ..., f_{N−1}),    g = (g_0, g_1, ..., g_{N−1}),    (4)
that are used to compute each Ã_i and B̃_i. Specifically,
Ã_i = f_i(A),    B̃_i = g_i(B),    ∀ i ∈ {0, 1, ..., N − 1}.    (5)
For any integer k, we say a computation strategy is k-recoverable if the master can recover C given
the computing results from any k workers. We define the recovery threshold of a computation strategy,
denoted by k(f , g), as the minimum integer k such that computation strategy (f , g) is k-recoverable.
Using the above terminology, we define the following concept:
Definition 1. For a distributed matrix multiplication problem of computing A^T B using N workers
that can each store a 1/m fraction of A and a 1/n fraction of B, we define the optimum recovery threshold,
denoted by K*, as the minimum achievable recovery threshold among all computation strategies, i.e.
K* ≜ min_{f, g} k(f, g).    (6)
The goal of this problem is to find the optimum recovery threshold K*, as well as a computation
strategy that achieves such an optimum threshold.
2.2 Main Result
Our main result is stated in the following theorem:
Theorem 1. For a distributed matrix multiplication problem of computing A^T B using N workers
that can each store a 1/m fraction of A and a 1/n fraction of B, the minimum recovery threshold K* is
K* = mn.    (7)
Furthermore, there is a computation strategy, referred to as the polynomial code, that achieves the
above K* while allowing efficient decoding at the master node, i.e., with complexity equal to that of
polynomial interpolation given mn points.
Remark 1. Compared to the state of the art [5, 10], the polynomial code provides order-wise
improvement in terms of the recovery threshold. Specifically, the recovery thresholds achieved by 1D
MDS code [19] and product code [10] scale linearly with N and √N respectively, while the proposed
polynomial code actually achieves a recovery threshold that does not scale with N . Furthermore,
polynomial code achieves the optimal recovery threshold. To the best of our knowledge, this is the
first optimal design proposed for the distributed matrix multiplication problem.
²Note that we consider the most general model and do not impose any constraints on the decoding functions. However, any good decoding function should have relatively low computation complexity.
Remark 2. We prove the optimality of polynomial code using a matching information theoretic
lower bound, which is obtained by applying a cut-set type argument around the master node. As a
by-product, we can also prove that the polynomial code simultaneously achieves optimality in terms
of several other performance metrics, including the computation latency [5, 10], the probability of
failure given a deadline [9], and the communication load [16, 17, 18], as discussed in Section 3.4.
Remark 3. The polynomial code not only improves the state of the art asymptotically, but also gives
strict and significant improvement for any parameter values of N , m, and n (See Fig. 3 for example).
Figure 3: Comparison of the recovery thresholds achieved by the proposed polynomial code and the state of the art (1D MDS code [5] and product code [10]), where each worker can store 1/10 fraction of each input matrix. The polynomial code attains the optimum recovery threshold K^*, and significantly improves the state of the art.
Remark 4. As we will discuss in Section 3.2, decoding polynomial code can be mapped to a
polynomial interpolation problem, which can be solved in time almost linear to the input size [15].
This is enabled by carefully designing the computing strategies at the workers, such that the computed
products form a Reed-Solomon code [20], which can be decoded efficiently using any polynomial
interpolation algorithm or Reed-Solomon decoding algorithm that provides the best performance
depending on the problem scenario (e.g., [21]).
Remark 5. Polynomial code can be extended to other distributed computation applications involving
linear algebraic operations. In Section 4, we focus on the problem of distributed convolution, and
show that we can obtain order-wise improvement over the state of the art (see [9]) by directly applying
the polynomial code. Furthermore, by exploiting the computing structure of convolution, we propose
a variation of the polynomial code that achieves the optimum recovery threshold within a factor of 2.
Remark 6. In this work we focused on designing optimal coding techniques to handle straggler issues. The same technique can also be applied to the fault tolerance computing setting (e.g., within
the algorithmic fault tolerance computing framework of [12, 13], where a module can produce
arbitrary error results under failure), to improve robustness to failures in computing. Specifically,
given that the polynomial code produces computing results that are coded by Reed-Solomon code,
which has the optimum hamming distance, it allows detecting, or correcting the maximum possible
number of module errors. This provides the first optimum code for matrix multiplication under fault
tolerance computing.
3 Polynomial Code and Its Optimality
In this section, we formally describe the polynomial code and its decoding procedure. We then
prove its optimality with an information theoretic converse, which completes the proof of Theorem 1.
Finally, we conclude this section with the optimality of polynomial code under other settings.
3.1 Motivating Example
We first demonstrate the main idea through a motivating example. Consider a distributed matrix multiplication task of computing C = A^T B using N = 5 workers that can each store half of the matrices (see Fig. 4). We evenly divide each input matrix along the column side into 2 submatrices:

    A = [A_0 \; A_1], \qquad B = [B_0 \; B_1].    (8)

Given this notation, we essentially want to compute the following 4 uncoded components:

    C = A^T B = \begin{bmatrix} A_0^T B_0 & A_0^T B_1 \\ A_1^T B_0 & A_1^T B_1 \end{bmatrix}.    (9)
Figure 4: Example using polynomial code, with 5 workers that can each store half of each input matrix. (a) Computation strategy: each worker i stores A_0 + iA_1 and B_0 + i^2 B_1, and computes their product. (b) Decoding: master waits for results from any 4 workers, and decodes the output using a fast polynomial interpolation algorithm.
Now we design a computation strategy to achieve the optimum recovery threshold of 4. Suppose elements of A, B are in F_7; let each worker i ∈ {0, 1, ..., 4} store the following two coded submatrices:

    \tilde{A}_i = A_0 + iA_1, \qquad \tilde{B}_i = B_0 + i^2 B_1.    (10)
To prove that this design gives a recovery threshold of 4, we need to design a valid decoding function
for any subset of 4 workers. We demonstrate this decodability through a representative scenario,
where the master receives the computation results from workers 1, 2, 3, and 4, as shown in Figure 4.
The decodability for the other 4 possible scenarios can be proved similarly.
According to the designed computation strategy, we have

    \begin{bmatrix} \tilde{C}_1 \\ \tilde{C}_2 \\ \tilde{C}_3 \\ \tilde{C}_4 \end{bmatrix}
    =
    \begin{bmatrix}
    1^0 & 1^1 & 1^2 & 1^3 \\
    2^0 & 2^1 & 2^2 & 2^3 \\
    3^0 & 3^1 & 3^2 & 3^3 \\
    4^0 & 4^1 & 4^2 & 4^3
    \end{bmatrix}
    \begin{bmatrix} A_0^T B_0 \\ A_1^T B_0 \\ A_0^T B_1 \\ A_1^T B_1 \end{bmatrix}.    (11)
The coefficient matrix in the above equation is a Vandermonde matrix, which is invertible because its
parameters 1, 2, 3, 4 are distinct in F7 . So one way to recover C is to directly invert equation (11),
which proves the decodability. However, directly computing this inverse using the classical inversion
algorithm might be expensive in more general cases. Quite interestingly, because of the algebraic
structure we designed for the computation strategy (i.e., equation (10)), the decoding process can be
viewed as a polynomial interpolation problem (or equivalently, decoding a Reed-Solomon code).
Specifically, in this example each worker i returns

    \tilde{C}_i = \tilde{A}_i^T \tilde{B}_i = A_0^T B_0 + i A_1^T B_0 + i^2 A_0^T B_1 + i^3 A_1^T B_1,    (12)

which is essentially the value of the following polynomial at point x = i:

    h(x) \triangleq A_0^T B_0 + x A_1^T B_0 + x^2 A_0^T B_1 + x^3 A_1^T B_1.    (13)
Hence, recovering C using computation results from 4 workers is equivalent to interpolating a 3rd-degree polynomial given its values at 4 points. Later in this section, we will show that by mapping the decoding process to polynomial interpolation, we can achieve almost-linear decoding complexity.
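The worked example can be checked end to end. The toy script below is our own illustrative reconstruction, not the authors' code: it uses 1×1 "submatrices" over F_7, so each worker's result is a scalar value of h(i), and the master recovers the four products by inverting the Vandermonde system of equation (11) modulo 7.

```python
import numpy as np

p = 7  # all arithmetic is over the field F_7

# Toy 1x1 "submatrices", so each worker's result is a scalar; the algebra
# is identical to the matrix case.
A0, A1, B0, B1 = 3, 5, 2, 6

# Worker i stores A0 + i*A1 and B0 + i^2*B1 and returns their product,
# i.e. the value of h(i) from equation (13).
workers = [1, 2, 3, 4]  # results from any 4 of the 5 workers suffice
results = [((A0 + i * A1) * (B0 + i * i * B1)) % p for i in workers]

# Coefficient matrix of equation (11): row for worker i is [i^0, i^1, i^2, i^3].
V = np.array([[pow(i, e, p) for e in range(4)] for i in workers])

# Invert V over F_7 via the integer adjugate and a Fermat inverse of det(V).
det = int(round(np.linalg.det(V)))
adj = np.round(det * np.linalg.inv(V)).astype(int)  # integer adjugate matrix
V_inv = (adj * pow(det % p, p - 2, p)) % p

decoded = [int(v) for v in V_inv.dot(np.array(results)) % p]
expected = [(A0 * B0) % p, (A1 * B0) % p, (A0 * B1) % p, (A1 * B1) % p]
print(decoded, expected)  # -> [6, 3, 4, 2] [6, 3, 4, 2]
```

The explicit matrix inverse is used here only because the example is tiny; the point of the next subsections is that the same decoding can instead be done by polynomial interpolation.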
3.2 General Polynomial Code
Now we present the polynomial code in a general setting that achieves the optimum recovery threshold stated in Theorem 1 for any parameter values of N, m, and n. First of all, we evenly divide each input matrix along the column side into m and n submatrices respectively, i.e.,

    A = [A_0 \; A_1 \; ... \; A_{m-1}], \qquad B = [B_0 \; B_1 \; ... \; B_{n-1}].    (14)

We then assign each worker i ∈ {0, 1, ..., N − 1} a number in F_q, denoted by x_i, and make sure that all x_i's are distinct. Under this setting, we define the following class of computation strategies.
Definition 2. Given parameters \alpha, \beta \in \mathbb{N}, we define the (\alpha, \beta)-polynomial code as

    \tilde{A}_i = \sum_{j=0}^{m-1} A_j x_i^{j\alpha}, \qquad \tilde{B}_i = \sum_{j=0}^{n-1} B_j x_i^{j\beta}, \qquad \forall i \in \{0, 1, ..., N-1\}.    (15)
In an (\alpha, \beta)-polynomial code, each worker i essentially computes

    \tilde{C}_i = \tilde{A}_i^T \tilde{B}_i = \sum_{j=0}^{m-1} \sum_{k=0}^{n-1} A_j^T B_k x_i^{j\alpha + k\beta}.    (16)
In order for the master to recover the output given any mn results (i.e., achieve the optimum recovery threshold), we carefully select the design parameters \alpha and \beta, while making sure that no two terms in the above formula have the same exponent of x. One such choice is (\alpha, \beta) = (1, m), i.e., let
    \tilde{A}_i = \sum_{j=0}^{m-1} A_j x_i^j, \qquad \tilde{B}_i = \sum_{j=0}^{n-1} B_j x_i^{jm}.    (17)
Hence, each worker computes the value of the following degree-(mn − 1) polynomial at point x = x_i:

    h(x) \triangleq \sum_{j=0}^{m-1} \sum_{k=0}^{n-1} A_j^T B_k x^{j+km},    (18)

where the coefficients are exactly the mn uncoded components of C. Since all x_i's are selected to be distinct, recovering C given results from any mn workers is essentially interpolating h(x) using mn distinct points. Since h(x) has degree mn − 1, the output C can always be uniquely decoded.
In terms of complexity, this decoding process can be viewed as interpolating degree-(mn − 1) polynomials over F_q for rt/mn times. It is well known that polynomial interpolation of degree k has a complexity of O(k log^2 k log log k) [15]. Therefore, decoding the polynomial code only requires a complexity of O(rt log^2(mn) log log(mn)). Furthermore, this complexity can be reduced by simply swapping in any faster polynomial interpolation algorithm or Reed-Solomon decoding algorithm.
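As a concrete illustration of the general construction, the sketch below (an independent toy implementation, not the authors' code) instantiates the (1, m) code of equation (17) with m = n = 2 over a small prime field and decodes from an arbitrary subset of mn = 4 of the N = 6 workers. Plain Lagrange interpolation is used for clarity; the fast algorithms cited above would replace that routine in practice.

```python
import numpy as np

p = 257                      # prime larger than any entry of A^T B below
m, n, N = 2, 2, 6            # split sizes; any mn = 4 of the N workers suffice
s, r, t = 4, 4, 4            # A is s x r, B is s x t

rng = np.random.default_rng(0)
A = rng.integers(0, 5, (s, r))
B = rng.integers(0, 5, (s, t))

A_blocks = np.split(A, m, axis=1)        # column blocks A_0 .. A_{m-1}
B_blocks = np.split(B, n, axis=1)        # column blocks B_0 .. B_{n-1}

xs = list(range(1, N + 1))               # distinct evaluation points in F_p

# Worker i stores the (1, m) encodings of eq. (17) and returns their product,
# i.e. the value of h(x) from eq. (18) at x = x_i.
products = []
for x in xs:
    At = sum(Aj * pow(x, j, p) for j, Aj in enumerate(A_blocks)) % p
    Bt = sum(Bk * pow(x, k * m, p) for k, Bk in enumerate(B_blocks)) % p
    products.append(At.T.dot(Bt) % p)

def interpolate(points, values):
    """Plain Lagrange interpolation over F_p for matrix-valued samples."""
    k = len(points)
    coeffs = [np.zeros_like(values[0]) for _ in range(k)]
    for a, (xa, ya) in enumerate(zip(points, values)):
        basis, denom = [1], 1                  # build basis polynomial ell_a(x)
        for b, xb in enumerate(points):
            if b == a:
                continue
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):      # multiply basis by (x - xb)
                new[d + 1] = (new[d + 1] + c) % p
                new[d] = (new[d] - c * xb) % p
            basis = new
            denom = denom * (xa - xb) % p
        inv = pow(denom, p - 2, p)             # Fermat inverse of ell_a(xa)
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + ya * (c * inv % p)) % p
    return coeffs

chosen = [0, 2, 3, 5]                    # master decodes from any mn workers
h = interpolate([xs[i] for i in chosen], [products[i] for i in chosen])

# Coefficient of x^{j+km} is A_j^T B_k; reassemble C block by block.
C_dec = np.block([[h[j + k * m] for k in range(n)] for j in range(m)])
assert np.array_equal(C_dec, A.T.dot(B) % p)
```

All entries of A^T B are below p here, so the field result coincides with the integer product; the field size and test matrices are arbitrary illustrative choices.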
Remark 7. We can naturally extend polynomial code to the scenario where input matrix elements
are real or complex numbers. In practical implementation, to avoid handling large elements in the
coefficient matrix, we can first quantize input values into numbers of finite digits, embed them into a
finite field that covers the range of possible values of the output matrix elements, and then directly
apply polynomial code. By embedding into finite fields, we avoid large intermediate computing
results, which effectively saves storage and computation time, and reduces numerical errors.
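A minimal sketch of this embedding (our own toy choice of scale and prime, purely illustrative): quantize the real inputs, bound the entries of the integer product to select a sufficiently large prime, compute entirely in the field, and map the result back to signed representatives.

```python
import numpy as np

# Illustrative sketch of Remark 7: quantize real inputs, embed them in a
# prime field large enough to hold every entry of C = A^T B exactly, then
# map back to signed integers. Scale and matrices are arbitrary examples.
scale = 100                                  # keep 2 decimal digits
A = np.array([[0.31, -1.20], [0.55, 0.07]])
B = np.array([[1.05, 0.42], [-0.88, 0.10]])

Aq = np.round(A * scale).astype(np.int64)    # quantized integer matrices
Bq = np.round(B * scale).astype(np.int64)

s = A.shape[0]
bound = s * int(np.abs(Aq).max()) * int(np.abs(Bq).max())  # |entry of Aq^T Bq| <= bound
p = 2 * bound + 1
while not all(p % d for d in range(2, int(p ** 0.5) + 1)):  # next prime >= 2*bound+1
    p += 1

Cq = (Aq.T % p).dot(Bq % p) % p              # field arithmetic only
C_signed = np.where(Cq > p // 2, Cq - p, Cq)  # back to signed representatives
C = C_signed / (scale * scale)               # undo the quantization

assert np.allclose(C, A.T.dot(B))
```

Since every entry of the true integer product has magnitude at most `bound` < p/2, the signed representative is unique, which is exactly why no overflow or precision loss occurs inside the field computation.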
3.3 Optimality of Polynomial Code for Recovery Threshold
So far we have constructed a computing scheme that achieves a recovery threshold of mn, which upper bounds K^*. To complete the proof of Theorem 1, here we establish a matching lower bound through an information theoretic converse.
We need to prove that for any computation strategy, the master needs to wait for at least mn workers in order to recover the output. Recall that at least one of A and B is a tall matrix. Without loss of generality, assume A is tall (i.e. s ≥ r). Let A be an arbitrary fixed full rank matrix and B be sampled from F_q^{s×t} uniformly at random. It is easy to show that C = A^T B is uniformly distributed on F_q^{r×t}. This means that the master essentially needs to recover a random variable with entropy of H(C) = rt \log_2 q bits. Note that each worker returns rt/mn elements of F_q, providing at most (rt/mn) \log_2 q bits of information. Consequently, using a cut-set bound around the master, we can show that at least mn results from the workers need to be collected, and thus we have K^* ≥ mn.
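The counting behind this argument can be spelled out directly (illustrative parameters of our own choosing): C consists of rt symbols of F_q, each worker returns rt/mn of them, and the common factor of log₂ q bits per symbol cancels, so at least mn workers are needed.

```python
# Cut-set counting from the converse: C has r*t symbols of F_q, while each
# worker returns only rt/(mn) of them. The log2(q) bits per symbol cancel.
# Parameter values are arbitrary illustrative choices.
r, t = 4000, 4000
m, n = 4, 4

symbols_C = r * t                            # symbols the master must recover
symbols_per_worker = (r * t) // (m * n)      # symbols one worker can provide

workers_needed = -(-symbols_C // symbols_per_worker)  # ceiling division
assert workers_needed == m * n
```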
Remark 8 (Random Linear Code). We conclude this subsection by noting that another computation design is to let each worker store two random linear combinations of the input submatrices. Although this design can achieve the optimal recovery threshold with high probability, it creates a large coding overhead and requires high decoding complexity (e.g., O(m^3 n^3 + mnrt) using the classical inversion decoding algorithm). Compared to random linear codes, the proposed polynomial code achieves the optimum recovery threshold deterministically, with a significantly lower decoding complexity.
3.4 Optimality of Polynomial Code for Other Performance Metrics
In the previous subsection, we proved that polynomial code is optimal in terms of the recovery
threshold. As a by-product, we can prove that it is also optimal in terms of some other performance
metrics. In particular, we consider the following 3 metrics considered in prior works, and formally
establish the optimality of polynomial code for each of them. Proofs can be found in Appendix A.
Computation latency is considered in models where the computation time T_i of each worker i is a random variable with a certain probability distribution (e.g., [5, 10]). The computation latency is defined as the amount of time required for the master to collect enough information to decode C.

Theorem 2. For any computation strategy, the computation latency T is always no less than the latency achieved by the polynomial code, denoted by T_poly. Namely,

    T ≥ T_poly.    (19)
Probability of failure given a deadline is defined as the probability that the master does not receive enough information to decode C at any time t [9].

Corollary 1. For any computation strategy, let T denote its computation latency, and let T_poly denote the computation latency of the polynomial code. We have

    P(T > t) ≥ P(T_poly > t), \qquad \forall t ≥ 0.    (20)
Corollary 1 directly follows from Theorem 2, since (19) implies (20).
Communication load is another important metric in distributed computing (e.g. [16, 17, 18]), defined as the minimum number of bits needed to be communicated in order to complete the computation.

Theorem 3. The polynomial code achieves the minimum communication load for distributed matrix multiplication, which is given by

    L^* = rt \log_2 q.    (21)

4 Extension to Distributed Convolution
We can extend our proposed polynomial code to distributed convolution. Specifically, we consider a convolution task with two input vectors

    a = [a_0 \; a_1 \; ... \; a_{m-1}], \qquad b = [b_0 \; b_1 \; ... \; b_{n-1}],    (22)
where all a_i's and b_i's are vectors of length s over a sufficiently large field F_q. We want to compute c \triangleq a ∗ b using a master and N workers. Each worker can store two vectors of length s, which are functions of a and b respectively. We refer to these functions as the computation strategy.

Each worker computes the convolution of its stored vectors, and returns it to the master. The master only waits for the fastest subset of workers, before proceeding to decode c. Similar to distributed matrix multiplication, we define the recovery threshold for each computation strategy. We aim to characterize the optimum recovery threshold, denoted by K^*_conv, and find computation strategies that closely achieve this optimum threshold, while allowing efficient decoding at the master.
Distributed convolution has also been studied in [9], where the coded convolution scheme was proposed. The main idea of the coded convolution scheme is to inject redundancy in only one of the input vectors using MDS codes. The master waits for enough results such that all intermediate values a_i ∗ b_j can be recovered, which allows the final output to be computed. One can show that this coded convolution scheme is in fact equivalent to the 1D MDS-coded scheme proposed in [10]. Consequently, it achieves a recovery threshold of K_{1D-MDS} = N − N/n + m.
Note that by simply adapting our proposed polynomial code designed for distributed matrix multiplication to distributed convolution, the master can recover all intermediate values a_i ∗ b_j after receiving results from any mn workers, to decode the final output. Consequently, this achieves a recovery threshold of K_poly = mn, which already strictly and significantly improves the state of the art.
In this paper, we take one step further and propose an improved computation strategy, strictly reducing
the recovery threshold on top of the naive polynomial code. The result is summarized as follows:
Theorem 4. For a distributed convolution problem of computing a ∗ b using N workers that can each store 1/m fraction of a and 1/n fraction of b, we can find a computation strategy that achieves a recovery threshold of

    K_conv-poly \triangleq m + n − 1.    (23)

Furthermore, this computation strategy allows efficient decoding, i.e., with complexity equal to that of polynomial interpolation given m + n − 1 points.
We prove Theorem 4 by proposing a variation of the polynomial code, which exploits the computation
structure of convolution. This new computing scheme is formally demonstrated in Appendix B.
Remark 9. Similar to distributed matrix multiplication, our proposed computation strategy provides
order-wise improvement compared to the state of the art [9] in many different settings. Furthermore,
it achieves almost-linear decoding complexity using the fastest polynomial interpolation algorithm or
the Reed-Solomon decoding algorithm.
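One way to realize a recovery threshold of m + n − 1, consistent with the description above (this is our own sketch of the idea; the formal scheme is in Appendix B of the paper): each worker evaluates both block polynomials at the same point, convolves its two stored vectors, and the master interpolates the degree-(m + n − 2) vector-coefficient polynomial, then overlap-adds the decoded blocks.

```python
import numpy as np

# Sketch of the convolution variant: worker i stores a(x_i) = sum_j a_j x_i^j
# and b(x_i) = sum_k b_k x_i^k (vector coefficients) and convolves them; the
# master interpolates the degree-(m+n-2) vector polynomial from m+n-1 results.
m, n, s = 2, 2, 2
a_blocks = [np.array([1, 2]), np.array([3, 4])]   # a = [1 2 3 4]
b_blocks = [np.array([5, 6]), np.array([7, 8])]   # b = [5 6 7 8]

xs = [1, 2, 3]                                    # m + n - 1 = 3 workers
store_a = [sum(aj * x ** j for j, aj in enumerate(a_blocks)) for x in xs]
store_b = [sum(bk * x ** k for k, bk in enumerate(b_blocks)) for x in xs]
results = [np.convolve(sa, sb) for sa, sb in zip(store_a, store_b)]

# Interpolate w_d = sum_{j+k=d} a_j * b_k for d = 0 .. m+n-2.
V = np.array([[x ** d for d in range(m + n - 1)] for x in xs], dtype=float)
W = np.round(np.linalg.solve(V, np.array(results, dtype=float))).astype(int)

# Overlap-add: block w_d (length 2s-1) starts at offset d*s in c = a * b.
c = np.zeros((m + n - 2) * s + 2 * s - 1, dtype=int)
for d, w in enumerate(W):
    c[d * s : d * s + len(w)] += w

assert np.array_equal(c, np.convolve([1, 2, 3, 4], [5, 6, 7, 8]))
```

The float Vandermonde solve is safe only because the toy values are tiny; an exact implementation would interpolate over a finite field, as in the matrix multiplication case.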
Moreover, we characterize K^*_conv within a factor of 2, as stated in the following theorem and proved in Appendix C.

Theorem 5. For a distributed convolution problem, the minimum recovery threshold K^*_conv can be characterized within a factor of 2, i.e.:

    (1/2) K_conv-poly < K^*_conv ≤ K_conv-poly.    (24)
5 Experiment Results
To examine the efficiency of our proposed polynomial code, we implement the algorithm in Python
using the mpi4py library and deploy it on an AWS EC2 cluster of 18 nodes, with the master running
on a c1.medium instance, and 17 workers running on t2.micro instances.
The input matrices are randomly generated as two numpy matrices of size 4000 by 4000, and then encoded and assigned to the workers in the preprocessing stage. Each worker stores 1/4 fraction of each input matrix. In the computation stage, each worker computes the product of their assigned matrices,
and then returns the result using MPI.Comm.Isend(). The master actively listens to responses from
the 17 worker nodes through MPI.Comm.Irecv(), and uses MPI.Request.Waitany() to keep
polling for the earliest fulfilled request. Upon receiving 16 responses, the master stops listening and
starts decoding the result. To achieve the best performance, we implement an FFT-based algorithm
for the Reed-Solomon decoding.
Figure 5: Comparison of polynomial code and the uncoded scheme. We implement polynomial code and the
uncoded scheme using Python and mpi4py library and deploy them on an Amazon EC2 cluster of 18 instances.
We measure the computation latency of both algorithms and plot their CCDF. Polynomial code can reduce the tail latency by 34% even taking into account the decoding overhead.
We compare our results with distributed matrix multiplication without coding.³ The uncoded implementation is similar, except that only 16 out of the 17 workers participate in the computation, each of them storing and processing 1/4 fraction of uncoded rows from each input matrix. The master waits for all 16 workers to return, and does not need to perform any decoding algorithm to recover the result.
To simulate straggler effects in large-scale systems, we compare the computation latency of these
two schemes in a setting where a randomly picked worker is running a background thread which
approximately doubles the computation time. As shown in Fig. 5, polynomial code can reduce the tail latency by 34% in this setting, even taking into account the decoding overhead.
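The qualitative effect can be reproduced with a toy simulation (an illustrative delay model of our own, not the paper's EC2 measurements, and ignoring decoding overhead): the uncoded scheme must wait for all of its 16 workers, while the coded scheme waits only for the fastest 16 of 17.

```python
import random

random.seed(1)

def run_times(k):
    """Per-worker completion times: unit work plus noise, with one randomly
    chosen straggler that takes roughly twice as long (illustrative model)."""
    times = [1.0 + random.expovariate(10) for _ in range(k)]
    times[random.randrange(k)] *= 2.0
    return times

trials = 10_000
uncoded = [max(run_times(16)) for _ in range(trials)]         # wait for all 16
coded = [sorted(run_times(17))[15] for _ in range(trials)]    # 16th fastest of 17

avg_unc = sum(uncoded) / trials
avg_cod = sum(coded) / trials
print(round(avg_unc, 2), round(avg_cod, 2))  # coded average latency is smaller
assert avg_cod < avg_unc
```

The uncoded latency is dominated by the straggler almost every trial, while the coded run simply drops it, which is the mechanism behind the measured tail-latency reduction.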
³Due to the EC2 instance request quota limit of 20, the 1D MDS code and product code could not be implemented in this setting, as they require at least 21 and 26 nodes respectively.
6 Acknowledgement
This work is in part supported by NSF grants CCF-1408639, NETS-1419632, ONR award
N000141612189, NSA grant, and a research gift from Intel. This material is based upon
work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No.
HR001117C0053. The views, opinions, and/or findings expressed are those of the author(s) and
should not be interpreted as representing the official views or policies of the Department of Defense
or the U.S. Government.
References
[1] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Sixth USENIX Symposium on Operating System Design and Implementation, Dec. 2004.
[2] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, "Spark: Cluster computing with working sets," in Proceedings of the 2nd USENIX HotCloud, vol. 10, p. 10, June 2010.
[3] J. Dean and L. A. Barroso, "The tail at scale," Communications of the ACM, vol. 56, no. 2, pp. 74-80, 2013.
[4] M. Zaharia, A. Konwinski, A. D. Joseph, R. H. Katz, and I. Stoica, "Improving MapReduce performance in heterogeneous environments," OSDI, vol. 8, p. 7, Dec. 2008.
[5] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, "Speeding up distributed machine learning using codes," e-print arXiv:1512.02673, 2015.
[6] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "A unified coding framework for distributed computing with straggling servers," arXiv preprint arXiv:1609.01690, 2016.
[7] A. Reisizadehmobarakeh, S. Prakash, R. Pedarsani, and S. Avestimehr, "Coded computation over heterogeneous clusters," arXiv preprint arXiv:1701.05973, 2017.
[8] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, "Gradient coding," arXiv preprint arXiv:1612.03301, 2016.
[9] S. Dutta, V. Cadambe, and P. Grover, "Coded convolution for parallel and distributed computing within a deadline," arXiv preprint arXiv:1705.03875, 2017.
[10] K. Lee, C. Suh, and K. Ramchandran, "High-dimensional coded matrix multiplication," in 2017 IEEE International Symposium on Information Theory (ISIT), pp. 2418-2422, June 2017.
[11] R. Singleton, "Maximum distance q-nary codes," IEEE Transactions on Information Theory, vol. 10, no. 2, pp. 116-118, 1964.
[12] K.-H. Huang and J. A. Abraham, "Algorithm-based fault tolerance for matrix operations," IEEE Transactions on Computers, vol. C-33, pp. 518-528, June 1984.
[13] J.-Y. Jou and J. A. Abraham, "Fault-tolerant matrix arithmetic and signal processing on highly concurrent computing structures," Proceedings of the IEEE, vol. 74, pp. 732-741, May 1986.
[14] F. Didier, "Efficient erasure decoding of Reed-Solomon codes," arXiv preprint arXiv:0901.1886, 2009.
[15] K. S. Kedlaya and C. Umans, "Fast polynomial factorization and modular composition," SIAM Journal on Computing, vol. 40, no. 6, pp. 1767-1802, 2011.
[16] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "Coded MapReduce," 53rd Annual Allerton Conference on Communication, Control, and Computing, Sept. 2015.
[17] S. Li, M. A. Maddah-Ali, Q. Yu, and A. S. Avestimehr, "A fundamental tradeoff between computation and communication in distributed computing," e-print arXiv:1604.07086. Submitted to IEEE Transactions on Information Theory, 2016.
[18] Q. Yu, S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "How to optimally allocate resources for coded distributed computing?," arXiv preprint arXiv:1702.07297, 2017.
[19] F. Le Gall, "Powers of tensors and fast matrix multiplication," in Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, pp. 296-303, ACM, 2014.
[20] R. Roth, Introduction to Coding Theory. Cambridge University Press, 2006.
[21] S. Baktir and B. Sunar, "Achieving efficient polynomial multiplication in Fermat fields using the fast Fourier transform," in Proceedings of the 44th Annual Southeast Regional Conference, pp. 549-554, ACM, 2006.
Representations from Video
Emily Denton
Department of Computer Science
New York University
[email protected]
Vighnesh Birodkar
Department of Computer Science
New York University
[email protected]
Abstract
We present a new model DrNet that learns disentangled image representations
from video. Our approach leverages the temporal coherence of video and a novel
adversarial loss to learn a representation that factorizes each frame into a stationary
part and a temporally varying component. The disentangled representation can be
used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a
range of synthetic and real videos, demonstrating the ability to coherently generate
hundreds of steps into the future.
1 Introduction
Unsupervised learning from video is a long-standing problem in computer vision and machine
learning. The goal is to learn, without explicit labels, a representation that generalizes effectively to a
previously unseen range of tasks, such as semantic classification of the objects present, predicting
future frames of the video or classifying the dynamic activity taking place. There are several prevailing
paradigms: the first, known as self-supervision, uses domain knowledge to implicitly provide labels
(e.g. predicting the relative position of patches on an object [4] or using feature tracks [36]). This
allows the problem to be posed as a classification task with self-generated labels. The second general
approach relies on auxiliary action labels, available in real or simulated robotic environments. These
can either be used to train action-conditional predictive models of future frames [2, 20] or inversekinematics models [1] which attempt to predict actions from current and future frame pairs. The
third and most general approaches are predictive auto-encoders (e.g.[11, 12, 18, 31]) which attempt
to predict future frames from current ones. To learn effective representations, some kind of constraint
on the latent representation is required.
In this paper, we introduce a form of predictive auto-encoder which uses a novel adversarial loss
to factor the latent representation for each video frame into two components, one that is roughly
time-independent (i.e. approximately constant throughout the clip) and another that captures the
dynamic aspects of the sequence, thus varying over time. We refer to these as content and pose
components, respectively. The adversarial loss relies on the intuition that while the content features
should be distinctive of a given clip, individual pose features should not. Thus the loss encourages
pose features to carry no information about clip identity. Empirically, we find that training with this
loss to be crucial to inducing the desired factorization.
We explore the disentangled representation produced by our model, which we call Disentangled-Representation Net (DrNet), on a variety of tasks. The first of these is predicting future video
frames, something that is straightforward to do using our representation. We apply a standard LSTM
model to the pose features, conditioning on the content features from the last observed frame. Despite
the simplicity of our model relative to other video generation techniques, we are able to generate
convincing long-range frame predictions, out to hundreds of time steps in some instances. This is
significantly further than existing approaches that use real video data. We also show that DrNet can
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
be used for classification. The content features capture the semantic content of the video thus can be
used to predict object identity. Alternately, the pose features can be used for action prediction.
2 Related work
On account of its natural invariances, image data naturally lends itself to an explicit "what" and "where" representation. The capsule model of Hinton et al. [10] performed this separation via
an explicit auto-encoder structure. Zhao et al. [40] proposed a multi-layered version, which has
similarities to ladder networks [23]. Several weakly supervised approaches have been proposed to
factor images into style and content (e.g. [19, 24]). These methods all operate on static images,
whereas our approach uses temporal structure to separate the components.
Factoring video into time-varying and time-independent components has been explored in many
settings. Classic structure-from-motion methods use an explicit affine projection model to extract a
3D point cloud and camera homography matrices [8]. In contrast, Slow Feature Analysis [38] has no
model, instead simply penalizing the rate of change in time-independent components and encouraging
their decorrelation. Most closely related to ours is Villegas et al. [33] which uses an unsupervised
approach to factoring video into content and motion. Their architecture is also broadly similar to
ours, but the loss functions differ in important ways. They rely on pixel/gradient space ℓp-norm
reconstructions, plus a GAN term [6] that encourages the generated frames to be sharp. We also use
an ℓ2 pixel-space reconstruction. However, this pixel-space loss is only applied, in combination with
a novel adversarial term applied to the pose features, to learn the disentangled representation. In
contrast to [33], our forward model acts on latent pose vectors rather than predicting pixels directly.
Other approaches explore general methods for learning disentangled representations from video.
Kulkarni et al. [14] show how explicit graphics code can be learned from datasets with systematic
dimensions of variation. Whitney et al. [37] use a gating principle to encourage each dimension of
the latent representation to capture a distinct mode of variation. Grathwohl et al. [7] propose a deep
variational model to disentangle space and time in video sequences.
A range of generative video models, based on deep nets, have recently been proposed. Ranzato et
al. [22] adopt a discrete vector quantization approach inspired by text models. Srivastava et al. [31]
use LSTMs to generate entire frames. Video Pixel Networks [12] use these models in a conditional
manner, generating one pixel at a time in raster-scan order (similar image models include [27, 32]).
Finn et al. [5] use an LSTM framework to model motion via transformations of groups of pixels.
Cricri et al. [3] use a ladder of stacked-autoencoders. Other works predict optical flows fields that
can be used to extrapolate motion beyond the current frame, e.g. [17, 39, 35]. In contrast, a single
pose vector is predicted in our model, rather than a spatial field.
Chiappa et al. [2] and Oh et al. [20] focus on prediction in video game environments, where known
actions at each frame permit action-conditional generative models that can give accurate
long-range predictions. In contrast to the above works, whose latent representations combine both
content and motion, our approach relies on a factorization of the two, with a predictive model only
being applied to the latter. Furthermore, we do not attempt to predict pixels directly, instead applying
the forward model in the latent space. Chiappa et al. [2], like our approach, produces convincing
long-range generations. However, the video game environment is somewhat more constrained than
the real-world video we consider since actions are provided during generation.
Several video prediction approaches have been proposed that focus on handling the inherent uncertainty in predicting the future. Mathieu et al. [18] demonstrate that a loss based on GANs can produce
sharper generations than traditional ℓ2-based losses. [34] train a series of models, which aim to span
possible outcomes and select the most likely one at any given instant. While we considered GAN-based losses, the more constrained nature of our model, and the fact that our forward model does not
directly generate in pixel-space, meant that standard deterministic losses worked satisfactorily.
3 Approach
In our model, two separate encoders produce distinct feature representations of content and pose for
each frame. They are trained by requiring that the content representation of frame xt and the pose
representation of future frame xt+k can be combined (via concatenation) and decoded to predict the
pixels of future frame xt+k . However, this reconstruction constraint alone is insufficient to induce
the desired factorization between the two encoders. We thus introduce a novel adversarial loss on the
pose features that prevents them from being discriminable from one video to another, thus ensuring
that they cannot contain content information. A further constraint, motivated by the notion that
content information should vary slowly over time, encourages temporally close content vectors to be
similar to one another.
More formally, let x_i = (x_i^1, ..., x_i^T) denote a sequence of T images from video i. We subsequently
drop index i for brevity. Let E_c denote a neural network that maps an image x^t to the content
representation h_c^t which captures structure shared across time. Let E_p denote a neural network that
maps an image x^t to the pose representation h_p^t capturing content that varies over time. Let D denote
a decoder network that maps a content representation from a frame, h_c^t, and a pose representation
h_p^{t+k} from future time step t + k to a prediction of the future frame x̃^{t+k}. Finally, C is the scene
discriminator network that takes pairs of pose vectors (h_p^1, h_p^2) and outputs a scalar probability that
they came from the same video or not.
The loss function used during training has several terms:

Reconstruction loss: We use a standard per-pixel ℓ2 loss between the predicted future frame x̃^{t+k}
and the actual future frame x^{t+k} for some random frame offset k ∈ [0, K]:

$$\mathcal{L}_{\text{reconstruction}}(D) = \| D(h_c^t, h_p^{t+k}) - x^{t+k} \|_2^2 \qquad (1)$$
Note that many recent works on video prediction rely on more complex losses that can capture
uncertainty, such as GANs [19, 6].
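The per-pixel ℓ2 reconstruction loss of Eq. 1 can be sketched in a few lines of NumPy; the toy frames and shapes below are illustrative stand-ins, not the model's actual tensors.

```python
import numpy as np

def reconstruction_loss(predicted_frame, target_frame):
    """Squared per-pixel L2 loss between the decoded prediction
    D(h_c^t, h_p^{t+k}) and the actual future frame x^{t+k} (Eq. 1)."""
    return float(np.sum((predicted_frame - target_frame) ** 2))

# Toy 4x4 grayscale frames standing in for decoder output and ground truth.
pred = np.zeros((4, 4))
target = np.full((4, 4), 0.5)
print(reconstruction_loss(pred, target))  # 16 pixels * 0.25 = 4.0
```

In training this loss is minimized with respect to D (and, through the encoders' outputs, E_c and E_p).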
Similarity loss: To ensure the content encoder extracts mostly time-invariant representations, we
penalize the squared error between the content features h_c^t, h_c^{t+k} of neighboring frames, k ∈ [0, K]:

$$\mathcal{L}_{\text{similarity}}(E_c) = \| E_c(x^t) - E_c(x^{t+k}) \|_2^2 \qquad (2)$$
Adversarial loss: We now introduce a novel adversarial loss that exploits the fact that the objects
present do not typically change within a video, but they do between different videos. Our desired
disentanglement would thus have the content features be (roughly) constant within a clip, but distinct
between them. This implies that the pose features should not carry any information about the identity
of objects within a clip.
We impose this via an adversarial framework between the scene discriminator network C and pose
encoder E_p, shown in Fig. 1. The latter provides pairs of pose vectors, either computed from the same
video (h_{p,i}^t, h_{p,i}^{t+k}) or from different ones (h_{p,i}^t, h_{p,j}^{t+k}), for some other video j. The discriminator then
attempts to classify the pair as being from the same/different video using a cross-entropy loss:

$$-\mathcal{L}_{\text{adversarial}}(C) = \log\big(C(E_p(x_i^t), E_p(x_i^{t+k}))\big) + \log\big(1 - C(E_p(x_i^t), E_p(x_j^{t+k}))\big) \qquad (3)$$
The other half of the adversarial framework imposes a loss function on the pose encoder E_p that tries
to maximize the uncertainty (entropy) of the discriminator output on pairs of frames from the same
clip:

$$-\mathcal{L}_{\text{adversarial}}(E_p) = \tfrac{1}{2}\log\big(C(E_p(x_i^t), E_p(x_i^{t+k}))\big) + \tfrac{1}{2}\log\big(1 - C(E_p(x_i^t), E_p(x_i^{t+k}))\big) \qquad (4)$$
Thus the pose encoder is encouraged to produce features that the discriminator is unable to classify if
they come from the same clip or not. In so doing, the pose features cannot carry information about
object content, yielding the desired factorization. Note that this does assume that the object's pose is
not distinctive to a particular clip. While adversarial training is also used by GANs, our setup purely
considers classification; there is no generator network, for example.
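A minimal numeric sketch of the two halves of the adversarial objective (Eqs. 3-4), with the discriminator's output represented directly as a probability; the function names are ours, not from the released code.

```python
import numpy as np

def discriminator_loss(p_same, p_diff):
    """Eq. 3: binary cross-entropy for the scene discriminator C, where
    p_same is C's output on a pose pair from the same video (target 1)
    and p_diff its output on a pair from different videos (target 0)."""
    return -(np.log(p_same) + np.log(1.0 - p_diff))

def pose_encoder_loss(p_same):
    """Eq. 4: the pose encoder is pushed toward maximal discriminator
    uncertainty (target 1/2) on pairs from the same clip."""
    return -(0.5 * np.log(p_same) + 0.5 * np.log(1.0 - p_same))

# The encoder loss is minimized exactly when C is maximally uncertain:
print(round(pose_encoder_loss(0.5), 4))                 # 0.6931 (= log 2)
print(pose_encoder_loss(0.9) > pose_encoder_loss(0.5))  # True
```

A confident discriminator (p_same near 1, p_diff near 0) drives Eq. 3 toward zero, while Eq. 4 pushes the pose encoder to erase whatever cues made that confidence possible.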
Overall training objective: During training we minimize the sum of the above losses, with respect to
E_c, E_p, D and C:

$$\mathcal{L} = \mathcal{L}_{\text{reconstruction}}(E_c, E_p, D) + \alpha\,\mathcal{L}_{\text{similarity}}(E_c) + \beta\,\big(\mathcal{L}_{\text{adversarial}}(E_p) + \mathcal{L}_{\text{adversarial}}(C)\big) \qquad (5)$$

where α and β are hyper-parameters. The first three terms can be jointly optimized, but the discriminator C is updated while the other parts of the model (E_c, E_p, D) are held constant. The overall
model is shown in Fig. 1. Details of the training procedure and model architectures for E_c, E_p, D
and C are given in Section 4.1.
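A minimal sketch of the combined objective of Eq. 5, assuming the two hyper-parameter weights (garbled in this extraction) are α on the similarity term and β on the adversarial terms; the stand-in default values mirror the MNIST settings reported in Section 4.1.

```python
def total_loss(l_rec, l_sim, l_adv_ep, l_adv_c, alpha=1.0, beta=0.1):
    """Eq. 5: combined training objective. In practice the discriminator
    term is minimized in a separate step with E_c, E_p and D held fixed,
    and vice versa (alternating updates)."""
    return l_rec + alpha * l_sim + beta * (l_adv_ep + l_adv_c)

# Toy loss values, weighted as in Eq. 5:
print(total_loss(l_rec=2.0, l_sim=0.5, l_adv_ep=0.5, l_adv_c=0.5))  # 2.6
```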
[Figure 1 diagram: content encoder E_c, pose encoder E_p, frame decoder D, and scene discriminator C, with the associated loss terms.]
Figure 1: Left: The discriminator C is trained with binary cross entropy (BCE) loss to predict if a
pair of pose vectors comes from the same (top portion) or different (lower portion) scenes. xi and xj
denote frames from different sequences i and j. The frame offset k is sampled uniformly in the range
[0, K]. Note that when C is trained, the pose encoder Ep is fixed. Right: The overall model, showing
all terms in the loss function. Note that when the pose encoder Ep is updated, the scene discriminator
is held fixed.
[Figure 2 diagram: recurrent prediction of the latent pose vector h_p by an LSTM, with the content vector h_c held fixed.]
Figure 2: Generating future frames by recurrently predicting hp , the latent pose vector.
3.1 Forward Prediction
After training, the pose and content encoders Ep and Ec provide a representation which enables
video prediction in a straightforward manner. Given a frame x^t, the encoders produce h_p^t and h_c^t
respectively. To generate the next frame, we use these as input to an LSTM model to predict the next
pose features h_p^{t+1}. These are then passed (along with the content features) to the decoder, which
generates a pixel-space prediction x̃^{t+1}:

$$\tilde{h}_p^{t+1} = LSTM(E_p(x^t), h_c^t), \qquad \tilde{x}^{t+1} = D(\tilde{h}_p^{t+1}, h_c^t) \qquad (6)$$
$$\tilde{h}_p^{t+2} = LSTM(\tilde{h}_p^{t+1}, h_c^t), \qquad \tilde{x}^{t+2} = D(\tilde{h}_p^{t+2}, h_c^t) \qquad (7)$$
Note that while pose estimates are generated in a recurrent fashion, the content features h_c^t remain
fixed from the last observed real frame. This relies on the nature of L_reconstruction, which ensured
that content features can be combined with future pose vectors to give valid reconstructions.

The LSTM is trained separately from the main model using a standard ℓ2 loss between h̃_p^{t+1} and
h_p^{t+1}. Note that this generative model is far simpler than many other recent approaches, e.g. [12].
This is largely due to the forward model being applied within our disentangled representation, rather
than directly on raw pixels.
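The rollout of Eqs. 6-7 is just a loop in latent space: the pose vector is advanced recurrently while the content vector stays pinned to the last observed frame. The "LSTM" and "decoder" below are trivial stand-ins, only meant to show the data flow.

```python
import numpy as np

def rollout(h_c, h_p, lstm_step, decode, n_steps):
    """Generate n_steps future frames (Eqs. 6-7): predict the next pose
    from the previous one, keep h_c fixed, decode each step to pixels."""
    frames = []
    for _ in range(n_steps):
        h_p = lstm_step(h_p, h_c)        # next pose in latent space
        frames.append(decode(h_p, h_c))  # pixel-space prediction
    return frames

# Toy stand-ins: the "LSTM" shifts the pose, the "decoder" concatenates.
lstm_step = lambda h_p, h_c: h_p + 1.0
decode = lambda h_p, h_c: np.concatenate([h_p, h_c])
frames = rollout(np.zeros(2), np.zeros(2), lstm_step, decode, 3)
print(frames[-1])  # pose advanced three steps, content unchanged: [3. 3. 0. 0.]
```

Because only h_p is predicted, errors compound in a low-dimensional space rather than in pixels, which is consistent with the crisp long-range generations reported later.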
3.2 Classification
Another application of our disentangled representation is to use it for classification tasks. Content
features, which are trained to be invariant to local temporal changes, can be used to classify the
semantic content of an image. Conversely, a sequence of pose features can be used to classify actions
in a video sequence. In either case, we train a two layer classifier network S on top of either hc or hp ,
with its output predicting the class label y.
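A sketch of the two-layer classifier S applied on top of h_c (object identity) or a flattened sequence of h_p vectors (actions); the weights here are random stand-ins, not trained values, and the layer sizes are illustrative.

```python
import numpy as np

def classifier(features, W1, b1, W2, b2):
    """Two-layer classifier S: ReLU hidden layer followed by a softmax
    over class labels y."""
    hidden = np.maximum(0.0, features @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
h_c = rng.normal(size=8)                        # stand-in content vector
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 6)), np.zeros(6)  # 6 classes, as in KTH
probs = classifier(h_c, W1, b1, W2, b2)
print(probs.shape, round(float(probs.sum()), 6))  # (6,) 1.0
```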
4 Experiments
We evaluate our model on both synthetic (MNIST, NORB, SUNCG) and real (KTH Actions) video
datasets. We explore several tasks with our model: (i) the ability to cleanly factorize into content and
pose components; (ii) forward prediction of video frames using the approach from Section 3.1; (iii)
using the pose/content features for classification tasks.
4.1 Model details
We explored a variety of convolutional architectures for the content encoder E_c, pose encoder E_p
and decoder D. For MNIST, E_c, E_p and D all use a DCGAN architecture [21] with |h_p| = 5 and
|h_c| = 128. The encoders consist of 5 convolutional layers with subsampling. Batch normalization
and leaky ReLUs follow each convolutional layer except the final layer, which normalizes the
pose/content vectors to have unit norm. The decoder is a mirrored version of the encoder with 5
deconvolutional layers and a sigmoid output layer.
For both NORB and SUNCG, D is a DCGAN architecture while E_c and E_p use a ResNet-18
architecture [9] up until the final pooling layer, with |h_p| = 10 and |h_c| = 128.
For KTH, E_p uses a ResNet-18 architecture with |h_p| = 24. E_c uses the same architecture as VGG16
[29] up until the final pooling layer, with |h_c| = 128. The decoder is a mirrored version of the content
encoder with pooling layers replaced with spatial up-sampling. In the style of U-Net [25], we add
skip connections from the content encoder to the decoder, enabling the model to easily generate static
background features.
In all experiments the scene discriminator C is a fully connected neural network with 2 hidden layers
of 100 units. We trained all our models with the ADAM optimizer [13] and learning rate η = 0.002.
We used β = 0.1 for MNIST, NORB and SUNCG and β = 0.0001 for KTH experiments. We used
α = 1 for all datasets.
For future prediction experiments we train a two layer LSTM with 256 cells using the ADAM
optimizer. On MNIST, we train the model by observing 5 frames and predicting 10 frames. On KTH,
we train the model by observing 10 frames and predicting 10 frames.
4.2 Synthetic datasets
MNIST: We start with a toy dataset consisting of two MNIST digits bouncing around a 64x64
image. Each video sequence consists of a different pair of digits with independent trajectories.
Fig. 3(left) shows how the content vector from one frame and the pose vector from another generate
new examples that transfer the content and pose from the original frames. This demonstrates the
clean disentanglement produced by our model. Interestingly, for this data we found it to be necessary
to use a different color for the two digits. Our adversarial term is so aggressive that it prevents the
Figure 3: Left: Demonstration of content/pose factorization on held out MNIST examples. Each
image in the grid is generated using the pose and content vectors hp and hc taken from the corresponding images in the top row and first column respectively. The model has clearly learned to
disentangle content and pose. Right: Each row shows forward modeling up to 500 time steps into the
future, given 5 initial frames. For each generation, note that only the pose part of the representation is
being predicted from the previous time step (using an LSTM), with the content vector being fixed
from the 5th frame. The generations remain crisp despite the long-range nature of the predictions.
5
Figure 4: Left: Factorization examples using our DRNET model on held out NORB images. Each
image in the grid is generated using the pose and content vectors h_p and h_c taken from the corresponding images in the top row and first column respectively. Center: Examples where DRNET was
trained without the adversarial loss term. Note how content and pose are no longer factorized cleanly:
the pose vector now contains content information which ends up dominating the generation. Right:
factorization examples from Mathieu et al. [19].
Figure 5: Left: Examples of linear interpolation in pose space between the examples x1 and x2 .
Right: Factorization examples on held out images from the SUNCG dataset. Each image in the grid
is generated using the pose and content vectors hp and hc taken from the corresponding images in
the top row and first column respectively. Note how, even for complex objects, the model is able to
rotate them accurately.
pose vector from capturing any content information, thus without a color cue the model is unable to
determine which pose information to associate with which digit. In Fig. 3(right) we perform forward
modeling using our representation, demonstrating the ability to generate crisp digits 500 time steps
into the future.
NORB: We apply our model to the NORB dataset [16], converted into videos by taking sequences of
different azimuths, while holding object identity, lighting and elevation constant. Fig. 4.2(left) shows
that our model is able to factor content and pose cleanly on held out data. In Fig. 4.2(center) we train
a version of our model without the adversarial loss term, which results in a significant degradation
of the model: the pose vectors are no longer isolated from content. For comparison, we also show the
factorizations produced by Mathieu et al. [19], which are less clean, both in terms of disentanglement
and generation quality than our approach. Table 1 shows classification results on NORB, following
the training of a classifier on pose features and also content features. When the adversarial term is
used (β = 0.1) the content features perform well. Without the term, content features become less
effective for classification.
SUNCG: We use the rendering engine from the SUNCG dataset [30] to generate sequences where
the camera rotates around a range of 3D chair models. The dataset consists of 324 different chair
models of varying size, shape and color. DRNET learns a clean factorization of content and pose and
is able to generate high quality examples of this dataset, as shown in Fig. 4.2(right).
4.3 KTH Action Dataset
Finally, we apply DRNET to the KTH dataset [28]. This is a simple dataset of real-world videos of
people performing one of six actions (walking, jogging, running, boxing, handwaving, hand-clapping)
against fairly uniform backgrounds. In Fig. 4.3 we show forward generations of different held out
examples, comparing against two baselines: (i) the MCNet of Villegas et al. [33] which, to the best
of our knowledge, produces the current best quality generations on real-world video, and (ii) a
baseline auto-encoder LSTM model (AE-LSTM). This is essentially the same as ours, but with
a single encoder whose features thus combine content and pose (as opposed to factoring them in
DRNET). It is also similar to [31].
Fig. 7 shows more examples, with generations out to 100 time steps. For most actions this is sufficient
time for the person to have left the frame, thus further generations would be of a fixed background.
In Fig. 9 we attempt to quantify the fidelity of the generations by comparing our approach to MCNet
[33] using a metric derived from the Inception score [26]. The Inception score is used for assessing
generations from GANs and is more appropriate for our scenario than traditional metrics such as
PSNR or SSIM (see appendix B for further discussion). The curves show the mean scores of our
generations decaying more gracefully than MCNet [33]. Further examples and generated movies may
be viewed in appendix A and also at https://sites.google.com/view/drnet-paper//.
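The Inception-score-style metric used here can be sketched as the exponential of the mean KL divergence between per-sample class posteriors and their marginal; the class probabilities below are toy inputs rather than outputs of a trained Inception network.

```python
import numpy as np

def inception_score(class_probs):
    """exp( E_x KL( p(y|x) || p(y) ) ) over rows of class_probs,
    following Salimans et al. [26]."""
    p_y = class_probs.mean(axis=0)
    kl = np.sum(class_probs * (np.log(class_probs) - np.log(p_y)), axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions score high; uniform ones score 1.
confident = np.full((4, 4), 0.01)
np.fill_diagonal(confident, 0.97)
uniform = np.full((4, 4), 0.25)
print(inception_score(uniform))          # 1.0
print(inception_score(confident) > 3.0)  # True
```

A generation that degrades over time produces less confident class posteriors, so the score decaying with the future time step is exactly what Fig. 9 plots.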
A natural concern with high capacity models is that they might be memorizing the training examples.
We probe this in Fig. 4.3, where we show the nearest neighbors to our generated frames from the
training set. Fig. 8 uses the pose representation produced by DRNET to train an action classifier
from very few examples. We extract pose vectors from video sequences of length 24 and train a fully
connected classifier on these vectors to predict the action class. We compare against an autoencoder
baseline, which is the same as ours but with a single encoder whose features thus combine content
and pose. We find the factorization significantly boosts performance.
Figure 6: Qualitative comparison between our DRNET model, MCNet [33] and the AE-LSTM
baseline. All models are conditioned on the first 10 video frames and generate 20 frames. We display
predictions of every 3rd frame. Video sequences are taken from held out examples of the KTH dataset
for the classes of walking (top) and running (bottom).
Figure 7: Four additional examples of generations on held out examples of the KTH dataset, rolled
out to 100 timesteps.
Table 1: Classification results on the NORB dataset, with/without adversarial loss (β = 0.1/0) using
content or pose representations (h_c, h_p respectively). The adversarial term is crucial for forcing
semantic information into the content vectors; without it, performance drops significantly.

Model                  Accuracy (%)
DRNET β=0.1, h_c       93.3
DRNET β=0.1, h_p       60.9
DRNET β=0,   h_c       72.6
DRNET β=0,   h_p       80.8
Mathieu et al. [19]    86.5

Figure 8: Classification of KTH actions from pose vectors with few labeled examples, with autoencoder baseline. N.B. SOA (fully supervised) is 93.9% [15].

Figure 9: Comparison of KTH video generation quality using Inception score. The x-axis indicates
how far from the conditioned input the start of the generated sequence is. [Plot omitted: Inception
score (roughly 1.3-1.75) vs. future time step (0-100) for DrNet and MCNet.]
Figure 10: For each frame generated by D R N ET (top row in each set), we show nearest-neighbor
images from the training set, based on pose vectors (middle row) and both content and pose vectors
(bottom row). It is evident that our model is not simply copying examples from the training data.
Furthermore, the middle row shows that the pose vector generalizes well, and is independent of
background and clothing.
5 Discussion
In this paper we introduced a model based on a pair of encoders that factor video into content and
pose. This separation is achieved during training through a novel adversarial loss term. The resulting
representation is versatile, in particular allowing for stable and coherent long-range prediction through
nothing more than a standard LSTM. Our generations compare favorably with leading approaches,
despite being a simple model, e.g. lacking the GAN losses or probabilistic formulations of other
video generation approaches. Source code is available at https://github.com/edenton/drnet.
Acknowledgments
We thank Rob Fergus, Will Whitney and Jordan Ash for helpful comments and advice. Emily Denton
is grateful for the support of a Google Fellowship.
References
[1] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential
learning of intuitive physics. arXiv preprint arXiv:1606.07419, 2016.
[2] S. Chiappa, S. Racaniere, D. Wierstra, and S. Mohamed. Recurrent environment simulators. In
ICLR, 2017.
[3] F. Cricri, M. Honkala, X. Ni, E. Aksu, and M. Gabbouj. Video ladder networks. arXiv preprint
arXiv:1612.01756, 2016.
[4] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context
prediction. In CVPR, pages 1422-1430, 2015.
[5] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through
video prediction. In arXiv 1605.07157, 2016.
[6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[7] W. Grathwohl and A. Wilson. Disentangling space and time in video with hierarchical variational
auto-encoders. arXiv preprint arXiv:1612.04440, 2016.
[8] R. Hartley and A. Zisserman. Multiple view geometry in computer vision, 2000.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In The
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[10] G. E. Hinton, A. Krizhevsky, and S. Wang. Transforming auto-encoders. In ICANN, 2011.
[11] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504-507, 2006.
[12] N. Kalchbrenner, A. van den Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and
K. Kavukcuoglu. Video pixel networks. In arXiv:1610.00527, 2016.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference
on Learning Representations, 2015.
[14] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics
network. In Advances in Neural Information Processing Systems, pages 2539-2547, 2015.
[15] Q. V. Le, W. Y. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical invariant spatio-temporal
features for action recognition with independent subspace analysis. In Proceedings of the 2011
IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[16] Y. LeCun, F. Huang, and L. Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In CVPR, 2004.
[17] C. Liu. Beyond pixels: exploring new representations and applications for motion analysis.
PhD thesis, Massachusetts Institute of Technology, 2009.
[18] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square
error. arXiv 1511.05440, 2015.
[19] M. Mathieu, P. S. Junbo Zhao, A. Ramesh, and Y. LeCun. Disentangling factors of variation in
deep representations using adversarial training. In Advances in Neural Information Processing
Systems 29, 2016.
[20] J. Oh, X. Guo, H. Lee, R. Lewis, and S. Singh. Action-conditional video prediction using deep
networks in Atari games. In NIPS, 2015.
[21] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In The International Conference on Learning
Representations, 2016.
9
[22] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language)
modeling: a baseline for generative models of natural videos. arXiv 1412.6604, 2014.
[23] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with
ladder network. In Advances in Neural Information Processing Systems 28, 2015.
[24] S. Reed, Z. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In NIPS, 2015.
[25] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image
segmentation. In International Conference on Medical Image Computing and Computer-Assisted
Intervention, pages 234-241. Springer International Publishing, 2015.
[26] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved
techniques for training gans. arXiv 1606.03498, 2016.
[27] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. Pixelcnn++: Improving the pixelcnn with
discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517,
2017.
[28] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local svm approach. In
Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on,
volume 3, pages 32-36. IEEE, 2004.
[29] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. In The International Conference on Learning Representations, 2015.
[30] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic scene completion
from a single depth image. IEEE Conference on Computer Vision and Pattern Recognition,
2017.
[31] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[32] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In
ICML, 2016.
[33] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee. Decomposing motion and content for natural
video sequence prediction. In ICLR, 2017.
[34] C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In arXiv
1609.02612, 2016.
[35] J. Walker, A. Gupta, and M. Hebert. Dense optical flow prediction from a static image. In ICCV,
2015.
[36] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In CVPR,
pages 2794-2802, 2015.
[37] W. F. Whitney, M. Chang, T. Kulkarni, and J. B. Tenenbaum. Understanding visual concepts
with continuation learning. arXiv preprint arXiv:1502.04623, 2016.
[38] L. Wiskott and T. Sejnowski. Slow feature analysis: Unsupervised learning of invariance.
Neural Computation, 14(4):715-770, 2002.
[39] T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame
synthesis via cross convolutional networks. In NIPS, 2016.
[40] J. Zhao, M. Mathieu, R. Goroshin, and Y. LeCun. Stacked what-where auto-encoders. In
International Conference on Learning Representations, 2016.
Federated Multi-Task Learning
Virginia Smith
Stanford
[email protected]
Chao-Kai Chiang*
USC
[email protected]
Maziar Sanjabi*
USC
Ameet Talwalkar
CMU
[email protected] [email protected]
Abstract
Federated learning poses new statistical and systems challenges in training machine
learning models over distributed networks of devices. In this work, we show that
multi-task learning is naturally suited to handle the statistical challenges of this
setting, and propose a novel systems-aware optimization method, MOCHA, that is
robust to practical systems issues. Our method and theory for the first time consider
issues of high communication cost, stragglers, and fault tolerance for distributed
multi-task learning. The resulting method achieves significant speedups compared
to alternatives in the federated setting, as we demonstrate through simulations on
real-world federated datasets.
1 Introduction
Mobile phones, wearable devices, and smart homes are just a few of the modern distributed networks
generating massive amounts of data each day. Due to the growing storage and computational
power of devices in these networks, it is increasingly attractive to store data locally and push more
network computation to the edge. The nascent field of federated learning explores training statistical
models directly on devices [37]. Examples of potential applications include: learning sentiment,
semantic location, or activities of mobile phone users; predicting health events like low blood sugar
or heart attack risk from wearable devices; or detecting burglaries within smart homes [3, 39, 42].
Following [25, 36, 26], we summarize the unique challenges of federated learning below.
1. Statistical Challenges: The aim in federated learning is to fit a model to data, {X_1, . . . , X_m},
generated by m distributed nodes. Each node, t ∈ [m], collects data in a non-IID manner across the
network, with data on each node being generated by a distinct distribution X_t ~ P_t. The number
of data points on each node, n_t, may also vary significantly, and there may be an underlying
structure present that captures the relationship amongst nodes and their associated distributions.
2. Systems Challenges: There are typically a large number of nodes, m, in the network, and
communication is often a significant bottleneck. Additionally, the storage, computational, and
communication capacities of each node may differ due to variability in hardware (CPU, memory),
network connection (3G, 4G, WiFi), and power (battery level). These systems challenges, compounded with unbalanced data and statistical heterogeneity, make issues such as stragglers and
fault tolerance significantly more prevalent than in typical data center environments.
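As a concrete illustration of the statistical challenges above, the following toy sketch (not from the paper; all sizes and distributions are made up) generates non-IID, unbalanced data across m nodes, with each node drawing from its own distribution P_t and holding a different number of points n_t:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated data: m nodes, each with a node-specific distribution
# (a random mean shift) and a node-specific sample size.
m, d = 5, 3
data = []
for t in range(m):
    n_t = rng.integers(10, 100)            # unbalanced: n_t varies per node
    mean_t = rng.normal(size=d)            # non-IID: node-specific distribution
    X_t = rng.normal(loc=mean_t, size=(n_t, d))
    data.append(X_t)

sizes = [X_t.shape[0] for X_t in data]
print(sizes)  # five different per-node sample sizes
```

Any method for this setting must cope with both the distribution shift across nodes and the variation in per-node sample size.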
In this work, we propose a modeling approach that differs significantly from prior work on federated
learning, where the aim thus far has been to train a single global model across the network [25, 36, 26].
Instead, we address statistical challenges in the federated setting by learning separate models for each
node, {w1 , . . . , wm }. This can be naturally captured through a multi-task learning (MTL) framework,
where the goal is to consider fitting separate but related models simultaneously [14, 2, 58, 28].
Unfortunately, current multi-task learning methods are not suited to handle the systems challenges
that arise in federated learning, including high communication cost, stragglers, and fault tolerance.
Addressing these challenges is therefore a key component of our work.
* Authors contributed equally.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Contributions
We make the following contributions. First, we show that MTL is a natural choice to handle statistical
challenges in the federated setting. Second, we develop a novel method, MOCHA, to solve a general
MTL problem. Our method generalizes the distributed optimization method CoCoA [22, 31] in
order to address systems challenges associated with network size and node heterogeneity. Third, we
provide convergence guarantees for MOCHA that carefully consider these unique systems challenges
and provide insight into practical performance. Finally, we demonstrate the superior empirical
performance of MOCHA with a new benchmarking suite of federated datasets.
2 Related Work
Learning Beyond the Data Center. Computing SQL-like queries across distributed, low-powered
nodes is a decades-long area of research that has been explored under the purview of query processing
in sensor networks, computing at the edge, and fog computing [32, 12, 33, 8, 18, 15]. Recent works
have also considered training machine learning models centrally but serving and storing them locally,
e.g., this is a common approach in mobile user modeling and personalization [27, 43, 44]. However,
as the computational power of the nodes within distributed networks grows, it is possible to do even
more work locally, which has led to recent interest in federated learning.2 In contrast to our proposed
approach, existing federated learning approaches [25, 36, 26, 37] aim to learn a single global model
across the data.3 This limits their ability to deal with non-IID data and structure amongst the nodes.
These works also come without convergence guarantees, and have not addressed practical issues of
stragglers or fault tolerance, which are important characteristics of the federated setting. The work
proposed here is, to the best of our knowledge, the first federated learning framework to consider
these challenges, theoretically and in practice.
Multi-Task Learning. In multi-task learning, the goal is to learn models for multiple related tasks
simultaneously. While the MTL literature is extensive, most MTL modeling approaches can be
broadly categorized into two groups based on how they capture relationships amongst tasks. The first
(e.g., [14, 4, 11, 24]) assumes that a clustered, sparse, or low-rank structure between the tasks is known
a priori. A second group instead assumes that the task relationships are not known beforehand and
can be learned directly from the data (e.g., [21, 58, 16]). In this work, we focus our attention on this
latter group, as task relationships may not be known beforehand in real-world settings. In comparison
to learning a single global model, these MTL approaches can directly capture relationships amongst
non-IID and unbalanced data, which makes them particularly well-suited for the statistical challenges
of federated learning. We demonstrate this empirically on real-world federated datasets in Section 5.
However, although MTL is a natural modeling choice to address the statistical challenges of federated
learning, currently proposed methods for distributed MTL (discussed below) do not adequately
address the systems challenges associated with federated learning.
Distributed Multi-Task Learning. Distributed multi-task learning is a relatively new field, in
which the aim is to solve an MTL problem when data for each task is distributed over a network.
While several recent works [1, 35, 54, 55] have considered the issue of distributed MTL training, the
proposed methods do not allow for flexibility of communication versus computation. As a result, they
are unable to efficiently handle concerns of fault tolerance and stragglers, the latter of which stems
from both data and system heterogeneity. The works of [23] and [7] allow for asynchronous updates
to help mitigate stragglers, but do not address fault tolerance. Moreover, [23] provides no convergence
guarantees, and the convergence of [7] relies on a bounded delay assumption that is impractical for
the federated setting, where delays may be significant and devices may drop out completely. Finally,
[30] proposes a method and setup leveraging the distributed framework CoCoA [22, 31], which
we show in Section 4 to be a special case of the more general approach in this work. However, the
authors in [30] do not explore the federated setting, and their assumption that the same amount of
work is done locally on each node is prohibitive in federated settings, where unbalance is common
due to data and system variability.
2. The term on-device learning has been used to describe both the task of model training and of model serving.
Due to the ambiguity of this phrase, we exclusively use the term federated learning.
3. While not the focus of our work, we note privacy is an important concern in the federated setting, and that
the privacy benefits associated with global federated learning (as discussed in [36]) also apply to our approach.
3 Federated Multi-Task Learning
In federated learning, the aim is to learn a model over data that resides on, and has been generated by,
m distributed nodes. As a running example, consider learning the activities of mobile phone users in
a cell network based on their individual sensor, text, or image data. Each node (phone), t ∈ [m], may
generate data via a distinct distribution, and so it is natural to fit separate models, {w_1, . . . , w_m},
to the distributed data, one for each local dataset. However, structure between models frequently
exists (e.g., people may behave similarly when using their phones), and modeling these relationships
via multi-task learning is a natural strategy to improve performance and boost the effective sample
size for each node [10, 2, 5]. In this section, we suggest a general MTL framework for the federated
setting, and propose a novel method, MOCHA, to handle the systems challenges of federated MTL.
3.1 General Multi-Task Learning Setup
Given data X_t ∈ R^{d×n_t} from m nodes, multi-task learning fits separate weight vectors w_t ∈ R^d to
the data for each task (node) through arbitrary convex loss functions ℓ_t (e.g., the hinge loss for SVM
models). Many MTL problems can be captured via the following general formulation:
min_{W,Ω} { Σ_{t=1}^{m} Σ_{i=1}^{n_t} ℓ_t(w_tᵀ x_t^i, y_t^i) + R(W, Ω) },    (1)

where W := [w_1, . . . , w_m] ∈ R^{d×m} is a matrix whose t-th column is the weight vector for the
t-th task. The matrix Ω ∈ R^{m×m} models relationships amongst tasks, and is either known a
priori or estimated while simultaneously learning task models. MTL problems differ based on their
assumptions on R, which takes Ω as input and promotes some suitable structure amongst the tasks.
As an example, several popular MTL approaches assume that tasks form clusters based on whether or
not they are related [14, 21, 58, 59]. This can be expressed via the following bi-convex formulation:
R(W, Ω) = λ₁ tr(WΩWᵀ) + λ₂ ‖W‖²_F,    (2)

with constants λ₁, λ₂ > 0, and where the second term performs L2 regularization on each local model.
We use a similar formulation with variable clusters (12) in our experiments in Section 5, and provide
details on other common classes of MTL models that can be formulated via (1) in Appendix B.
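To make the formulation concrete, here is a minimal NumPy sketch (our own illustration; the function name and toy data are not from the paper) of the objective in (1) instantiated with the cluster regularizer (2) and the hinge loss:

```python
import numpy as np

def mtl_objective(W, Omega, Xs, ys, lam1=0.1, lam2=0.1):
    """Objective (1) with cluster regularizer (2), using the hinge loss.

    W:      d x m matrix whose t-th column is the weight vector for task t.
    Omega:  m x m task-relationship matrix.
    Xs, ys: per-task data; Xs[t] is n_t x d, ys[t] has labels in {-1, +1}.
    """
    loss = sum(
        np.maximum(0.0, 1.0 - ys[t] * (Xs[t] @ W[:, t])).sum()
        for t in range(len(Xs))
    )
    reg = lam1 * np.trace(W @ Omega @ W.T) + lam2 * np.sum(W ** 2)
    return loss + reg

# Tiny check on synthetic data (values are illustrative only).
rng = np.random.default_rng(1)
d, m = 4, 3
W = rng.normal(size=(d, m))
Omega = np.eye(m)  # e.g., tasks treated as independent
Xs = [rng.normal(size=(5, d)) for _ in range(m)]
ys = [np.sign(rng.normal(size=5)) for _ in range(m)]
print(mtl_objective(W, Omega, Xs, ys))
```

At W = 0 the regularizer vanishes and every point incurs a hinge loss of exactly 1, so the objective equals the total number of data points, which is a handy sanity check.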
3.2 MOCHA: A Framework for Federated Multi-Task Learning
In the federated setting, the aim is to train statistical models directly on the edge, and thus we
solve (1) while assuming that the data {X_1, . . . , X_m} is distributed across m nodes or devices.
Before proposing our federated method for solving (1), we make the following observations:
• Observation 1: In general, (1) is not jointly convex in W and Ω, and even in the cases where (1)
is convex, solving for W and Ω simultaneously can be difficult [5].
• Observation 2: When fixing Ω, updating W depends on both the data X, which is distributed
across the nodes, and the structure Ω, which is known centrally.
• Observation 3: When fixing W, optimizing for Ω only depends on W and not on the data X.
Based on these observations, it is natural to propose an alternating optimization approach to solve
problem (1), in which at each iteration we fix either W or Ω and optimize over the other, alternating
until convergence is reached. Note that solving for Ω is not dependent on the data and therefore can
be computed centrally; as such, we defer to prior work for this step [59, 21, 58, 16]. In Appendix B,
we discuss updates to Ω for several common MTL models.
In this work, we focus on developing an efficient distributed optimization method for the W step. In
traditional data center environments, the task of distributed training is a well-studied problem, and
various communication-efficient frameworks have been recently proposed, including the state-of-the-art
primal-dual CoCoA framework [22, 31]. Although CoCoA can be extended directly to update
W in a distributed fashion across the nodes, it cannot handle the unique systems challenges of the
federated environment, such as stragglers and fault tolerance, as discussed in Section 3.4. To this
end, we extend CoCoA and propose a new method, MOCHA, for federated multi-task learning. Our
method is given in Algorithm 1 and described in detail in Sections 3.3 and 3.4.
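A skeleton of this alternating scheme might look as follows. This is a sketch, not the paper's solver: the Ω step is model-specific and deferred to prior work, so `update_omega` below is a deliberate placeholder that keeps Ω fixed, and the W step uses plain gradient descent on a squared-loss instance of (1) with regularizer (2):

```python
import numpy as np

def w_step(W, Omega, Xs, ys, lam1=0.1, lam2=0.1, lr=0.005, iters=100):
    # Gradient descent on a squared-loss version of (1) with regularizer (2).
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(len(Xs)):
            G[:, t] = 2 * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
        G += lam1 * W @ (Omega + Omega.T) + 2 * lam2 * W
        W = W - lr * G
    return W

def update_omega(W, Omega):
    # Placeholder: the centrally computed Omega step is model-specific
    # and handled by prior work, so we simply keep Omega fixed here.
    return Omega

def objective(W, Omega, Xs, ys, lam1=0.1, lam2=0.1):
    loss = sum(np.sum((Xs[t] @ W[:, t] - ys[t]) ** 2) for t in range(len(Xs)))
    return loss + lam1 * np.trace(W @ Omega @ W.T) + lam2 * np.sum(W ** 2)

rng = np.random.default_rng(2)
d, m = 3, 4
Xs = [rng.normal(size=(8, d)) for _ in range(m)]
ys = [rng.normal(size=8) for _ in range(m)]
W, Omega = np.zeros((d, m)), np.eye(m)
before = objective(W, Omega, Xs, ys)
for _ in range(3):  # alternate; in practice, run until convergence
    W = w_step(W, Omega, Xs, ys)
    Omega = update_omega(W, Omega)
after = objective(W, Omega, Xs, ys)
print(before, after)
```

Each alternation fixes one block of variables and improves the other, so the objective decreases monotonically under suitable step sizes.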
Algorithm 1 MOCHA: Federated Multi-Task Learning Framework
1: Input: Data X_t from t = 1, . . . , m tasks, stored on one of m nodes, and initial matrix Ω_0
2: Starting point α^(0) := 0 ∈ R^n, v^(0) := 0 ∈ R^b
3: for iterations i = 0, 1, . . . do
4:   Set subproblem parameter σ′ and number of federated iterations H_i
5:   for iterations h = 0, 1, . . . , H_i do
6:     for tasks t ∈ {1, 2, . . . , m} in parallel over m nodes do
7:       call local solver, returning a θ_t^h-approximate solution Δα_t of the local subproblem (4)
8:       update local variables α_t ← α_t + Δα_t
9:       return updates Δv_t := X_t Δα_t
10:    reduce: v_t ← v_t + Δv_t
11:  Update Ω centrally based on w(α) for the latest α
12: Central node computes w = w(α) based on the latest α
13: return: W := [w_1, . . . , w_m]
3.3 Federated Update of W
To update W in the federated setting, we begin by extending works on distributed primal-dual
optimization [22, 31, 30] to apply to the generalized multi-task framework (1). This involves deriving
the appropriate dual formulation, subproblems, and problem parameters, as we detail below.
Dual problem. Considering the dual formulation of (1) will allow us to better separate the global
problem into distributed subproblems for federated computation across the nodes. Let n := Σ_{t=1}^{m} n_t
and X := Diag(X_1, . . . , X_m) ∈ R^{md×n}. With Ω fixed, the dual of problem (1), defined with
respect to dual variables α ∈ R^n, is given by:

min_α D(α) := Σ_{t=1}^{m} Σ_{i=1}^{n_t} ℓ_t*(−α_t^i) + R*(Xα),    (3)

where ℓ_t* and R* are the conjugate dual functions of ℓ_t and R, respectively, and α_t^i is the dual
variable for the data point (x_t^i, y_t^i). Note that R* depends on Ω, but for the sake of simplicity, we
have removed this in our notation. To derive distributed subproblems from this global dual, we make
an assumption described below on the regularizer R.
Assumption 1. Given Ω, we assume that there exists a symmetric positive definite matrix M ∈
R^{md×md}, depending on Ω, for which the function R is strongly convex with respect to M^{−1}. Note
that this corresponds to assuming that R* will be smooth with respect to the matrix M.
Remark 1. We can reformulate the MTL regularizer in the form R̄(w, Ω̄) = R(W, Ω), where
w ∈ R^{md} is a vector containing the columns of W and Ω̄ := Ω ⊗ I_{d×d} ∈ R^{md×md}. For example,
we can rewrite the regularizer in (2) as R̄(w, Ω̄) = tr(wᵀ(λ₁Ω̄ + λ₂I)w). Writing the regularizer
in this form, it is clear that it is strongly convex with respect to the matrix M^{−1} = λ₁Ω̄ + λ₂I.
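The construction in Remark 1 can be checked numerically on toy sizes. The sketch below (our own illustration, with made-up dimensions and constants) builds the Kronecker-lifted matrix Ω ⊗ I and verifies that λ₁(Ω ⊗ I) + λ₂I is positive definite, which is the property Assumption 1 requires of the regularizer in (2):

```python
import numpy as np

d, m = 2, 3
lam1, lam2 = 0.5, 0.1

A = np.random.default_rng(3).normal(size=(m, m))
Omega = A @ A.T                         # any symmetric PSD task matrix
Omega_bar = np.kron(Omega, np.eye(d))   # Omega_bar = Omega (x) I_{d x d}

M_inv = lam1 * Omega_bar + lam2 * np.eye(m * d)
eigvals = np.linalg.eigvalsh(M_inv)
print(eigvals.min() > 0)  # -> True: positive definite, so M = M_inv^{-1} exists
```

Since Ω = AAᵀ is positive semidefinite, every eigenvalue of M⁻¹ is at least λ₂ > 0, so the lower bound holds for any choice of Ω of this form.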
Data-local quadratic subproblems. To solve (1) across distributed nodes, we define the following
data-local subproblems, which are formed via a careful quadratic approximation of the dual problem
(3) to separate computation across the nodes. These subproblems find updates Δα_t ∈ R^{n_t} to the dual
variables in α corresponding to a single node t, and only require accessing data which is available
locally, i.e., X_t for node t. The t-th subproblem is given by:

min_{Δα_t} G_t^{σ′}(Δα_t; v_t, α_t) := Σ_{i=1}^{n_t} ℓ_t*(−α_t^i − Δα_t^i) + ⟨w_t(α), X_t Δα_t⟩ + (σ′/2) ‖X_t Δα_t‖²_{M_t} + c(α),    (4)

where c(α) := (1/m) R*(Xα), and M_t ∈ R^{d×d} is the t-th diagonal block of the symmetric positive
definite matrix M. Given dual variables α, corresponding primal variables can be found via w(α) =
∇R*(Xα), where w_t(α) is the t-th block in the vector w(α). Note that computing w(α) requires
the vector v = Xα. The t-th block of v, v_t ∈ R^d, is the only information that must be communicated
between nodes at each iteration. Finally, σ′ > 0 measures the difficulty of the data partitioning, and
helps to relate progress made on the subproblems to the global dual problem. It can be easily selected
based on M for many applications of interest; we provide details in Lemma 9 of the Appendix.
3.4 Practical Considerations
During MOCHA's federated update of W, the central node requires a response from all workers before
performing a synchronous update. In the federated setting, a naive execution of this communication
protocol could introduce dramatic straggler effects due to node heterogeneity. To avoid stragglers,
MOCHA provides the t-th node with the flexibility to approximately solve its subproblem G_t^{σ′}(·),
where the quality of the approximation is controlled by a per-node parameter θ_t^h. The following
factors determine the quality of the t-th node's solution to its subproblem:
1. Statistical challenges, such as the size of X_t and the intrinsic difficulty of subproblem G_t^{σ′}(·).
2. Systems challenges, such as the node's storage, computational, and communication capacities
due to hardware (CPU, memory), network connection (3G, 4G, WiFi), and power (battery level).
3. A global clock cycle imposed by the central node specifying a deadline for receiving updates.
We define θ_t^h as a function of these factors, and assume that each node has a controller that may
derive θ_t^h from the current clock cycle and statistical/systems setting. θ_t^h ranges from zero to one,
where θ_t^h = 0 indicates an exact solution to G_t^{σ′}(·) and θ_t^h = 1 indicates that node t made no progress
during iteration h (which we refer to as a dropped node). For instance, a node may "drop" if it runs
out of battery, or if its network bandwidth deteriorates during iteration h and it is thus unable to return
its update within the current clock cycle. A formal definition of θ_t^h is provided in (5) of Section 4.
MOCHA mitigates stragglers by enabling the t-th node to define its own θ_t^h. On every iteration
h, the local updates that a node performs and sends in a clock cycle will yield a specific value
for θ_t^h. As discussed in Section 4, MOCHA is additionally robust to a small fraction of nodes
periodically dropping and performing no local updates (i.e., θ_t^h := 1) under suitable conditions,
as defined in Assumption 2. In contrast, the prior work of CoCoA may suffer from severe straggler
effects in federated settings, as it requires a fixed θ_t^h = Θ across all nodes and all iterations while still
maintaining synchronous updates, and it does not allow for the case of dropped nodes (Θ := 1).
Finally, we note that asynchronous updating schemes are an alternative approach to mitigate stragglers.
We do not consider these approaches in this work, in part due to the fact that the bounded-delay
assumptions associated with most asynchronous schemes limit fault tolerance. However, it would be
interesting to further explore the differences and connections between asynchronous methods and
approximation-based, synchronous methods like MOCHA in future work.
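One way such a per-node controller could work is sketched below. The mapping from completed local iterations to θ_t^h (geometric decay, as one would see from a linearly convergent local solver) is our own assumption for illustration, not a mechanism specified by the paper:

```python
# Hypothetical controller: a node estimates how many local iterations fit
# within the global clock cycle; fewer iterations mean a looser theta, and
# a node that completes none "drops" for the round (theta = 1).
def local_quality(deadline_s, secs_per_iter, decay=0.5):
    iters = int(deadline_s // secs_per_iter)
    if iters == 0:
        return 1.0       # dropped node: no progress this round
    return decay ** iters  # geometric decay per completed local iteration

print(local_quality(1.0, 0.25))  # fast node: 4 iters -> theta = 0.0625
print(local_quality(1.0, 2.0))   # straggler: drops  -> theta = 1.0
```

Under this sketch, a slow network or a tight deadline simply yields a larger θ_t^h for that round rather than stalling the synchronous update.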
4 Convergence Analysis
MOCHA is based on a bi-convex alternating approach, which is guaranteed to converge [17, 45] to
a stationary solution of problem (1). In the case where this problem is jointly convex with respect
to W and Ω, such a solution is also optimal. In the rest of this section, we therefore focus on the
convergence of solving the W update of MOCHA in the federated setting. Following the discussion
in Section 3.4, we first introduce the following per-node, per-round approximation parameter.
Definition 1 (Per-Node-Per-Iteration-Approximation Parameter). At each iteration h, we define the
accuracy level of the solution calculated by node t to its subproblem (4) as:

θ_t^h := [G_t^{σ′}(Δα_t^{(h)}; v^{(h)}, α_t^{(h)}) − G_t^{σ′}(Δα_t^*; v^{(h)}, α_t^{(h)})] / [G_t^{σ′}(0; v^{(h)}, α_t^{(h)}) − G_t^{σ′}(Δα_t^*; v^{(h)}, α_t^{(h)})],    (5)

where Δα_t^* is the minimizer of subproblem G_t^{σ′}(· ; v^{(h)}, α_t^{(h)}). We allow this value to vary between
[0, 1], with θ_t^h := 1 meaning that no updates to subproblem G_t^{σ′} are made by node t at iteration h.
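Definition (5) is easy to illustrate on a one-dimensional quadratic stand-in for the subproblem (a hypothetical G, not the paper's exact G_t^{σ′}): θ is simply the relative suboptimality of the returned update, normalized so that an exact solve gives 0 and no progress gives 1.

```python
# theta per definition (5), for a generic subproblem G:
#   (G at returned update - G at minimizer) / (G at zero update - G at minimizer)
def theta(G, delta, delta_star):
    return (G(delta) - G(delta_star)) / (G(0.0) - G(delta_star))

G = lambda a: (a - 2.0) ** 2  # toy subproblem, minimized at delta_star = 2

print(theta(G, 2.0, 2.0))  # exact local solve -> 0.0
print(theta(G, 0.0, 2.0))  # no progress (dropped node) -> 1.0
print(theta(G, 1.0, 2.0))  # partial progress -> 0.25
```

Any partial update strictly between these extremes lands in (0, 1), matching the range allowed in Definition 1.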
While the flexible per-node, per-iteration approximation parameter θ_t^h in (5) allows the consideration
of stragglers and fault tolerance, these additional degrees of freedom also pose new challenges in
providing convergence guarantees for MOCHA. We introduce the following assumption on θ_t^h to
provide our convergence guarantees.
Assumption 2. Let H_h := (α^{(h)}, α^{(h−1)}, . . . , α^{(1)}) be the dual vector history until the beginning
of iteration h, and define Θ_t^h := E[θ_t^h | H_h]. For all tasks t and all iterations h, we assume p_t^h :=
P[θ_t^h = 1] ≤ p_max < 1 and Θ̄_t^h := E[θ_t^h | H_h, θ_t^h < 1] ≤ Θ_max < 1.
This assumption states that at each iteration, the probability of a node sending a result is non-zero,
and that the quality of the returned result is, on average, better than the previous iterate. Compared
to [49, 30], which assume θ_t^h = Θ < 1, our assumption is significantly less restrictive and better
models the federated setting, where nodes are unreliable and may periodically drop out.
Using Assumption 2, we derive the following theorem, which characterizes the convergence of the
federated update of MOCHA in finite horizon when the losses ℓ_t in (1) are smooth.
Theorem 1. Assume that the losses ℓ_t are (1/μ)-smooth. Then, under Assumptions 1 and 2, there
exists a constant s ∈ (0, 1] such that for any given convergence target ε_D, choosing H such that

H ≥ (1 / ((1 − Θ̄)s)) log(n / ε_D),    (6)

will satisfy E[D(α^{(H)}) − D(α*)] ≤ ε_D.
Here, Θ̄ := p_max + (1 − p_max)Θ_max < 1. While Theorem 1 is concerned with finite horizon convergence,
it is possible to get asymptotic convergence results, i.e., H → ∞, with milder assumptions on
the stragglers; see Corollary 8 in the Appendix for details.
When the loss functions are non-smooth, e.g., the hinge loss for SVM models, we provide the
following sub-linear convergence for L-Lipschitz losses.
Theorem 2. If the loss functions ℓ_t are L-Lipschitz, then there exists a constant σ, defined in (24),
such that for any given ε_D > 0, if we choose

H ≥ H_0 + (2 / (1 − Θ̄)) max{1, 2L²σσ′ / (n²ε_D)},    (7)

with H_0 ≥ h_0 + (16L²σσ′) / ((1 − Θ̄)n²ε_D), h_0 = 1 + ⌈(1 / (1 − Θ̄)) log(2n²(D(α*) − D(α^{(0)})) / (4L²σσ′))⌉₊,

then ᾱ := (1 / (H − H_0)) Σ_{h=H_0+1}^{H} α^{(h)} will satisfy E[D(ᾱ) − D(α*)] ≤ ε_D.
These theorems guarantee that MOCHA will converge in the federated setting, under mild assumptions
on stragglers and capabilities of the nodes. While these results consider convergence in terms of the
dual, we show that they hold analogously for the duality gap. We provide all proofs in Appendix C.
Remark 2. Following from the discussion in Section 3.4, our method and theory generalize the
results in [22, 31]. In the limiting case that all θ_t^h are identical, our results extend the results of
CoCoA to the multi-task framework described in (1).
Remark 3. Note that the methods in [22, 31] have an aggregation parameter γ ∈ (0, 1]. Though we
prove our results for a general γ, we simplify the method and results here by setting γ := 1, which
has been shown to have the best performance, both theoretically and empirically [31].
5 Simulations
In this section we validate the empirical performance of MOCHA. First, we introduce a benchmarking
suite of real-world federated datasets and show that multi-task learning is well-suited to handle the
statistical challenges of the federated setting. Next, we demonstrate MOCHA's ability to handle
stragglers, both from statistical and systems heterogeneity. Finally, we explore the performance of
MOCHA when devices periodically drop out. Our code is available at: github.com/gingsmith/fmtl.
5.1 Federated Datasets
In our simulations, we use several real-world datasets that have been generated in federated settings.
We provide additional details in the Appendix, including information about data sizes, n_t.
• Google Glass (GLEAM)⁴: This dataset consists of two hours of high resolution sensor data
collected from 38 participants wearing Google Glass for the purpose of activity recognition.
Following [41], we featurize the raw accelerometer, gyroscope, and magnetometer data into 180
statistical, spectral, and temporal features. We model each participant as a separate task, and
predict between eating and other activities (e.g., walking, talking, drinking).
4. http://www.skleinberg.org/data/GLEAM.tar.gz
• Human Activity Recognition⁵: Mobile phone accelerometer and gyroscope data collected from
30 individuals, performing one of six activities: {walking, walking-upstairs, walking-downstairs,
sitting, standing, lying-down}. We use the provided 561-length feature vectors of time and
frequency domain variables generated for each instance [3]. We model each individual as a
separate task and predict between sitting and the other activities.
• Vehicle Sensor⁶: Acoustic, seismic, and infrared sensor data collected from a distributed network
of 23 sensors, deployed with the aim of classifying vehicles driving by a segment of road [13].
Each instance is described by 50 acoustic and 50 seismic features. We model each sensor as a
separate task and predict between AAV-type and DW-type vehicles.
5.2 Multi-Task Learning for the Federated Setting
We demonstrate the benefits of multi-task learning for the federated setting by comparing the error
rates of a multi-task model to that of a fully local model (i.e., learning a model for each task separately)
and a fully global model (i.e., combining the data from all tasks and learning one single model). Work
on federated learning thus far has been limited to the study of fully global models [25, 36, 26].
We use a cluster-regularized multi-task model [59, 21], as described in Section 3.1. For each dataset
from Section 5.1, we randomly split the data into 75% training and 25% testing, and learn multi-task,
local, and global support vector machine models, selecting the best regularization parameter, λ ∈ {1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1, 10}, for each model using 5-fold cross-validation. We repeat this process
10 times and report the average prediction error across tasks, averaged across these 10 trials.
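This qualitative comparison can be reproduced in miniature on synthetic data. The sketch below is our own (using least squares instead of SVMs for simplicity, with made-up task sizes): when tasks have genuinely different underlying models, per-task local fits beat a single pooled global fit on held-out data.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, n = 5, 10, 40
tasks = []
for _ in range(m):
    w_true = rng.normal(size=d)            # each task has its own true weights
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    tasks.append((X[:30], y[:30], X[30:], y[30:]))  # 75% train / 25% test

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Global model: pool all training data into one least-squares fit.
Xg = np.vstack([Xtr for Xtr, _, _, _ in tasks])
yg = np.hstack([ytr for _, ytr, _, _ in tasks])
w_glob = np.linalg.lstsq(Xg, yg, rcond=None)[0]
glob_err = np.mean([mse(w_glob, Xte, yte) for _, _, Xte, yte in tasks])

# Local models: fit each task separately on its own data.
loc_err = np.mean([
    mse(np.linalg.lstsq(Xtr, ytr, rcond=None)[0], Xte, yte)
    for Xtr, ytr, Xte, yte in tasks
])
print(loc_err < glob_err)  # local wins when task distributions differ
```

An MTL model would sit between these two extremes, sharing strength across related tasks while still fitting each one separately, which is the effect Table 1 quantifies on the real datasets.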
Table 1: Average prediction error: Means and standard errors over 10 random shuffles.

Model    Human Activity    Google Glass    Vehicle Sensor
Global   2.23 (0.30)       5.34 (0.26)     13.4 (0.26)
Local    1.34 (0.21)       4.92 (0.26)     7.81 (0.13)
MTL      0.46 (0.11)       2.02 (0.15)     6.59 (0.21)
In Table 1, we see that for each dataset, multi-task learning significantly outperforms the other
models in terms of achieving the lowest average error across tasks. The global model, as proposed
in [25, 36, 26] performs the worst, particularly for the Human Activity and Vehicle Sensor datasets.
Although the datasets are already somewhat unbalanced, we note that a global modeling approach may
benefit tasks with a very small number of instances, as information can be shared across tasks. For this
reason, we additionally explore the performance of global, local, and multi-task modeling for highly
skewed data in Table 4 of the Appendix. Although the performance of the global model improves
slightly relative to local modeling in this setting, the global model still performs the worst for the
majority of the datasets, and MTL still significantly outperforms both global and local approaches.
5.3 Straggler Avoidance
Two challenges that are prevalent in federated learning are stragglers and high communication.
Stragglers can occur when a subset of the devices take much longer than others to perform local
updates, which can be caused either by statistical or systems heterogeneity. Communication can also
exacerbate poor performance, as it can be slower than computation by many orders of magnitude in
typical cellular or wireless networks [52, 20, 48, 9, 38]. In our experiments below, we simulate the
time needed to run each method by tracking the operations and communication complexities, and
scaling the communication cost relative to computation by one, two, or three orders of magnitude,
respectively. These numbers correspond roughly to the clock rate vs. network bandwidth/latency
(see, e.g., [52]) for modern cellular and wireless networks. Details are provided in Appendix E.
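To make the timing simulation concrete, here is a toy cost model in the spirit of that setup (our own sketch; the constants below are illustrative, not the paper's):

```python
def estimated_time(rounds, local_flops, comm_units, comm_ratio):
    """Simulated wall-clock time: each round pays a local computation
    cost plus a communication cost scaled by comm_ratio, the
    communication-to-computation ratio (roughly one, two, or three
    orders of magnitude for WiFi-, LTE-, and 3G-like networks)."""
    return rounds * (local_flops + comm_ratio * comm_units)
```

Under a 3G-like ratio, a method that communicates after every small batch is dominated by one that does more local work per round, which is the effect seen in Figure 1.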
6 https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
7 http://www.ecs.umass.edu/~mduarte/Software.html
[Figure 1: three panels plotting primal sub-optimality vs. estimated time for MOCHA, CoCoA, Mb-SDCA, and Mb-SGD on Human Activity under statistical heterogeneity, in WiFi, LTE, and 3G communication regimes.]
Figure 1: The performance of MOCHA compared to other distributed methods for the W update of (1). While increasing communication tends to decrease the performance of the mini-batch methods, MOCHA performs well in high communication settings. In all settings, MOCHA with varied approximation values, θ_h^t, performs better than without (i.e., naively generalizing CoCoA), as it avoids stragglers from statistical heterogeneity.
Statistical Heterogeneity. We explore the effect of statistical heterogeneity on stragglers for various methods and communication regimes (3G, LTE, WiFi). For a fixed communication network, we compare MOCHA to CoCoA, which has a single θ parameter, and to mini-batch stochastic gradient descent (Mb-SGD) and mini-batch stochastic dual coordinate ascent (Mb-SDCA), which have limited communication flexibility depending on the batch size. We tune all compared methods for best performance, as we detail in Appendix E. In Figure 1, we see that while the performance degrades for mini-batch methods in high communication regimes, MOCHA and CoCoA are robust to high communication. However, CoCoA is significantly affected by stragglers: because θ is fixed across nodes and rounds, difficult subproblems adversely impact convergence. In contrast, MOCHA performs well regardless of communication cost and is robust to statistical heterogeneity.
Systems Heterogeneity. MOCHA is also equipped to handle heterogeneity from changing systems environments, such as battery power, memory, or network connection, as we show in Figure 2. In particular, we simulate systems heterogeneity by randomly choosing the number of local iterations for MOCHA or the mini-batch size for mini-batch methods, between 10% and 100% of the minimum number of local data points for high variability environments, and between 90% and 100% for low variability (see Appendix E for full details). We do not vary the performance of CoCoA, as the impact from statistical heterogeneity alone significantly reduces performance. However, adding systems heterogeneity would reduce performance even further, as the maximum θ value across all nodes would only increase if additional systems challenges were introduced.
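A minimal sketch of that sampling scheme (the function and argument names are ours):

```python
import random

def local_iters(n_min, variability, rng):
    """Number of local iterations (or mini-batch size) a device can
    complete this round: a uniform fraction of the minimum local
    dataset size, 10%-100% for high variability, 90%-100% for low."""
    low = 0.1 if variability == "high" else 0.9
    return max(1, int(rng.uniform(low, 1.0) * n_min))
```

Each device draws its own fraction independently every round, so slow devices simply contribute less exact local solves, which MOCHA's per-node, per-round θ_h^t absorbs.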
5.4 Tolerance to Dropped Nodes
[Figure 2, first panel: primal sub-optimality vs. estimated time for MOCHA, CoCoA, Mb-CD, and Mb-SGD on Vehicle Sensor under low systems heterogeneity.]
Figure 2: MOCHA can handle variability from systems heterogeneity.
[Figure 3: primal sub-optimality vs. estimated time on Google Glass as the node drop probability increases, for a single W step and for the full method.]
Finally, we explore the effect of nodes dropping on the performance of MOCHA. We do not draw comparisons to other methods, as to the best of our knowledge, no other methods for distributed multi-task learning directly address fault tolerance. In MOCHA, we incorporate this setting by allowing θ_h^t := 1, as explored theoretically in Section 4. In Figure 3, we look at the performance of MOCHA, either for one fixed W update, or running the entire MOCHA method, as the probability that nodes drop at each iteration (p_h^t in Assumption 2) increases. We see that the performance of MOCHA is robust to relatively high values of p_h^t, both during a single update of W and in how this affects the performance of the overall method. However, as intuition would suggest, if one of the nodes never sends updates (i.e., p_h^1 := 1 for all h, green dotted line), the method does not converge to the correct solution. This provides validation for our Assumption 2.
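The dropout process of Assumption 2 can be simulated directly; the following sketch uses a constant drop probability, whereas in general p_h^t may vary by node and round:

```python
import random

def sample_participants(n_nodes, n_rounds, p_drop, rng):
    """Per round, each node independently fails to send its update
    with probability p_drop; return the set of active nodes per round."""
    return [{t for t in range(n_nodes) if rng.random() >= p_drop}
            for _ in range(n_rounds)]
```

The degenerate case p_drop = 1 for some node (it never participates) is exactly the green-dotted-line setting in which convergence to the correct solution fails.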
[Figure 2, second panel: Vehicle Sensor under high systems heterogeneity, comparing MOCHA, CoCoA, Mb-SDCA, and Mb-SGD.]
Figure 3: The performance of MOCHA is robust to nodes periodically dropping out (fault tolerance).
6 Discussion
To address the statistical and systems challenges of the burgeoning federated learning setting, we have presented MOCHA, a novel systems-aware optimization framework for federated multi-task learning. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for multi-task learning in the federated environment. While MOCHA does not apply to non-convex deep learning models in its current form, we note that there may be natural connections between this approach and "convexified" deep learning models [6, 34, 51, 57] in the context of kernelized federated multi-task learning.
Acknowledgements
We thank Brendan McMahan, Chloé Kiddon, Jakub Konečný, Evan Sparks, Xinghao Pan, Lisha Li,
and Hang Qi for valuable discussions and feedback.
References
[1] A. Ahmed, A. Das, and A. J. Smola. Scalable hierarchical multitask learning algorithms for conversion
optimization in display advertising. In Conference on Web Search and Data Mining, 2014.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled
data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz. A public domain dataset for human activity
recognition using smartphones. In European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning, 2013.
[4] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Neural Information Processing
Systems, 2007.
[5] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning,
73(3):243-272, 2008.
[6] Ö. Aslan, X. Zhang, and D. Schuurmans. Convex deep learning via normalized kernels. In Advances in
Neural Information Processing Systems, 2014.
[7] I. M. Baytas, M. Yan, A. K. Jain, and J. Zhou. Asynchronous multi-task learning. In International
Conference on Data Mining, 2016.
[8] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli. Fog computing and its role in the internet of things. In
SIGCOMM Workshop on Mobile Cloud Computing, 2012.
[9] A. Carroll and G. Heiser. An analysis of power consumption in a smartphone. In USENIX Annual Technical
Conference, 2010.
[10] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[11] J. Chen, J. Zhou, and J. Ye. Integrating low-rank and group-sparse structures for robust multi-task learning.
In Conference on Knowledge Discovery and Data Mining, 2011.
[12] A. Deshpande, C. Guestrin, S. R. Madden, J. M. Hellerstein, and W. Hong. Model-based approximate
querying in sensor networks. VLDB Journal, 14(4):417-443, 2005.
[13] M. F. Duarte and Y. H. Hu. Vehicle classification in distributed sensor networks. Journal of Parallel and
Distributed Computing, 64(7):826-838, 2004.
[14] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Conference on Knowledge Discovery and
Data Mining, 2004.
[15] P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber,
and E. Riviere. Edge-centric computing: Vision and challenges. SIGCOMM Computer Communication
Review, 45(5):37-42, 2015.
[16] A. R. Gonçalves, F. J. Von Zuben, and A. Banerjee. Multi-task sparse structure learning with gaussian
copula models. Journal of Machine Learning Research, 17(33):1-30, 2016.
[17] J. Gorski, F. Pfeuffer, and K. Klamroth. Biconvex sets and optimization with biconvex functions: a survey
and extensions. Mathematical Methods of Operations Research, 66(3):373-407, 2007.
[18] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Koldehofe. Mobile fog: A programming
model for large-scale applications on the internet of things. In SIGCOMM Workshop on Mobile Cloud
Computing, 2013.
[19] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse Inverse Covariance Matrix Estimation
Using Quadratic Approximation. In Neural Information Processing Systems 27, 2014.
[20] J. Huang, F. Qian, Y. Guo, Y. Zhou, Q. Xu, Z. M. Mao, S. Sen, and O. Spatscheck. An in-depth study of
lte: Effect of network protocol and application behavior on performance. In ACM SIGCOMM Conference,
2013.
[21] L. Jacob, J.-p. Vert, and F. R. Bach. Clustered multi-task learning: A convex formulation. In Neural
Information Processing Systems, 2009.
[22] M. Jaggi, V. Smith, J. Terhorst, S. Krishnan, T. Hofmann, and M. I. Jordan. Communication-Efficient
Distributed Dual Coordinate Ascent. In Neural Information Processing Systems, 2014.
[23] X. Jin, P. Luo, F. Zhuang, J. He, and Q. He. Collaborating between local and global learning for distributed
online multiple tasks. In Conference on Information and Knowledge Management, 2015.
[24] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait
network. PLoS Genet, 5(8):e1000587, 2009.
[25] J. Konečný, H. B. McMahan, and D. Ramage. Federated optimization: Distributed optimization beyond
the datacenter. arXiv:1511.03575, 2015.
[26] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon. Federated learning:
Strategies for improving communication efficiency. arXiv:1610.05492, 2016.
[27] T. Kuflik, J. Kay, and B. Kummerfeld. Challenges and solutions of ubiquitous user modeling. In Ubiquitous
display environments, pages 7-30. Springer, 2012.
[28] A. Kumar and H. Daumé. Learning task grouping and overlap in multi-task learning. In International
Conference on Machine Learning, 2012.
[29] S. L. Lauritzen. Graphical Models, volume 17. Clarendon Press, 1996.
[30] S. Liu, S. J. Pan, and Q. Ho. Distributed multi-task relationship learning. Conference on Knowledge
Discovery and Data Mining, 2017.
[31] C. Ma, V. Smith, M. Jaggi, M. I. Jordan, P. Richtárik, and M. Takáč. Adding vs. averaging in distributed
primal-dual optimization. In International Conference on Machine Learning, 2015.
[32] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: A tiny aggregation service for ad-hoc
sensor networks. In Symposium on Operating Systems Design and Implementation, 2002.
[33] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TinyDB: An acquisitional query processing
system for sensor networks. ACM Transactions on Database Systems, 30(1):122-173, 2005.
[34] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In Advances in Neural
Information Processing Systems, 2014.
[35] D. Mateos-Núñez and J. Cortés. Distributed optimization for multi-task learning via nuclear-norm
approximation. In IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2015.
[36] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning
of deep networks from decentralized data. In Conference on Artificial Intelligence and Statistics, 2017.
[37] H. B. McMahan and D. Ramage. http://www.googblogs.com/federated-learning-collaborative-machine-learning-without-centralized-training-data/. Google, 2017.
[38] A. P. Miettinen and J. K. Nurminen. Energy efficiency of mobile clients in cloud computing. In USENIX
Conference on Hot Topics in Cloud Computing, 2010.
[39] A. Pantelopoulos and N. G. Bourbakis. A survey on wearable sensor-based systems for health monitoring
and prognosis. IEEE Transactions on Systems, Man, and Cybernetics, 40(1):1-12, 2010.
[40] H. Qi, E. R. Sparks, and A. Talwalkar. Paleo: A performance model for deep neural networks. In
International Conference on Learning Representations, 2017.
[41] S. A. Rahman, C. Merck, Y. Huang, and S. Kleinberg. Unintrusive eating recognition using google glass.
In Conference on Pervasive Computing Technologies for Healthcare, 2015.
[42] P. Rashidi and D. J. Cook. Keeping the resident in the loop: Adapting the smart home to the user. IEEE
Transactions on Systems, Man, and Cybernetics, 39(5):949-959, 2009.
[43] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary
convolutional neural networks. In European Conference on Computer Vision, 2016.
[44] S. Ravi. https://research.googleblog.com/2017/02/on-device-machine-intelligence.
html. Google, 2017.
[45] M. Razaviyayn, M. Hong, and Z.-Q. Luo. A unified convergence analysis of block successive minimization
methods for nonsmooth optimization. SIAM Journal on Optimization, 23(2):1126-1153, 2013.
[46] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM.
International Conference on Machine Learning, June 2007.
[47] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567-599, 2013.
[48] D. Singelée, S. Seys, L. Batina, and I. Verbauwhede. The communication and computation cost of wireless
security. In ACM Conference on Wireless Network Security, 2011.
[49] V. Smith, S. Forte, C. Ma, M. Takáč, M. I. Jordan, and M. Jaggi. CoCoA: A general framework for
communication-efficient distributed optimization. arXiv:1611.02189, 2016.
[50] M. Takáč, A. Bijral, P. Richtárik, and N. Srebro. Mini-Batch Primal and Dual Methods for SVMs. In
International Conference on Machine Learning, 2013.
[51] C.-Y. Tsai, A. M. Saxe, and D. Cox. Tensor switching networks. In Advances in Neural Information
Processing Systems, 2016.
[52] C. Van Berkel. Multi-core for mobile phones. In Proceedings of the Conference on Design, Automation
and Test in Europe, pages 1260-1265. European Design and Automation Association, 2009.
[53] H. Wang, A. Banerjee, C.-J. Hsieh, P. K. Ravikumar, and I. S. Dhillon. Large scale distributed sparse
precision estimation. In Neural Information Processing Systems, 2013.
[54] J. Wang, M. Kolar, and N. Srebro. Distributed multi-task learning. In Conference on Artificial Intelligence
and Statistics, 2016.
[55] J. Wang, M. Kolar, and N. Srebro. Distributed multi-task learning with shared representation.
arXiv:1603.02185, 2016.
[56] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with dirichlet
process priors. Journal of Machine Learning Research, 8:35-63, 2007.
[57] Y. Zhang, P. Liang, and M. J. Wainwright. Convexified convolutional neural networks. International
Conference on Machine Learning, 2017.
[58] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In
Conference on Uncertainty in Artificial Intelligence, 2010.
[59] J. Zhou, J. Chen, and J. Ye. Clustered multi-task learning via alternating structure optimization. In Neural
Information Processing Systems, 2011.
Learning Cellular Automaton Dynamics
with Neural Networks
N H Wulff* and J A Hertz†
CONNECT, the Niels Bohr Institute and Nordita
Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark
Abstract
We have trained networks of Σ-Π units with short-range connections to simulate simple cellular automata that exhibit complex or chaotic behaviour. Three levels of learning are possible (in decreasing order of difficulty): learning the underlying automaton rule, learning asymptotic dynamical behaviour, and learning to extrapolate the training history. The levels of learning achieved with and without weight sharing for different automata provide new insight into their dynamics.
1 INTRODUCTION
Neural networks have been shown to be capable of learning the dynamical behaviour
exhibited by chaotic time series composed of measurements of a single variable
among many in a complex system [1, 2, 3]. In this work we consider instead cellular
automaton arrays (CA)[4], a class of many-degree-of-freedom systems which exhibits
very complex dynamics, including universal computation. We would like to know
whether neural nets can be taught to imitate these dynamics, both locally and
globally.
One could say we are turning the usual paradigm for studying such systems on
its head. Conventionally, one is given the rule by which each automaton updates
its state, and the (nontrivial) problem is to find what kind of global dynamical
*Present address: NeuroTech A/S, Copenhagen, Denmark
†Address until October 1993: Laboratory of Neuropsychology, NIMH, Bethesda MD 20892. email: [email protected]
behaviour results. Here we suppose that we are given the history of some CA, and
we would like, if possible, to find the rule that generated it.
We will see that a network can have different degrees of success in this task, depending on the constraints we place on the learning. Furthermore, we will be able
to learn something about the dynamics of the automata themselves from knowing
what level of learning is possible under what constraints.
This note reports some preliminary investigations of these questions. We study
only the simplest automata that produce chaotic or complex dynamic behaviour.
Nevertheless, we obtain some nontrivial results which lead to interesting conjectures
for future investigation.
A CA is a lattice of formal computing units, each of which is characterized by a
state variable Si(t), where i labels the site in the lattice and t is the (digital) time.
Every such unit updates itself according to a particular rule or function f( ) of its
own state and that of the other units in its local neighbourhood. The rule is the
same for all units, and the updatings of all units are simultaneous.
Different models are characterized by the nature of the state variable (e.g. binary,
continuous, vector, etc), the dimensionality of the lattice, and the choice of neighbourhood. In the two cases we study here, the neighbourhoods are of size N = 3,
consisting of the unit itself and its two immediate neighbours on a chain, and
N = 9, consisting of the unit itself and its 8 nearest neighbours on a square lattice
(the 'Moore neighbourhood'). We will consider only binary units, for which we take
Si(t) = ?1. Thus, if the neighbourhood (including the unit itself) includes N sites,
f( ) is a Boolean function on the N -hypercube. There are 22N such functions.
Wolfram [4) has divided the rules for such automata further into three classes:
1. Class 1: rules that lead to a uniform state.
2. Class 2: rules that lead to simple stable or periodic patterns.
3. Class 3: rules that lead to chaotic patterns.
4. Class 4: rules that lead to complex, long-lived transient patterns.
Rules in the fourth cla.ss lie near (in a sense not yet fully understood [5)) a critical
boundary between classes 2 and 3. They lead eventually to asymptotic behaviour
in class 2 (or possibly 3); what distinguishes them is the length of the transient. It
is classes 3 and 4 that we are interested in here.
More specifically, for class 3 we expect that after the (short) initial transients, the
motion is confined to some sort of attractor. Different attractors may be reached
for a given rule, depending on initial conditions. For such systems we will focus
on the dynamics on these attractors, not on the short transients. We will want to
know what we can learn from a given history about the attractor characterizing it,
about the asymptotic dynamics of the system generally (i.e. about all attractors),
and, if possible, about the underlying rule.
For class 4 CA, in contra.st, only the transients are of interest. Different initial
conditions will give rise to very different transient histories; indeed, this sensitivity
is the dynamical ba.sis for the capability for universal computation that has been
Learning Cellular Automaton Dynamics with Neural Networks
proved for some of these systems. Here we will want to know what we can learn
from a portion of such a history about its future, as well as about the underlying
rule.
2
REPRESENTING A CA AS A NETWORK
Any Boolean function of N arguments can be implemented by a ~-n unit of order
P ::; N with a threshold activation function, i.e. there exist weights wJlh ... jp such
that
I(SI, S2 ... SN) = sgn [.
L . wJd~ ...jp Sjl Sh ... Sjp] .
(1)
Jl.J~.?"JP
The indices ile run over the sites in the neighbourhood (1 to N) and zero, which
labels a constant formal bias unit So = 1. Because the updating rule we are looking
for is the same for the entire lattice, the weight WJ1 ... jp doesn't depend on i. Furthermore, because of the discrete nature of the outputs, the weights that implement
a given rule are not unique; rather, there is a region of weight space for each rule.
Although we could work with other architectures, it is natural to study networks
with the same structure as the CA to be simulated. We therefore make a lattice
of formal 1: - n neurons with short-range connections, which update themselves
according to
Vi(t+ 1) =
9
r.~
Wit ... jPVjl(t) ... Vjp(t)] ,
(2)
Jt"'Jp
In these investigations, we have assumed that we know a priori what the relevant
neighbourhood size is, thereby fixing the connectivity of the network. At the end of
the day, we will take the limit where the gain of the activation function 9 becomes
infinite. However, during learning we use finite gain and continuous-valued units.
We know that the order P of our ~ - n units need not be higher than the neighbourhood size N. However, in most cases a smaller P will do. More precisely, a
network with any P > ~N can in principle (Le. given the right learning algorithm
and sufficient training examples) implement almost all possible rules. This is an
asymptotic result for large N but is already quite accurate for N = 3, where only
two of the 256 possible rules are not implementable by a second-order unit, and
N = 5, where we found from simple learning experiments that 99.87% of 10000
randomly-chosen rules could be implemented by a third-order unit.
3  LEARNING
Having chosen a suitable value of P, we can begin our main task: training the
network to simulate a CA, with the training examples {Si(t) - t Si(t + I)} taken
from a particular known history.
The translational invariance of the CA suggests that weight sharing is appropriate
in the learning algorithm. On the other hand, we can imagine situations in which
we did not possess a priori knowledge that the CA rule was the same for all units,
Wulff and Hertz
or where we only had access to the automaton state in one neighbourhood. This
case is analogous to the conventional time series extrapolation paradigm, where
we typically only have access to a few variables in a large system. The difference
is that here the accessible variables are binary rather than continuous. In these
situations we should or are constrained to learn without each unit having access to
error information at other units. In what follows we will perform the training both
with and without weight sharing. The differences in what can be learned in the two
cases will give interesting information about the CA dynamics being simulated.
Most of our results are for chaotic (class 3) CA. For these systems, this training
history is taken after initial transients have died out. Thus many of the 2^N possible
examples necessary to specify the rule at each site may be missing from the training
set, and it is possible that our training procedure will not result in the network
learning the underlying rule of the original system. It might instead learn another
rule that coincides with the true one on the training examples. This is even more
likely if we are not using weight sharing, because then a unit at one site does not
have access to examples from the training history at other sites.
However, we may relax our demand on the network, asking only that it evolve
exactly like the original system when it is started in a configuration the original
system could be in after transients have died out (i.e. on an attractor of the original
system). Thus we are restricting the test set in a way that is "fairer" to the network,
given the instruction it has received.
Of course, if the CA has more than one attractor, several rules which yield the
same evolution on one attractor need not do so on another one. It is therefore
possible that a network can learn the attractor of the training history (i.e. will
simulate the original system correctly on a part of the history subsequent to the
training sequence) but will not be found to evolve correctly when tested on data
from another attractor.
For class 4 automata, we cannot formulate the distinctions between different levels
of learning meaningfully in terms of attractors, since the object of interest is the
transient portion of the history. Nevertheless, we can still ask whether a network
trained on part of the transient can learn the full rule, whether it can simulate the
dynamics for other initial conditions, or whether it can extrapolate the training
history.
We therefore distinguish three degrees of successful learning:
1. Learning the rule, where the network evolves exactly like the original system
from any initial configuration.
2. Learning the dynamics, the intermediate case where the network can simulate the original system exactly after transients, irrespective of initial conditions, despite not having learned the full rule.
3. Learning to continue the dynamics, where the successful simulation of the
original system is only achieved for the particular initial condition used to
generate the training history.
Our networks are recurrent, but because they have no hidden units, they can be
trained by a simple variant of the delta-rule algorithm. It can be obtained formally
Learning Cellular Automaton Dynamics with Neural Networks
from gradient descent on a modified cross entropy

    E = (1/2) Σ_{i,t} [ (1 + S_i(t)) log((1 + S_i(t))/(1 + V_i(t))) + (1 − S_i(t)) log((1 − S_i(t))/(1 − V_i(t))) ] Θ[−S_i(t)V_i(t)]    (3)
We used the online version:

    Δw_{j_1 ... j_P} = η Θ[−S_i(t+1)V_i(t+1)] [S_i(t+1) − V_i(t+1)] V_{j_1}(t) ... V_{j_P}(t)    (4)

This is like an extension of the Adatron algorithm [6] to Σ-Π units, but with the
added feature that we are using a nonlinear activation function.
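A minimal sketch of the online update (4) for a single second-order unit with weight sharing; this is our own illustration (the parameter names, the gain value, and the tiny example are ours), using a finite-gain tanh activation as during learning:

```python
import math

# Hypothetical sketch of the online update (4) for one second-order unit with
# weight sharing: weights move only on currently misclassified examples (the
# Theta[-S V] gate), by eta * (S - V) times the product of neighbourhood
# states. A finite-gain tanh activation is used during learning; all parameter
# names here are ours.

def train_step(w, neigh, target, eta=0.1, beta=2.0):
    net = sum(wv * neigh[j1] * neigh[j2] for (j1, j2), wv in w.items())
    v = math.tanh(beta * net)
    if target * v <= 0:  # Theta[-S V]: only update when the sign is wrong
        for (j1, j2) in w:
            w[(j1, j2)] += eta * (target - v) * neigh[j1] * neigh[j2]
    return w

# One rule-90 example: neighbourhood (-1, -1, +1) should map to target +1.
w = {(0, 1): 0.0, (0, 2): 0.0, (1, 2): 0.0}
w = train_step(w, (-1, -1, 1), 1)
```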
The one-dimensional N = 3 automata we simulated were the 9 legal chaotic ones
identified by Wolfram [4]. Using his system for labeling the rules, these are rules
18, 22, 54, 90, 122, 126, 146, 150, and 182. We used networks of order P = 3
so that all rules were learnable. (Rule 150 would not have been learnable by a
second-order net.) Each network was a chain 60 units long, subjected to periodic
boundary conditions.
The training histories {Si (t)} were 1000 steps long, beginning 100 steps after randomly chosen initial configurations. To test for learning the rules, all neighbourhood
configurations were checked at every site. To test for learning the dynamics, the
CA were reinitialized with different random starting configurations and run 100
steps to eliminate transients, after which new test histories of length 100 steps were
constructed. Networks were then tested on 100 such histories. The test set for
continuing the dynamics was made simply by allowing the CA that had generated
the training set to continue for 100 more steps.
There are no class 4 CA among the one-dimensional N = 3 systems. As an example
of such a rule, we chose the Game of Life which is defined on a square lattice with
a neighbourhood size N = 9 and has been proved capable of universal computation
(see, e.g. [7, 8]). We worked with a lattice of 60 x 60 units.
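For reference, a minimal implementation of the Life update rule on a small torus; this is our own sketch of the standard rule, not the network simulation used in the paper:

```python
# Minimal reference sketch (ours) of the Game of Life rule: a cell is on at
# t+1 iff exactly 3 of its 8 neighbours are on, or it is on and exactly 2 are.
# Toroidal boundaries; states are 0/1.

def life_step(grid):
    n, m = len(grid), len(grid[0])
    def live_neighbours(i, j):
        return sum(grid[(i + di) % n][(j + dj) % m]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))
    return [[1 if live_neighbours(i, j) == 3
             or (grid[i][j] == 1 and live_neighbours(i, j) == 2) else 0
             for j in range(m)] for i in range(n)]

# A "blinker" oscillates between vertical and horizontal with period 2.
g = [[0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0]]
```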
The training history for the Game of Life consisted of 200 steps in the transient. The
trained networks were tested, as in the case of the chaotic one-dimensional systems,
on all possible configurations at every site (learning the rule), on other transient
histories generated from different initial conditions (learning the dynamics), and
on the evolution of the original system immediately following the training history
(learning to continue the dynamics).
4  RESULTS
With weight sharing, it proved possible to learn the dynamics for all 9 of the one-dimensional chaotic rules very easily. In fact, it took no more than 10 steps of the
training history to achieve this.
Learning the underlying rules proved harder. After training on the histories of
1000 steps, the networks were able to do so in only 4 of the 9 cases. No qualitative
difference in the two groups of patterns is evident to us from looking at their histories
(Fig. 1). Nevertheless, we conclude that their ergodic properties must be different,
at least quantitatively.
Life was also easy with weight sharing. Our network succeeded in learning the underlying rule starting almost anywhere in the long transient.
[Figure 1 image; visible panel labels: 22, 54, 90, 126, 182]
Figure 1: Histories of the 4 one-dimensional rules that could be learned (top) and
the 5 that could not (bottom) . (Learning with weight sharing.)
Without weight sharing, all learning naturally proved more difficult. While it was
possible to learn to continue the dynamics for all the one-dimensional chaotic rules,
it proved impossible except in one case (rule 22) to learn the dynamics within
the training history of 1000 steps. The networks failed on about 25% of the test
histories. It was never possible to learn the underlying rule. Thus, apparently these
chaotic states are not as homogeneous as they appear (at least on the time scale of
the training period).
Life is also difficult without weight sharing. Our network was unable even to continue the dynamics from histories of several hundred steps in the transient (Fig. 2).
5
DISCUSSION
In previous studies of learning chaotic behaviour in single-variable time series
(e.g. [1, 2, 3]), the test to which networks have been put has been to extrapolate
the training series, i.e. to continue the dynamics. We have found that this is also
possible in cellular automata for all the chaotic rules we have studied, even when
only local information about the training history is available to the units. Thus, the
CA evolution history at any site is rich enough to permit error-free extrapolation.
However, local training data are not sufficient (except in one system, rule 22) to
permit our networks to pass the more stringent test of learning the dynamics. Thus,
viewed from any single site, the different attractors of these systems are dissimilar
enough that data from one do not permit generalization to another.
Figure 2: The original Game of Life CA (left) and the network (right), both 20
steps after the end of the training history. (Training done without weight sharing.)
With the access to training data from other sites implied by weight sharing, the
situation changes dramatically. Learning the dynamics is then very easy, implying
that all possible asymptotic local dynamics that could occur for any initial condition
actually do occur somewhere in the system in any given history.
Furthermore, with weight sharing, not only the dynamics but also the underlying
rule can be learned for some rules. This suggests that these rules are ergodic, in
the sense that all configurations occur somewhere in the system at some time. This
division of the chaotic rules into two classes according to this global ergodicity is a
new finding .
Turning to our class 4 example, Life proves to be impossible without weight sharing,
even by our most lenient test, continuing the dynamics. Thus, although one might
be tempted to think that the transient in Life is so long that it can be treated
operationally as if it were a chaotic attractor, it cannot. For real chaotic attractors,
in both the CA studied here and continuous dynamical systems, networks can
learn to continue the dynamics on the basis of local data, while in Life they cannot.
On the other hand, the result that the rule of Life is easy to learn with weight
sharing implies that looked at globally, the history of the transient is quite rich.
Somewhere in the system, it contains sufficient information (together with the a
priori knowledge that a second-order network is sufficient) to allow us to predict
the evolution from any configuration correctly.
This study is a very preliminary one and raises more questions than it answers. We
would like to know whether the results we have obtained for these few simple systems
are generic to complex and chaotic CA. To answer this question we will have to study
systems in higher dimensions and with larger updating neighbourhoods. Perhaps
significant universal patterns will only begin to emerge for large neighborhoods (cf
[5]). However, we have identified some questions to ask about these problems.
References
[1] A Lapedes and R Farber, Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling, Tech Rept LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos NM USA
[2] A S Weigend, B A Huberman and D E Rumelhart, Int J Neural Systems 1 193-209 (1990)
[3] K Stokbro, D K Umberger and J A Hertz, Complex Systems 4 603-622 (1991)
[4] S Wolfram, Theory and Applications of Cellular Automata (World Scientific, 1986)
[5] C G Langton, pp 12-37 in Emergent Computation (S Forrest, ed) MIT Press/North Holland, 1991
[6] J K Anlauf and M Biehl, Europhys Letters 10 687 (1989)
[7] H V McIntosh, Physica D 45 105-121 (1990)
[8] S Wolfram, Physica D 10 1-35 (1984)
Kernel Low-Rank Approximation?
Cameron Musco
MIT
[email protected]
David P. Woodruff
Carnegie Mellon University
[email protected]
Abstract
Low-rank approximation is a common tool used to accelerate kernel methods: the
n × n kernel matrix K is approximated via a rank-k matrix K̃ which can be stored
in much less space and processed more quickly. In this work we study the limits
of computationally efficient low-rank kernel approximation. We show that for a
broad class of kernels, including the popular Gaussian and polynomial kernels,
computing a relative error k-rank approximation to K is at least as difficult as
multiplying the input data matrix A ∈ R^{n×d} by an arbitrary matrix C ∈ R^{d×k}.
Barring a breakthrough in fast matrix multiplication, when k is not too large, this
requires Ω(nnz(A)k) time where nnz(A) is the number of non-zeros in A. This
lower bound matches, in many parameter regimes, recent work on subquadratic
time algorithms for low-rank approximation of general kernels [MM16, MW17],
demonstrating that these algorithms are unlikely to be significantly improved, in
particular to O(nnz(A)) input sparsity runtimes. At the same time there is hope:
we show for the first time that O(nnz(A)) time approximation is possible for
general radial basis function kernels (e.g., the Gaussian kernel) for the closely
related problem of low-rank approximation of the kernelized dataset.
1  Introduction
The kernel method is a popular technique used to apply linear learning and classification algorithms
to datasets with nonlinear structure. Given training input points a_1, ..., a_n ∈ R^d, the idea is to
replace the standard Euclidean dot product ⟨a_i, a_j⟩ = a_i^T a_j with the kernel dot product κ(a_i, a_j),
where κ : R^d × R^d → R+ is some positive semidefinite function. Popular kernel functions include,
e.g., the Gaussian kernel with κ(a_i, a_j) = e^{−‖a_i − a_j‖²/σ} for some bandwidth parameter σ and the
polynomial kernel of degree q with κ(a_i, a_j) = (c + a_i^T a_j)^q for some parameter c.
Throughout this work, we focus on kernels where κ(a_i, a_j) is a function of the dot products
a_i^T a_i = ‖a_i‖², a_j^T a_j = ‖a_j‖², and a_i^T a_j. Such functions encompass many kernels used in practice,
including the Gaussian kernel, the Laplace kernel, the polynomial kernel, and the Matérn kernels.
Letting F be the reproducing kernel Hilbert space associated with κ(·, ·), we can write κ(a_i, a_j) =
⟨φ(a_i), φ(a_j)⟩ where φ : R^d → F is a typically non-linear feature map. We let Φ =
[φ(a_1), ..., φ(a_n)]^T denote the kernelized dataset, whose ith row is the kernelized datapoint φ(a_i).
There is no requirement that φ can be efficiently computed or stored; for example, in the case of the
Gaussian kernel, F is an infinite dimensional space. Thus, kernel methods typically work with the
kernel matrix K ∈ R^{n×n} with K_{i,j} = κ(a_i, a_j). We will also sometimes denote K = {κ(a_i, a_j)}
to make it clear which kernel function it is generated by. We can equivalently write K = ΦΦ^T. As
long as all operations of an algorithm only access Φ via the dot products between its rows, they can
thus be implemented using just K without explicitly computing the feature map.
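To illustrate this point (our own sketch, not code from the paper): for such kernels, K can be assembled from the Gram matrix alone, since ‖a_i − a_j‖² = ⟨a_i, a_i⟩ − 2⟨a_i, a_j⟩ + ⟨a_j, a_j⟩. Shown for the Gaussian kernel; the function name and bandwidth default are ours:

```python
import math

# Illustrative sketch (ours): for kernels that depend only on the dot products
# <a_i,a_i>, <a_j,a_j>, <a_i,a_j>, K can be built from the Gram matrix alone,
# since ||a_i - a_j||^2 = <a_i,a_i> - 2<a_i,a_j> + <a_j,a_j>. Shown for the
# Gaussian kernel with bandwidth sigma.

def gaussian_kernel_matrix(points, sigma=1.0):
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    n = len(points)
    gram = [[dot(points[i], points[j]) for j in range(n)] for i in range(n)]
    return [[math.exp(-(gram[i][i] - 2.0 * gram[i][j] + gram[j][j]) / sigma)
             for j in range(n)] for i in range(n)]

K = gaussian_kernel_matrix([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
# K is symmetric with unit diagonal; e.g. K[0][1] = exp(-1).
```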
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Unfortunately computing K is expensive, and a bottleneck for scaling kernel methods to large
datasets. For the kernels we consider, where κ depends on dot products between the input points, we
must at least compute the Gram matrix AA^T, requiring Ω(n²d) time in general. Even if A is sparse,
this takes Ω(nnz(A)n) time. Storing K then takes Ω(n²) space, and processing it for downstream
applications like kernel ridge regression and kernel SVM can be even more expensive.
1.1  Low-rank kernel approximation
For this reason, a vast body of work studies how to efficiently approximate K via a low-rank surrogate K̃ [SS00, AMS01, WS01, FS02, RR07, ANW14, LSS13, BJ02, DM05, ZTK08, BW09, CKS11, WZ13, GM13]. If K̃ is rank-k, it can be stored in factored form in O(nk) space and
operated on quickly; e.g., it can be inverted in just O(nk²) time to solve kernel ridge regression.
One possibility is to set K̃ = K_k where K_k is K's best k-rank approximation: the projection onto
its top k eigenvectors. K_k minimizes, over all rank-k K̃, the error ‖K − K̃‖_F, where ‖M‖_F is
the Frobenius norm: (Σ_{i,j} M_{i,j}²)^{1/2}. It in fact minimizes error under any unitarily invariant norm,
e.g., the popular spectral norm. Unfortunately, K_k is prohibitively expensive to compute, requiring
Ω(n³) time in practice, or n^ω in theory using fast matrix multiplication, where ω ≈ 2.373 [LG14].
The idea of much prior work on low-rank kernel approximation is to find K̃ which is nearly as good
as K_k, but can be computed much more quickly. Specifically, it is natural to ask for K̃ fulfilling the
following relative error guarantee for some parameter ε > 0:

    ‖K − K̃‖_F ≤ (1 + ε)‖K − K_k‖_F.    (1)
Other goals, such as nearly matching the spectral norm error ‖K − K_k‖₂ or approximating K entrywise have also been considered [RR07, GM13]. Of particular interest to our results is the closely
related goal of outputting an orthonormal basis Z ∈ R^{n×k} satisfying, for any Φ with ΦΦ^T = K:

    ‖Φ − ZZ^T Φ‖_F ≤ (1 + ε)‖Φ − Φ_k‖_F.    (2)
(2) can be viewed as a kernel PCA guarantee: it asks us to find a low-rank subspace Z such that
the projection of our kernelized dataset onto Z nearly optimally approximates this dataset. Given
Z, we can approximate K using K̃ = ZZ^T ΦΦ^T ZZ^T = ZZ^T K ZZ^T. Alternatively, letting P
be the projection onto the row span of ZZ^T Φ, we can write K̃ = Φ P Φ^T, which can be computed
efficiently, for example, when P is a projection onto a subset of the kernelized datapoints [MM16].
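A tiny numeric illustration (ours, not from the paper) of forming K̃ = ZZ^T K ZZ^T from an orthonormal basis Z:

```python
# Tiny numeric illustration (ours) of the approximation K~ = Z Z^T K Z Z^T
# formed from an orthonormal basis Z: projecting K onto span(Z) from both
# sides. With Z = e_1, only the (0,0) entry of K survives.

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

K = [[2.0, 1.0], [1.0, 3.0]]
Z = [[1.0], [0.0]]                           # a single orthonormal column, e_1
Zt = [list(r) for r in zip(*Z)]
P = matmul(Z, Zt)                            # Z Z^T, rank-1 projection
K_approx = matmul(matmul(P, K), P)           # Z Z^T K Z Z^T
# K_approx == [[2.0, 0.0], [0.0, 0.0]]
```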
1.2  Fast algorithms for relative-error kernel approximation
Until recently, all algorithms achieving the guarantees of (1) and (2) were at least as expensive as
computing the full matrix K, which was needed to compute the low-rank approximation [GM13].
However, recent work has shown that this is not required. Avron, Nguyen, and Woodruff [ANW14]
demonstrate that for the polynomial kernel, Z satisfying (2) can be computed in O(nnz(A)q) +
n·poly(3^q k/ε) time for a polynomial kernel with degree q.
Musco and Musco [MM16] give a fast algorithm for any kernel, using recursive Nyström sampling,
which computes K̃ (in factored form) satisfying ‖K − K̃‖₂ ≤ λ, for input parameter λ. With
the proper setting of λ, it can output Z satisfying (2) (see Section C.3 of [MM16]). Computing
Z requires evaluating Õ(k/ε) columns of the kernel matrix along with Õ(n(k/ε)^{ω−1}) additional
time for other computations. Assuming the kernel is a function of the dot products between the
input points, the kernel evaluations require Õ(nnz(A)k/ε) time. The results of [MM16] can also be
used to compute K̃ satisfying (1) with Γ = √n in Õ(nnz(A)k + nk^{ω−1}) time (see Appendix A of
[MW17]).
Woodruff and Musco [MW17] show that for any kernel, and for any ε > 0, it is possible to
achieve (1) in Õ(nnz(A)k/ε) + n·poly(k/ε) time plus the time needed to compute an Õ(√(nk)/ε²) ×
Õ(√(nk)/ε) submatrix of K. If A has uniform row sparsity, i.e., nnz(a_i) ≤ c·nnz(A)/n for some
constant c and all i, this step can be done in Õ(nnz(A)k/ε^{2.5}) time. Alternatively, if d ≥ (√(nk)/ε²)^α
for α < .314 this can be done in Õ(nk/ε⁴) = Õ(nnz(A)k/ε⁴) time using fast rectangular matrix
multiplication [LG12, GU17] (assuming that there are no all-zero data points so n ≤ nnz(A)).
1.3  Our results
The algorithms of [MM16, MW17] make significant progress in efficiently solving (1) and (2) for
general kernel matrices. They demonstrate that, surprisingly, a relative-error low-rank approximation can be computed significantly faster than the time required to write down all of K.
A natural question is if these results can be improved. Even ignoring ε dependencies and typically
lower order terms, both algorithms use Ω(nnz(A)k) time. One might hope to improve this to input
sparsity, or near input sparsity time, Õ(nnz(A)), which is known for computing a low-rank approximation of A itself [CW13]. The work of Avron et al. affirms that this is possible for the kernel PCA
guarantee of (2) for degree-q polynomial kernels, for constant q. Can this result be extended to other
popular kernels, or even more general classes?
1.3.1  Lower bounds
We show that achieving the guarantee of (1) significantly more efficiently than the work of [MM16,
MW17] is likely very difficult. Specifically, we prove that for a wide class of kernels, the kernel
low-rank approximation problem is as hard as multiplying the input A ∈ R^{n×d} by an arbitrary
C ∈ R^{d×k}. We have the following result for some common kernels to which our techniques apply:
Theorem 1 (Hardness for low-rank kernel approximation). Consider any polynomial kernel
κ(m_i, m_j) = (c + m_i^T m_j)^q, Gaussian kernel κ(m_i, m_j) = e^{−‖m_i − m_j‖²/σ}, or the linear kernel κ(m_i, m_j) = m_i^T m_j. Assume there is an algorithm which given M ∈ R^{n×d} with associated
kernel matrix K = {κ(m_i, m_j)}, returns N ∈ R^{n×k} in o(nnz(M)k) time satisfying:

    ‖K − NN^T‖²_F ≤ Γ‖K − K_k‖²_F

for some approximation factor Γ. Then there is an o(nnz(A)k) + O(nk²) time algorithm for multiplying arbitrary integer matrices A ∈ R^{n×d}, C ∈ R^{d×k}.
The above applies for any approximation factor Γ. While we work in the real RAM model, ignoring
bit complexity, as long as Γ = poly(n) and A, C have polynomially bounded entries, our reduction
from multiplication to low-rank approximation is achieved using matrices that can be represented
with just O(log(n + d)) bits per entry.
Theorem 1 shows that the runtime of Õ(nnz(A)k + nk^{ω−1}) for Γ = √n achieved by [MM16]
for general kernels cannot be significantly improved without advancing the state-of-the-art in matrix
multiplication. Currently no general algorithm is known for multiplying integer A ∈ R^{n×d}, C ∈
R^{d×k} in o(nnz(A)k) time, except when k ≤ n^α for α < .314 and A is dense. In this case, AC can
be computed in O(nd) time using fast rectangular matrix multiplication [LG12, GU17].
As discussed, when A has uniform row sparsity or when d ≥ (√(nk)/ε²)^α, the runtime of [MW17]
for Γ = (1 + ε), ignoring ε dependencies and typically lower order terms, is Õ(nnz(A)k), which is
also nearly tight.
In recent work, Backurs et al. [BIS17] give lower bounds for a number of kernel learning problems,
including kernel PCA for the Gaussian kernel. However, their strong bound, of Ω(n²) time, requires
very small error ε = exp(−ω(log² n)), whereas ours applies for any relative error Γ.
1.3.2  Improved algorithm for radial basis function kernels
In contrast to the above negative result, we demonstrate that achieving the alternative kernel PCA
guarantee of (2) is possible in input sparsity time for any shift and rotationally invariant kernel, e.g.,
any radial basis function kernel where κ(x_i, x_j) = f(‖x_i − x_j‖). This result significantly extends
the progress of Avron et al. [ANW14] on the polynomial kernel.
Our algorithm is based off a fast implementation of the random Fourier features method [RR07],
which uses the fact that the Fourier transform of any shift invariant kernel is a probability
distribution after appropriate scaling (a consequence of Bochner's theorem). Sampling frequencies
from this distribution gives an approximation to κ(·, ·) and consequently the matrix K.
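A hedged sketch of classic random Fourier features for the Gaussian kernel follows; the code, variable names, and the specific test points are our own illustration of the [RR07] method, using the bandwidth convention e^{−‖x − y‖²/σ} from the introduction:

```python
import math, random

# Hedged sketch (ours) of random Fourier features [RR07] for the Gaussian
# kernel k(x, y) = exp(-||x - y||^2 / sigma) used in the introduction. By
# Bochner's theorem its Fourier transform is (after scaling) a Gaussian
# density, here N(0, (2/sigma) I); sampled cosine features then estimate k.

def rff_features(x, freqs, shifts):
    s = len(freqs)
    return [math.sqrt(2.0 / s) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(freqs, shifts)]

random.seed(0)
d, s, sigma = 3, 10000, 1.0
freqs = [[random.gauss(0.0, math.sqrt(2.0 / sigma)) for _ in range(d)]
         for _ in range(s)]
shifts = [random.uniform(0.0, 2.0 * math.pi) for _ in range(s)]

x, y = (0.2, 0.1, -0.3), (-0.1, 0.4, 0.0)
zx, zy = rff_features(x, freqs, shifts), rff_features(y, freqs, shifts)
approx = sum(a * b for a, b in zip(zx, zy))
exact = math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / sigma)
# approx tracks exact up to O(1/sqrt(s)) Monte Carlo error
```

The naive cost of mapping all n points is O(nnz(A)·s), which is exactly the bottleneck the acceleration below addresses.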
We employ a new analysis of this method [AKM+17], which shows that sampling Õ(n/(ε²λ)) random
Fourier features suffices to give K̃ = Φ̃ Φ̃^T satisfying the spectral approximation guarantee:

    (1 − ε)(K̃ + λI) ⪯ K + λI ⪯ (1 + ε)(K̃ + λI).

If we set λ = ε·λ_{k+1}(K)/k, we can show that Φ̃ also gives a projection-cost preserving sketch
[CEM+15] for the kernelized dataset Φ. This ensures that any Z satisfying ‖Φ̃ − ZZ^T Φ̃‖²_F ≤
(1 + ε)‖Φ̃ − Φ̃_k‖²_F also satisfies ‖Φ − ZZ^T Φ‖²_F ≤ (1 + O(ε))‖Φ − Φ_k‖²_F and thus achieves (2).
Our algorithm samples s = Õ(n/(ε²λ)) = Õ(nk/(ε² λ_{k+1}(K))) random Fourier features, which naively requires O(nnz(A)s) time. We show that this can be accelerated to O(nnz(A)) + poly(n, s) time, using a recent result of Kapralov et al. on fast multiplication by random Gaussian matrices [KPW16].
Our technique is analogous to the "Fastfood" approach to accelerating random Fourier features using
fast Hadamard transforms [LSS13]. However, our runtime scales with nnz(A), which can be significantly smaller than the Õ(nd) runtime given by Fastfood when A is sparse. Our main algorithmic
result is:
Theorem 2 (Input sparsity time kernel PCA). There is an algorithm that given A ∈ R^{n×d} along
with shift and rotation-invariant kernel function κ : R^d × R^d → R+ with κ(x, x) = 1, outputs, with
probability 99/100, Z ∈ R^{n×k} satisfying:

    ‖Φ − ZZ^T Φ‖²_F ≤ (1 + ε)‖Φ − Φ_k‖²_F

for any Φ with ΦΦ^T = K = {κ(a_i, a_j)} and any ε > 0. Letting λ_{k+1} denote the (k + 1)th largest
eigenvalue of K and ω < 2.373 be the exponent of fast matrix multiplication, the algorithm runs in
O(nnz(A)) + Õ(n^{ω+1.5} · (k/(ε² λ_{k+1}))^{ω−1.5}) time.
We note that the runtime of our algorithm is O(nnz(A)) whenever n, k, 1/λ_{k+1}, and 1/ε are not
too large. Due to the relatively poor dependence on n, the algorithm is relevant for very high
dimensional datasets with d ≫ n. Such datasets are found often, e.g., in genetics applications
[HDC+01, JDMP11]. While we have dependence on 1/λ_{k+1}, in the natural setting, we only compute a low-rank approximation up to an error threshold, ignoring very small eigenvalues of K, and
so λ_{k+1} will not be too small. We do note that if we apply Theorem 2 to the low-rank approximation
instances given by our lower bound construction, λ_{k+1} can be very small, ≤ 1/poly(n, d) for matrices with poly(n) bounded entries. Thus, removing this dependence is an important open question
in understanding the complexity of low-rank kernel approximation.
We leave open the possibility of improving our algorithm, achieving O(nnz(A)) + n·poly(k, ε)
runtime, which would match the state-of-the-art for low-rank approximation of non-kernelized matrices [CW13]. Alternatively, it is possible that a lower bound can be shown, proving that the high n
dependence, or the 1/λ_{k+1} term, are required even for the kernel PCA guarantee of (2).
2  Lower bounds
Our lower bound proof argues that for a broad class of kernels, given input M, a low-rank approximation of the associated kernel matrix K achieving (1) can be used to obtain a close approximation
to the Gram matrix MM^T. We write κ(m_i, m_j) as a function of m_i^T m_j (or ‖m_i − m_j‖² for distance kernels) and expand this function as a power series. We show that if the input points are
appropriately rescaled, the contribution of the degree-1 term m_i^T m_j dominates, and hence our kernel
matrix approximates MM^T, up to some easy to compute low-rank components.
We then show that such an approximation can be used to give a fast algorithm for multiplying any
two integer matrices A ∈ R^{n×d} and C ∈ R^{d×k}. The key idea is to set M = [A^T, wC]^T where w is a
large weight. We then have:
    MM^T = [ AA^T       wAC
             wC^T A^T   w² C^T C ].
Since w is very large, the AA^T block is relatively very small, and so MM^T is nearly rank-2k:
it has a "heavy" strip of elements in its last k rows and columns. Thus, computing a relative-error
rank-2k approximation to MM^T recovers all entries except those in the AA^T block very accurately,
and importantly, recovers the wAC block and so the product AC.
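A small numeric illustration (ours, with a toy weight w rather than the value the proof requires) of why recovering the heavy strip recovers AC:

```python
# Small numeric illustration (ours, with a toy weight w rather than the value
# the proof requires): stacking B = [A^T, wC]^T makes the top-right n x k
# block of B B^T equal w * (A C), so any entrywise approximation with additive
# error below w/2 yields A C exactly after dividing by w and rounding.

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, -1]]                 # n x d
C = [[2, 0], [1, 1]]                  # d x k
w = 1000.0
wCt = [[w * C[i][j] for i in range(len(C))] for j in range(len(C[0]))]
B = A + wCt                           # (n + k) x d, rows of A then rows of w*C^T
BBt = matmul(B, [list(r) for r in zip(*B)])

n, k = len(A), len(C[0])
recovered = [[round(BBt[i][n + j] / w) for j in range(k)] for i in range(n)]
# recovered == A C == [[4, 2], [5, -1]]
```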
2.1  Lower bound for low-rank approximation of MM^T
We first illustrate our lower bound technique by showing hardness of direct approximation of MM^T.
Theorem 3 (Hardness of low-rank approximation for MM^T). Assume there is an algorithm A
which given any M ∈ R^{n×d} returns N ∈ R^{n×k} such that ‖MM^T − NN^T‖²_F ≤ Γ₁‖MM^T −
(MM^T)_k‖²_F in T(M, k) time for some approximation factor Γ₁.
For any A ∈ R^{n×d} and C ∈ R^{d×k} each with integer entries in [−Γ₂, Γ₂], let B = [A^T, wC]^T
where w = 3√(Γ₁) Γ₂² nd. It is possible to compute the product AC in time T(B, 2k) + O(nk^{ω−1}).
Proof. We can write the $(n+k) \times (n+k)$ matrix $BB^T$ as:
$$BB^T = [A^T, wC]^T [A^T, wC] = \begin{bmatrix} AA^T & wAC \\ wC^T A^T & w^2 C^T C \end{bmatrix}.$$
Let $Q \in \mathbb{R}^{(n+k) \times 2k}$ be an orthogonal span for the columns of the $(n+k) \times 2k$ matrix:
$$\begin{bmatrix} 0 & wAC \\ V & w^2 C^T C \end{bmatrix}$$
where $V \in \mathbb{R}^{k \times k}$ spans the columns of $wC^T A^T \in \mathbb{R}^{k \times n}$. The projection $QQ^T BB^T$ gives the best Frobenius norm approximation to $BB^T$ in the span of $Q$. We can see that:
$$\|BB^T - (BB^T)_{2k}\|_F^2 \le \|BB^T - QQ^T BB^T\|_F^2 \le \left\| \begin{bmatrix} AA^T & 0 \\ 0 & 0 \end{bmatrix} \right\|_F^2 \le 2^{4b} n^2 d^2 \qquad (3)$$
since each entry of $A$ is bounded in magnitude by $2^b$ and so each entry of $AA^T$ is bounded by $d\, 2^{2b}$.
Let $N$ be the matrix returned by running $\mathcal{A}$ on $B$ with rank $2k$. In order to achieve the approximation bound of $\|BB^T - NN^T\|_F^2 \le \Delta_1 \|BB^T - (BB^T)_{2k}\|_F^2$ we must have, for all $i, j$:
$$(BB^T - NN^T)_{i,j}^2 \le \|BB^T - NN^T\|_F^2 \le \Delta_1 2^{4b} n^2 d^2$$
where the last inequality is from (3). This gives $|BB^T - NN^T|_{i,j} \le \sqrt{\Delta_1}\, 2^{2b} nd$. Since $A$ and $C$ have integer entries, each entry in the submatrix $wAC$ of $BB^T$ is an integer multiple of $w = 3\sqrt{\Delta_1}\, 2^{2b} nd$. Since $(NN^T)_{i,j}$ approximates this entry to error $\sqrt{\Delta_1}\, 2^{2b} nd$, by simply rounding $(NN^T)_{i,j}$ to the nearest multiple of $w$, we obtain the entry exactly. Thus, given $N$, we can exactly recover $AC$ in $O(nk^{\omega - 1})$ time by computing the $n \times k$ submatrix corresponding to $AC$ in $BB^T$.
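To make the rounding argument concrete, the following small numerical sketch (my own illustration, not from the paper; the dimensions are arbitrary and an exact eigendecomposition stands in for algorithm $\mathcal{A}$, i.e. $\Delta_1 = 1$) builds $B = [A^T, wC]^T$, takes the best rank-$2k$ approximation of $BB^T$, and recovers $AC$ exactly by rounding the top-right block to multiples of $w$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, b = 6, 5, 2, 2
A = rng.integers(-2**b, 2**b + 1, size=(n, d))
C = rng.integers(-2**b, 2**b + 1, size=(d, k))

delta1 = 1.0  # exact best rank-2k approximation plays the role of algorithm A
w = 3 * np.sqrt(delta1) * 2**(2 * b) * n * d

B = np.vstack([A, (w * C).T])         # B = [A^T, wC]^T, shape (n+k) x d
K = B @ B.T                           # (n+k) x (n+k) Gram matrix

# Best rank-2k approximation N N^T of the PSD matrix B B^T via eigendecomposition
vals, vecs = np.linalg.eigh(K)
top = np.argsort(vals)[-2 * k:]
N = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))
approx = N @ N.T

# The top-right n x k block of B B^T equals w * (A C); round to recover AC
recovered = np.rint(approx[:n, n:] / w).astype(int)
assert np.array_equal(recovered, A @ C)
```

The entrywise error of the rank-$2k$ approximation is at most $\sqrt{\Delta_1}\, 2^{2b} nd < w/2$, so the rounding is exact.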
Theorem 3 gives our main bound Theorem 1 for the case of the linear kernel $\kappa(m_i, m_j) = m_i^T m_j$.
Proof of Theorem 1: Linear Kernel. We apply Theorem 3 after noting that for $B = [A^T, wC]^T$, $\text{nnz}(B) \le \text{nnz}(A) + nk$ and so $T(B, 2k) = o(\text{nnz}(A)k) + O(nk^2)$.
We show in Appendix A that there is an algorithm which nearly matches the lower bound of Theorem 1 for any $\Delta_1 = (1 + \epsilon)$ for any $\epsilon > 0$. Further, in Appendix B we show that even just outputting an orthogonal matrix $Z \in \mathbb{R}^{n \times k}$ such that $\tilde{K} = ZZ^T MM^T$ is a relative-error low-rank approximation of $MM^T$, but not computing a factorization of $\tilde{K}$ itself, is enough to give fast multiplication of integer matrices $A$ and $C$.
2.2 Lower bound for dot product kernels
We now extend Theorem 3 to general dot product kernels $\kappa$ where $\kappa(a_i, a_j) = f(a_i^T a_j)$ for some function $f$. This includes, for example, the polynomial kernel.
Theorem 4 (Hardness of low-rank approximation for dot product kernels). Consider any kernel $\kappa: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^+$ with $\kappa(a_i, a_j) = f(a_i^T a_j)$ for some function $f$ which can be expanded as $f(x) = \sum_{q=0}^{\infty} c_q x^q$ with $c_1 \neq 0$ and $|c_q / c_1| \le G^{q-1}$ for all $q \ge 2$ and some $G \ge 1$.
Assume there is an algorithm $\mathcal{A}$ which given $M \in \mathbb{R}^{n \times d}$ with kernel matrix $K = \{\kappa(m_i, m_j)\}$, returns $N \in \mathbb{R}^{n \times k}$ satisfying $\|K - NN^T\|_F^2 \le \Delta_1 \|K - K_k\|_F^2$ in $T(M, k)$ time.
For any $A \in \mathbb{R}^{n \times d}$, $C \in \mathbb{R}^{d \times k}$ with integer entries in $[-2^b, 2^b]$, let $B = [w_1 A^T, w_2 C]^T$ with $w_1 = \frac{w_2}{12\sqrt{\Delta_1}\, 2^{2b} nd}$, $w_2 = \frac{1}{4\sqrt{Gd}\, 2^b}$. Then it is possible to compute $AC$ in time $T(B, 2k+1) + O(nk^{\omega - 1})$.
Proof. Using our decomposition of $\kappa(\cdot, \cdot)$, we can write the kernel matrix for $B$ as:
$$K = c_0 \mathbf{1} + c_1 \begin{bmatrix} w_1^2 AA^T & w_1 w_2 AC \\ w_1 w_2 C^T A^T & w_2^2 C^T C \end{bmatrix} + c_2 K^{(2)} + c_3 K^{(3)} + \ldots \qquad (4)$$
where $K^{(q)}_{i,j} = (b_i^T b_j)^q$ and $\mathbf{1}$ denotes the all ones matrix of appropriate size. The key idea is to show that the contribution of the $K^{(q)}$ terms is small, and so any relative-error rank-$(2k+1)$ approximation to $K$ must recover an approximation to $BB^T$, and thus the product $AC$ as in Theorem 3.
By our setting of $w_2 = \frac{1}{4\sqrt{Gd}\, 2^b}$, the fact that $w_1 < w_2$, and our bound on the entries of $A$ and $C$, we have for all $i, j$:
$$|b_i^T b_j| \le w_2^2 d\, 2^{2b} \le \frac{1}{16G}.$$
Thus, for any $i, j$, using that $|c_q / c_1| \le G^{q-1}$:
$$\sum_{q=2}^{\infty} \left| c_q K^{(q)}_{i,j} \right| \le c_1 |b_i^T b_j| \sum_{q=2}^{\infty} G^{q-1} |b_i^T b_j|^{q-1} \le c_1 |b_i^T b_j| \sum_{q=2}^{\infty} \frac{G^{q-1}}{(16G)^{q-1}} \le \frac{1}{12} c_1 |b_i^T b_j|. \qquad (5)$$
Let $\bar{K}$ be the matrix $K - c_0 \mathbf{1}$, with its top left $n \times n$ block set to $0$. $\bar{K}$ just has its last $k$ columns and rows non-zero, so has rank $\le 2k$. Let $Q \in \mathbb{R}^{(n+k) \times (2k+1)}$ be an orthogonal span for the columns of $\bar{K}$ along with the all ones vector. Let $N$ be the result of running $\mathcal{A}$ on $B$ with rank $2k+1$. Then we have:
$$\|K - NN^T\|_F^2 \le \Delta_1 \|K - K_{2k+1}\|_F^2 \le \Delta_1 \|K - QQ^T K\|_F^2 \le \Delta_1 \left\| \begin{bmatrix} c_1 w_1^2 AA^T + c_2 \bar{K}^{(2)} + \ldots & 0 \\ 0 & 0 \end{bmatrix} \right\|_F^2 \qquad (6)$$
where $\bar{K}^{(q)}$ denotes the top left $n \times n$ submatrix of $K^{(q)}$.
By our bound on the entries of $A$ and (5):
$$\left| \left( c_1 w_1^2 AA^T + c_2 \bar{K}^{(2)} + c_3 \bar{K}^{(3)} + \ldots \right)_{i,j} \right| \le \frac{13}{12} \left| \left( c_1 w_1^2 AA^T \right)_{i,j} \right| \le 2 c_1 w_1^2 d\, 2^{2b}.$$
Plugging back into (6) and using $w_1 = \frac{w_2}{12\sqrt{\Delta_1}\, 2^{2b} nd}$, this gives for any $i, j$:
$$\left| (K - NN^T)_{i,j} \right| \le \|K - NN^T\|_F \le \sqrt{\Delta_1}\, n \cdot 2 c_1 w_1^2 d\, 2^{2b} = 2 c_1 d\, 2^{2b} \sqrt{\Delta_1}\, n\, w_1 \cdot \frac{w_2}{12\sqrt{\Delta_1}\, 2^{2b} nd} = \frac{c_1 w_1 w_2}{6}. \qquad (7)$$
Since $A$ and $C$ have integer entries, each entry of $c_1 w_1 w_2 AC$ is an integer multiple of $c_1 w_1 w_2$. By the decomposition of (4) and the bound of (5), if we subtract $c_0$ from the corresponding entry of $K$ and round it to the nearest multiple of $c_1 w_1 w_2$, we will recover the entry of $AC$. By the bound of (7), we can likewise round the corresponding entry of $NN^T$. Computing all $nk$ of these entries given $N$ takes time $O(nk^{\omega - 1})$, giving the theorem.
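As a sanity check on the series-domination argument, here is a small numerical sketch (my own, with illustrative dimensions and $\Delta_1 = 1$ via an exact eigendecomposition) for the degree-2 polynomial kernel $f(x) = (c + x)^2$: after rescaling by $w_1, w_2$, subtracting $c_0$ and rounding by $c_1 w_1 w_2$ recovers $AC$ from a rank-$(2k+1)$ approximation of $K$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, b, q, c = 5, 4, 2, 1, 2, 1.0
A = rng.integers(-2**b, 2**b + 1, size=(n, d))
C = rng.integers(-2**b, 2**b + 1, size=(d, k))

G = q / c                   # |c_j / c_1| <= G^{j-1} for f(x) = (c + x)^q
delta1 = 1.0                # eigendecomposition gives the best rank-(2k+1) approx
w2 = 1.0 / (4 * np.sqrt(G * d) * 2**b)
w1 = w2 / (12 * np.sqrt(delta1) * 2**(2 * b) * n * d)

B = np.vstack([w1 * A, (w2 * C).T])           # (n+k) x d, heavily rescaled
K = (c + B @ B.T) ** q                        # polynomial kernel matrix

vals, vecs = np.linalg.eigh(K)
top = np.argsort(vals)[-(2 * k + 1):]
N = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))
approx = N @ N.T

c0, c1 = c**q, q * c**(q - 1)                 # constant and degree-1 coefficients
recovered = np.rint((approx[:n, n:] - c0) / (c1 * w1 * w2)).astype(int)
assert np.array_equal(recovered, A @ C)
```

With this rescaling the degree-$\ge 2$ terms contribute far less than half a multiple of $c_1 w_1 w_2$, so the rounding succeeds.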
Theorem 4 lets us lower bound the time to compute a low-rank kernel approximation for any kernel function expressible as a reasonable power expansion of $a_i^T a_j$. As a straightforward example, it gives the lower bound for the polynomial kernel of any degree stated in Theorem 1.
Proof of Theorem 1: Polynomial Kernel. We apply Theorem 4, noting that $\kappa(m_i, m_j) = (c + m_i^T m_j)^q$ can be written as $f(m_i^T m_j)$ where $f(x) = \sum_{j=0}^{q} c_j x^j$ with $c_j = c^{q-j} \binom{q}{j}$. Thus $c_1 \neq 0$ and $|c_j / c_1| \le G^{j-1}$ for $G = q/c$. Finally note that $\text{nnz}(B) \le \text{nnz}(A) + nk$, giving the result.
2.3 Lower bound for distance kernels
We finally extend Theorem 4 to handle kernels like the Gaussian kernel whose value depends on the squared distance $\|a_i - a_j\|^2$ rather than just the dot product $a_i^T a_j$. We prove:
Theorem 5 (Hardness of low-rank approximation for distance kernels). Consider any kernel function $\kappa: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^+$ with $\kappa(a_i, a_j) = f(\|a_i - a_j\|^2)$ for some function $f$ which can be expanded as $f(x) = \sum_{q=0}^{\infty} c_q x^q$ with $c_1 \neq 0$ and $|c_q / c_1| \le G^{q-1}$ for all $q \ge 2$ and some $G \ge 1$.
Assume there is an algorithm $\mathcal{A}$ which given input $M \in \mathbb{R}^{n \times d}$ with kernel matrix $K = \{\kappa(m_i, m_j)\}$, returns $N \in \mathbb{R}^{n \times k}$ satisfying $\|K - NN^T\|_F^2 \le \Delta_1 \|K - K_k\|_F^2$ in $T(M, k)$ time.
For any $A \in \mathbb{R}^{n \times d}$, $C \in \mathbb{R}^{d \times k}$ with integer entries in $[-2^b, 2^b]$, let $B = [w_1 A^T, w_2 C]^T$ with $w_1 = \frac{w_2}{36\sqrt{\Delta_1}\, 2^{2b} nd}$, $w_2 = \frac{1}{(16 G d\, 2^{4b})(36\sqrt{\Delta_1}\, 2^{2b} nd)}$. It is possible to compute $AC$ in $T(B, 2k+3) + O(nk^{\omega - 1})$ time.
The proof of Theorem 5 is similar to that of Theorem 4, and relegated to Appendix C. The key idea is to write $K$ as a polynomial in the distance matrix $D$ with $D_{i,j} = \|b_i - b_j\|_2^2$. Since $\|b_i - b_j\|_2^2 = \|b_i\|_2^2 + \|b_j\|_2^2 - 2 b_i^T b_j$, $D$ can be written as $-2BB^T$ plus a rank-2 component. By setting $w_1, w_2$ sufficiently small, as in the proof of Theorem 4, we ensure that the higher powers of $D$ are negligible, and thus that our low-rank approximation must accurately recover the submatrix of $BB^T$ corresponding to $AC$. Theorem 5 gives Theorem 1 for the popular Gaussian kernel:
Proof of Theorem 1: Gaussian Kernel. $\kappa(m_i, m_j)$ can be written as $f(\|m_i - m_j\|^2)$ where $f(x) = e^{-x/\sigma} = \sum_{q=0}^{\infty} \frac{(-1/\sigma)^q}{q!} x^q$. Thus $c_1 \neq 0$ and $|c_q / c_1| \le G^{q-1}$ for $G = 1/\sigma$. Applying Theorem 5 and bounding $\text{nnz}(B) \le \text{nnz}(A) + nk$ gives the result.
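The decomposition used in this proof sketch is the standard identity relating a squared-distance matrix to the Gram matrix; a quick numerical check (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(6, 4))

# D_ij = ||b_i - b_j||^2 computed directly from pairwise differences
D = np.sum((B[:, None, :] - B[None, :, :]) ** 2, axis=2)

# The same matrix as a rank-2 component minus 2 B B^T
sq = np.sum(B**2, axis=1)
rank2 = sq[:, None] + sq[None, :]          # outer sums of squared norms: rank <= 2
assert np.allclose(D, rank2 - 2 * B @ B.T)
assert np.linalg.matrix_rank(rank2) <= 2
```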
3 Input sparsity time kernel PCA for radial basis kernels
Theorem 1 gives little hope for achieving $o(\text{nnz}(A)k)$ time for low-rank kernel approximation. However, the guarantee of (1) is not the only way of measuring the quality of $\tilde{K}$. Here we show that for shift/rotationally invariant kernels, including e.g., radial basis kernels, input sparsity time can be achieved for the kernel PCA goal of (2).
3.1 Basic algorithm
Our technique is based on the random Fourier features technique [RR07]. Given any shift-invariant kernel, $\kappa(x, y) = \kappa(x - y)$ with $\kappa(0) = 1$ (we will assume this w.l.o.g. as the function can always be scaled), there is a probability density function $p(\eta)$ over vectors in $\mathbb{R}^d$ such that:
$$\kappa(x - y) = \int_{\mathbb{R}^d} e^{-2\pi i \eta^T (x - y)} p(\eta)\, d\eta. \qquad (8)$$
$p(\eta)$ is just the (inverse) Fourier transform of $\kappa(\cdot)$, and is a density function by Bochner's theorem. Informally, given $A \in \mathbb{R}^{n \times d}$, if we let $Z$ denote the matrix with columns $z(\eta)$ indexed by $\eta \in \mathbb{R}^d$, with $z(\eta)_j = e^{-2\pi i \eta^T a_j}$, then (8) gives $Z P Z^* = K$ where $P$ is diagonal with $P_{\eta,\eta} = p(\eta)$, and $Z^*$ denotes the Hermitian transpose.
The idea of random Fourier features is to select $s$ frequencies $\eta_1, \ldots, \eta_s$ according to the density $p(\eta)$ and set $\tilde{Z} = \frac{1}{\sqrt{s}}[z(\eta_1), \ldots, z(\eta_s)]$. $\tilde{K} = \tilde{Z} \tilde{Z}^*$ is then used to approximate $K$.
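A minimal sketch of this construction for the Gaussian kernel (my own illustration; I use the convention $z(\eta)_j = e^{i \eta^T a_j}$, which absorbs the $2\pi$ of (8) into the scale of $\eta$, and the data scale and $s$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, s, sigma = 80, 5, 2000, 1.0
A = 0.3 * rng.normal(size=(n, d))

# Exact Gaussian kernel matrix K_ij = exp(-||a_i - a_j||^2 / (2 sigma^2))
sq_dist = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=2)
K = np.exp(-sq_dist / (2 * sigma**2))

# For this kernel p(eta) is Gaussian: eta ~ N(0, I / sigma^2)
eta = rng.normal(scale=1.0 / sigma, size=(s, d))
Z = np.exp(1j * (A @ eta.T)) / np.sqrt(s)       # n x s, so Z Z^* ~= K

K_approx = (Z @ Z.conj().T).real
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
assert rel_err < 0.1
```

Each entry of $\tilde{Z}\tilde{Z}^*$ is an average of $s$ unit-modulus phases with the correct expectation, so the approximation error shrinks like $1/\sqrt{s}$.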
In recent work, Avron et al. [AKM+17] give a new analysis of random Fourier features. Extending prior work on ridge leverage scores in the discrete setting [AM15, CMM17], they define the ridge leverage function for parameter $\lambda > 0$:
$$\tau_\lambda(\eta) = p(\eta) z(\eta)^* (K + \lambda I)^{-1} z(\eta) \qquad (9)$$
As part of their results, which seek $\tilde{K}$ that spectrally approximates $K$, they prove the following:
Lemma 6. For all $\eta$, $\tau_\lambda(\eta) \le n / \lambda$.
While simple, this bound is key to our algorithm. It was shown in [CMM17] that if the columns of a matrix are sampled by over-approximations to their ridge leverage scores (with appropriately set $\lambda$), the sample is a projection-cost preserving sketch for the original matrix. That is, it can be used as a surrogate in computing a low-rank approximation. The results of [CMM17] carry over to the continuous setting giving, in conjunction with Lemma 6:
Lemma 7 (Projection-cost preserving sketch via random Fourier features). Consider any $A \in \mathbb{R}^{n \times d}$ and shift-invariant kernel $\kappa(\cdot)$ with $\kappa(0) = 1$, with associated kernel matrix $K = \{\kappa(a_i - a_j)\}$ and kernel Fourier transform $p(\eta)$. For any $0 < \lambda \le \frac{1}{k} \sum_{i=k+1}^{n} \sigma_i(K)$, let $s = \frac{c\, n \log(n/\delta)}{\lambda \epsilon^2}$ for sufficiently large $c$ and let $\tilde{Z} = \frac{1}{\sqrt{s}}[z(\eta_1), \ldots, z(\eta_s)]$ where $\eta_1, \ldots, \eta_s$ are sampled independently according to $p(\eta)$. Then with probability $1 - \delta$, for any orthonormal $Q \in \mathbb{R}^{n \times k}$ and any $\Phi$ with $\Phi \Phi^T = K$:
$$(1 - \epsilon)\|\tilde{Z} - QQ^T \tilde{Z}\|_F^2 \le \|\Phi - QQ^T \Phi\|_F^2 \le (1 + \epsilon)\|\tilde{Z} - QQ^T \tilde{Z}\|_F^2. \qquad (10)$$
By (10), if we compute $Q$ satisfying $\|\tilde{Z} - QQ^T \tilde{Z}\|_F^2 \le (1 + \epsilon)\|\tilde{Z} - \tilde{Z}_k\|_F^2$ then we have:
$$\|\Phi - QQ^T \Phi\|_F^2 \le (1 + \epsilon)^2 \|\tilde{Z} - \tilde{Z}_k\|_F^2 \le \frac{(1 + \epsilon)^2}{1 - \epsilon} \|\Phi - U_k U_k^T \Phi\|_F^2 = (1 + O(\epsilon)) \|\Phi - \Phi_k\|_F^2$$
where $U_k \in \mathbb{R}^{n \times k}$ contains the top $k$ column singular vectors of $\Phi$. By adjusting constants on $\epsilon$ by making $c$ large enough, we thus have the relative error low-rank approximation guarantee of (2). It remains to show that this approach can be implemented efficiently.
3.2 Input sparsity time implementation
Given $\tilde{Z}$ sampled as in Lemma 7, we can find a near optimal subspace $Q$ using any input sparsity time low-rank approximation algorithm (e.g., [CW13, NN13]). We have the following corollary:
Corollary 8. Given $\tilde{Z}$ sampled as in Lemma 7 with $s = \tilde{\Theta}\left( \frac{nk}{\epsilon^2 \sigma_{k+1}(K)} \right)$, there is an algorithm running in time $\tilde{O}\left( \frac{n^2 k}{\epsilon^2 \sigma_{k+1}(K)} \right)$ that computes $Q$ satisfying with high probability, for any $\Phi$ with $\Phi \Phi^T = K$:
$$\|\Phi - QQ^T \Phi\|_F^2 \le (1 + \epsilon) \|\Phi - \Phi_k\|_F^2.$$
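A small end-to-end sketch of the corollary's pipeline (my own illustration using real cos/sin features, so that the top singular vectors of the sketch give a real orthonormal $Q$; the sizes and the slack factor are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k, s = 60, 3, 3, 4000
A = 0.5 * rng.normal(size=(n, d))
sq = np.sum((A[:, None] - A[None, :]) ** 2, axis=2)
K = np.exp(-sq / 2)                                  # Gaussian kernel matrix

# Real-valued random Fourier features (cos/sin pairs): Z Z^T ~= K
eta = rng.normal(size=(s, d))
P = A @ eta.T
Z = np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(s)   # n x 2s

# Q: top-k left singular vectors of the sketch
Q = np.linalg.svd(Z, full_matrices=False)[0][:, :k]

# For any Phi with Phi Phi^T = K:  ||Phi - Q Q^T Phi||_F^2 = tr(K) - tr(Q^T K Q)
evals = np.linalg.eigvalsh(K)
opt = evals.sum() - evals[-k:].sum()                 # ||Phi - Phi_k||_F^2
cost = np.trace(K) - np.trace(Q.T @ K @ Q)
assert opt - 1e-8 <= cost <= 1.5 * opt
```

The trace identity lets us evaluate the projection cost without ever materializing $\Phi$.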
With Corollary 8 in place, the main bottleneck to our approach becomes computing $\tilde{Z}$.
3.2.1 Sampling Frequencies
To compute $\tilde{Z}$, we first sample $\eta_1, \ldots, \eta_s$ according to $p(\eta)$. Here we use the rotational invariance of $\kappa(\cdot)$. In this case, $p(\eta)$ is also rotationally invariant [LSS13] and so, letting $\bar{p}(\cdot)$ be the distribution over norms of vectors sampled from $p(\eta)$, we can sample $\eta_1, \ldots, \eta_s$ by first selecting $s$ random Gaussian vectors and then rescaling them to have norms distributed according to $\bar{p}(\cdot)$. That is, we can write $[\eta_1, \ldots, \eta_s] = GD$ where $G \in \mathbb{R}^{d \times s}$ is a random Gaussian matrix and $D$ is a diagonal rescaling matrix with $D_{ii} = \frac{m}{\|G_i\|}$ with $m \sim \bar{p}$. We will assume that $\bar{p}$ can be sampled from in $O(1)$ time. This is true for many natural kernels: e.g., for the Gaussian kernel, $\bar{p}$ is just a Gaussian density.
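The $GD$ factorization can be sketched as follows (my own illustration; the norm distribution $\bar{p}$ is sampled here by drawing a fresh Gaussian vector and taking its norm, as appropriate for a Gaussian $p(\eta)$):

```python
import numpy as np

rng = np.random.default_rng(4)
d, s = 6, 1000
G = rng.normal(size=(d, s))                      # random Gaussian directions

# Draw target norms m_i ~ pbar (chi-distributed for Gaussian p(eta))
m = np.linalg.norm(rng.normal(size=(d, s)), axis=0)

D = np.diag(m / np.linalg.norm(G, axis=0))       # D_ii = m_i / ||G_i||
eta = G @ D                                      # [eta_1, ..., eta_s] = G D

# Columns of G D point in uniformly random directions with norms ~ pbar
assert np.allclose(np.linalg.norm(eta, axis=0), m)
```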
3.2.2 Computing $\tilde{Z}$
Due to our large sample size, $s > n$, even writing down $G$ above requires $\Omega(nd)$ time. However, to form $\tilde{Z}$ we do not need $G$ itself: it suffices to compute for $m = 1, \ldots, s$ the column $z(\eta_m)$ with $z(\eta_m)_j = e^{-2\pi i \eta_m^T a_j}$. This requires computing $AGD$, which contains the appropriate dot products $a_j^T \eta_m$ for all $m, j$. We use a recent result [KPW16] which shows that this can be performed approximately in input sparsity time:
Lemma 9 (From Theorem 1 of [KPW16]). There is an algorithm running in $O\left(\text{nnz}(A) + \frac{\log^4(d)\, n^3 s^{\omega - 1.5}}{\delta}\right)$ time which outputs random $B$ whose distribution has total variation distance at most $\delta$ from the distribution of $AG$ where $G \in \mathbb{R}^{d \times s}$ is a random Gaussian matrix. Here, $\omega < 2.373$ is the exponent of fast matrix multiplication.
Proof. Theorem 1 of [KPW16] shows that for $B$ to have total variation distance $\delta$ from the distribution of $AG$ it suffices to set $B = ACG'$ where $C$ is a $d \times O(\log^4(d)\, n^2 s^{1/2} / \delta)$ CountSketch matrix and $G'$ is an $O(\log^4(d)\, n^2 s^{1/2} / \delta) \times s$ random Gaussian matrix. Computing $AC$ requires $O(\text{nnz}(A))$ time. Multiplying the result by $G'$ then requires $O\left(\frac{\log^4(d)\, n^3 s^{1.5}}{\delta}\right)$ time if fast matrix multiplication is not employed. Using fast matrix multiplication, this can be improved to $O\left(\frac{\log^4(d)\, n^3 s^{\omega - 1.5}}{\delta}\right)$.
Applying Lemma 9 with $\delta = 1/200$ lets us compute random $BD$ with total variation distance $1/200$ from $AGD$. Thus, the distribution of $\tilde{Z}$ generated from this matrix has total variation distance $\le 1/200$ from the $\tilde{Z}$ generated from the true random Fourier features distribution. So, by Corollary 8, we can use $\tilde{Z}$ to compute $Q$ satisfying $\|\Phi - QQ^T \Phi\|_F^2 \le (1 + \epsilon)\|\Phi - \Phi_k\|_F^2$ with probability $\ge 1 - 1/100$, accounting for the total variation difference and the failure probability of Corollary 8. This yields our main algorithmic result, Theorem 2.
3.3 An alternative approach
We conclude by noting that near input sparsity time Kernel PCA can also be achieved for a broad class of kernels using a very different approach. We can approximate $\kappa(\cdot, \cdot)$ via an expansion into polynomial kernel matrices as is done in [CKS11] and then apply the sketching algorithms for the polynomial kernel developed in [ANW14]. As long as the expansion achieves high accuracy with low degree, and as long as $1/\sigma_{k+1}$ is not too small (since this will control the necessary approximation factor), this technique can yield runtimes of the form $\tilde{O}(\text{nnz}(A)) + \text{poly}(n, k, 1/\sigma_{k+1}, 1/\epsilon)$, giving improved dependence on $n$ for some kernels over our random Fourier features method. Improving the $\text{poly}(n, k, 1/\sigma_{k+1}, 1/\epsilon)$ term in both these methods, and especially removing the $1/\sigma_{k+1}$ dependence and achieving linear dependence on $n$, is an interesting open question for future work.
4 Conclusion
In this work we have shown that for a broad class of kernels, including the Gaussian, polynomial, and linear kernels, given data matrix $A$, computing a relative error low-rank approximation to $A$'s kernel matrix $K$ (i.e., satisfying (1)) requires at least $\Omega(\text{nnz}(A)k)$ time, barring a major breakthrough in the runtime of matrix multiplication. In the constant error regime, this lower bound essentially matches the runtimes given by recent work on subquadratic time kernel and PSD matrix low-rank approximation [MM16, MW17].
We show that for the alternative kernel PCA guarantee of (2), a potentially faster runtime of $O(\text{nnz}(A)) + \text{poly}(n, k, 1/\sigma_{k+1}, 1/\epsilon)$ can be achieved for general shift and rotation-invariant kernels. Practically, improving the second term in our runtime, especially the poor dependence on $n$, is an important open question. Generally, computing the kernel matrix $K$ explicitly requires $O(n^2 d)$ time, and so our algorithm only gives runtime gains when $d$ is large compared to $n$: at least $\Omega(n^{\omega - 0.5})$, even ignoring $k$, $\sigma_{k+1}$, and $\epsilon$ dependencies. Theoretically, removing the dependence on $\sigma_{k+1}$ would be of interest, as it would give input sparsity runtime without any assumptions on the matrix $A$ (i.e., that $\sigma_{k+1}$ is not too small). Resolving this question has strong connections to finding efficient kernel subspace embeddings, which approximate the full spectrum of $K$.
References
[AKM+17] Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh. Random Fourier features for kernel ridge regression: Approximation bounds and statistical guarantees. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
[AM15] Ahmed Alaoui and Michael W Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems 28 (NIPS), pages 775-783, 2015.
[AMS01] Dimitris Achlioptas, Frank McSherry, and Bernhard Schölkopf. Sampling techniques for kernel methods. In Advances in Neural Information Processing Systems 14 (NIPS), 2001.
[ANW14] Haim Avron, Huy Nguyen, and David Woodruff. Subspace embeddings for the polynomial kernel. In Advances in Neural Information Processing Systems 27 (NIPS), pages 2258-2266, 2014.
[BIS17] Arturs Backurs, Piotr Indyk, and Ludwig Schmidt. On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.
[BJ02] Francis Bach and Michael I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3(Jul):1-48, 2002.
[BW09] Mohamed-Ali Belabbas and Patrick J. Wolfe. Spectral methods in machine learning: New strategies for very large datasets. Proceedings of the National Academy of Sciences of the USA, 106:369-374, 2009.
[CEM+15] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), pages 163-172, 2015.
[CKS11] Andrew Cotter, Joseph Keshet, and Nathan Srebro. Explicit approximations of the Gaussian kernel. arXiv:1109.4603, 2011.
[CMM17] Michael B. Cohen, Cameron Musco, and Christopher Musco. Input sparsity time low-rank approximation via ridge leverage score sampling. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1758-1777, 2017.
[CW13] Kenneth L Clarkson and David P Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pages 81-90, 2013.
[CW17] Kenneth L. Clarkson and David P. Woodruff. Low-rank PSD approximation in input-sparsity time. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2061-2072, 2017.
[DM05] Petros Drineas and Michael W Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153-2175, 2005.
[FS02] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243-264, 2002.
[FT07] Shmuel Friedland and Anatoli Torokhti. Generalized rank-constrained matrix approximations. SIAM Journal on Matrix Analysis and Applications, 29(2):656-659, 2007.
[GM13] Alex Gittens and Michael Mahoney. Revisiting the Nyström method for improved large-scale machine learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 567-575, 2013. Full version at arXiv:1303.1849.
[GU17] François Le Gall and Florent Urrutia. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. arXiv:1708.05622, 2017.
[HDC+01] Ingrid Hedenfalk, David Duggan, Yidong Chen, Michael Radmacher, Michael Bittner, Richard Simon, Paul Meltzer, Barry Gusterson, Manel Esteller, Mark Raffeld, et al. Gene-expression profiles in hereditary breast cancer. New England Journal of Medicine, 344(8):539-548, 2001.
[JDMP11] Asif Javed, Petros Drineas, Michael W Mahoney, and Peristera Paschou. Efficient genomewide selection of PCA-correlated tSNPs for genotype imputation. Annals of Human Genetics, 75(6):707-722, 2011.
[KPW16] Michael Kapralov, Vamsi Potluru, and David Woodruff. How to fake multiply by a Gaussian matrix. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 2101-2110, 2016.
[LG12] François Le Gall. Faster algorithms for rectangular matrix multiplication. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 514-523, 2012.
[LG14] François Le Gall. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, pages 296-303. ACM, 2014.
[LSS13] Quoc Le, Tamás Sarlós, and Alexander Smola. Fastfood - Computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 244-252, 2013.
[MM16] Cameron Musco and Christopher Musco. Recursive sampling for the Nyström method. In Advances in Neural Information Processing Systems 30 (NIPS), 2016.
[MW17] Cameron Musco and David P Woodruff. Sublinear time low-rank approximation of positive semidefinite matrices. In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2017.
[NN13] Jelani Nelson and Huy L Nguyễn. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 117-126, 2013.
[RR07] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20 (NIPS), pages 1177-1184, 2007.
[SS00] Alex J Smola and Bernhard Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning (ICML), pages 911-918, 2000.
[WS01] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 14 (NIPS), pages 682-688, 2001.
[WZ13] Shusen Wang and Zhihua Zhang. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. Journal of Machine Learning Research, 14:2729-2769, 2013.
[ZTK08] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Improved Nyström low-rank approximation and error analysis. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 1232-1239, 2008.
The Expxorcist: Nonparametric Graphical Models
Via Conditional Exponential Densities
Arun Sai Suggala ?
Carnegie Mellon University
Pittsburgh, PA 15213
Mladen Kolar ?
University of Chicago
Chicago, IL 60637
Pradeep Ravikumar ?
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric
assumptions on the form of the density functions. In this paper, we leverage recent developments to propose a class of non-parametric models which have very
attractive computational and statistical properties. Our approach relies on the
simple function space assumption that the conditional distribution of each variable
conditioned on the other variables has a non-parametric exponential family form.
1 Introduction
Let $X = (X_1, \ldots, X_p)$ be a $p$-dimensional random vector. Let $G = (V, E)$ be the graph that encodes conditional independence assumptions underlying the distribution of $X$, that is, each node of the graph corresponds to a component of vector $X$ and $(a, b) \in E$ if and only if $X_a \not\perp\!\!\!\perp X_b \mid X_{-ab}$ with $X_{-ab} := \{X_c \mid c \in V \setminus \{a, b\}\}$. The graphical model represented by $G$ is then the set of distributions over $X$ that satisfy the conditional independence assumptions specified by the graph $G$.
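The definition can be illustrated in the simplest parametric special case, a Gaussian graphical model, where edges correspond to non-zeros of the precision matrix (my own illustration; the paper's focus is the non-parametric setting):

```python
import numpy as np

# A 3-node chain X1 - X2 - X3 encoded by a tridiagonal precision matrix
Theta = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
Sigma = np.linalg.inv(Theta)

# X1 and X3 are conditionally independent given X2 (Theta[0, 2] == 0)
# even though they are marginally correlated (Sigma[0, 2] != 0).
assert Theta[0, 2] == 0 and abs(Sigma[0, 2]) > 1e-6
```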
There has been a considerable line of work on learning parametric families of such graphical model
distributions from data [22, 20, 13, 28], where the distribution is indexed by a finite-dimensional
parameter vector. The goal of this paper, however, is on specifying and learning nonparametric
families of graphical model distributions, indexed by infinite-dimensional parameters, and for which
there has been comparatively limited work. Non-parametric multivariate density estimation broadly,
even without the graphical model constraint, has not proved as popular in practical machine learning
contexts, for both statistical and computational reasons. Loosely, estimating a non-parametric
multivariate density, with mild assumptions, typically requires the number of samples to scale
exponentially in the dimension p of the data, which is infeasible even in the big-data era when n is
very large. And the resulting estimators are typically computationally expensive or intractable, for
instance requiring repeated computations of multivariate integrals.
We present a review of multivariate density estimation, that is necessarily incomplete but sets up our proposed approach. A common approach dating back to [15] uses the logistic density transform to satisfy the unity and positivity constraints for densities, and considers densities of the form
$$f(X) = \frac{\exp(\phi(X))}{\int_{\mathcal{X}} \exp(\phi(x))\, dx},$$
with some constraints on $\phi$ for identifiability such as $\phi(X_0) = 0$ for some $X_0 \in \mathcal{X}$ or $\int_{\mathcal{X}} \phi(x)\, dx = 0$.
With the logistic density transform, differing approaches for non-parametric density estimation can be contrasted in part by their assumptions on the infinite-dimensional function space domain of $\phi(\cdot)$. An early approach [8] considered function spaces of functions with bounded "roughness" functionals. The predominant line of work however has focused on the setting where $\phi(\cdot)$ lies in a Reproducing Kernel Hilbert Space (RKHS), dating back to [21]. Consider the estimation of these logistic density
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
transforms φ(X) given n i.i.d. samples X_n = {X^(i)}_{i=1}^n drawn from f*(X). A natural loss functional is the penalized log likelihood, with a penalty functional that ensures a smooth fit with respect to the function space domain: ℓ(φ; X_n) := −(1/n) ∑_{i∈[n]} φ(X^(i)) + log ∫ exp(φ(x))dx + λ·pen(φ), for functions φ(·) that lie in an RKHS H, and where pen(φ) = ‖φ‖²_H is the squared RKHS norm. This was studied by many [21, 11, 6]. A crucial caveat is that the representer theorem for RKHSs does not hold. Nonetheless, one can consider finite-dimensional function space approximations consisting of the linear span of kernel functions evaluated at the sample points [12]. Computationally this still scales poorly with the dimension due to the need to compute multidimensional integrals of the form ∫ exp(φ(x))dx, which do not, in general, decompose. These approximations also do not come with strong statistical guarantees.
We briefly note that the function space assumption that φ(·) lies in an RKHS could also be viewed from the lens of an infinite-dimensional exponential family [4]. Specifically, let H be a Reproducing Kernel Hilbert Space with reproducing kernel k(·, ·) and inner product ⟨·, ·⟩_H. Then φ(X) = ⟨φ(·), k(X, ·)⟩_H, so that the density f(X) can in turn be viewed as a member of an infinite-dimensional exponential family with sufficient statistics k(X, ·) : 𝒳 → H, and natural parameter φ(·) ∈ H. Following this viewpoint, [4] propose estimators via linear span approximations similar to [11].
Due to the computational caveat with exact likelihood based functionals, a line of approaches have focused on penalized surrogate likelihoods instead. [14] study the following loss functional: ℓ(φ; X_n) := (1/n) ∑_{i∈[n]} exp(−φ(X^(i))) + ∫ φ(x)ρ(x)dx + λ·pen(φ), where ρ(X) is some fixed known
density with the same support as the unknown density f (X). While this estimation procedure is
much more computationally amenable than minimizing the exact penalized likelihood, the caveat,
however, is that for a general RKHS this requires solving higher order integrals. The next level of
simplification has thus focused on the form of the logistic transform function itself. There has been a line of work on an ANOVA type decomposition of the logistic density function into node-wise and pairwise terms: φ(X) = ∑_{s=1}^p φ_s(X_s) + ∑_{s=1}^p ∑_{t=s+1}^p φ_st(X_s, X_t). A line of work has coupled
such a decomposition with the assumption that each of the terms lie in an RKHS. This does not
immediately provide a computational benefit: with penalized likelihood based loss functionals, the
loss functional does not necessarily decompose into such node and pairwise terms. [24] thus couple
this ANOVA type pairwise decomposition with a score matching based objective. [10] use the above
decomposition with the surrogate loss functional of [14] discussed above, but note that this still
requires the aforementioned function space approximation as a linear span of kernel evaluations, as
well as two-dimensional integrals.
A line of recent work has thus focused on further stringent assumptions on the density function space,
by assuming some components of the logistic transform to be finite-dimensional. [30] use an ANOVA
decomposition but assume the terms belong to finite-dimensional function spaces instead of RKHSs,
specified by a pre-defined finite set of basis functions. [29] consider logistic transform functions φ(·) that have the pairwise decomposition above, with a specific class of parametric pairwise functions θ_st X_s X_t, and non-parametric node-wise functions. [17, 16] consider the problem of estimating
monotonic node-wise functions such that the transformed random vector is multivariate Gaussian;
which could also be viewed as estimating a Gaussian copula density.
To summarize the (necessarily incomplete) review above, non-parametric density estimation faces
strong statistical and computational bottlenecks, and the more practical approaches impose stringent
near-parametric assumptions on the form of the (logistic transform of the) density functions. In this
paper, we leverage recent developments to propose a very computationally simple non-parametric
density estimation algorithm, that still comes with strong statistical guarantees. Moreover, the
density could be viewed as a graphical model distribution, with a corresponding sparse conditional
independence graph.
Our approach relies on the following simple function space assumption: that the conditional distribution of each variable conditioned on the other variables has a non-parametric exponential family form. As we show, for there to exist a consistent joint density, the logistic density transform with respect to a particular base measure necessarily decomposes into the following semi-parametric form: φ(X) = ∑_{s=1}^p θ_s B_s(X_s) + ∑_{s=1}^p ∑_{t=s+1}^p θ_st B_s(X_s) B_t(X_t) in the pairwise case, with both a parametric component {θ_s : s = 1, ..., p}, {θ_st : s < t; s, t = 1, ..., p}, as well as non-parametric components {B_s : s = 1, ..., p}. We call this class of models the "expxorcist", following other "ghostbusting" semi-parametric models such as the nonparanormal and nonparanormal skeptic [17, 16].
Since the conditional distributions are exponential families, we show that there exist computationally
amenable estimators, even in our more general non-parametric setting, where the sufficient statistics
have to be estimated as well. The statistical analysis in our non-parametric setting however is more
subtle, due in part to non-convexity and in part to the non-parametric setting. We also show how the
Expxorcist class of densities is closely related to a semi-parametric exponential family copula density
that generalizes the Gaussian copula density of [17, 16]. We corroborate the applicability of our class
of models with experiments on synthetic and real data sets.
2 Multivariate Density Specification via Conditional Densities
We are interested in the approach of estimating a multivariate density by estimating node-conditional
densities. Since node-conditional densities focus on the density of a single variable, though conditioned on the rest of the variables, estimating these is potentially a simpler problem, both statistically
and computationally, than estimating the entire joint density itself. Let us consider the general
non-parametric conditional density estimation problem. Given the general multivariate density
f(X) = exp(φ(X)) / ∫_𝒳 exp(φ(x))dx, the conditional density of a variable X_s given the rest of the variables X_{−s} is given by f(X_s | X_{−s}) = exp(φ((X_s, X_{−s}))) / ∫_{𝒳_s} exp(φ((x, X_{−s})))dx, which does not have a multi-dimensional integral, but otherwise does not have a computationally amenable form. There has been a line of work on such
conditional density estimation, mirroring developments in multivariate density estimation [9, 18, 23],
but unlike parametric settings, there are no large sample complexity gains with non-parametric
conditional density estimation under general settings. There have also been efforts to use ANOVA
decompositions in a conditional density context [31, 26].
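To make the computational contrast concrete, a small numpy sketch (with a toy, purely illustrative pairwise φ) shows that the node-conditional only ever normalizes over a single coordinate:

```python
import numpy as np

# Sketch: the node-conditional density needs only a ONE-dimensional integral,
# even though the joint normalizer is p-dimensional. The joint log-density phi
# below is an arbitrary illustrative pairwise choice, not a fitted model.
def phi(x):
    return x.sum() + 0.5 * x[0] * x[1]

def conditional_density(s, x_rest, grid):
    # f(x_s | x_-s) ∝ exp(phi(x_s, x_-s)), normalized over the grid for X_s only
    vals = np.array([np.exp(phi(np.insert(x_rest, s, xs))) for xs in grid])
    dx = grid[1] - grid[0]
    return vals / (vals.sum() * dx)

grid = np.linspace(-1.0, 1.0, 1001)
f = conditional_density(0, np.array([0.3, -0.2]), grid)
assert abs(f.sum() * (grid[1] - grid[0]) - 1.0) < 1e-9
# Here phi is increasing in x_0, so the conditional puts more mass on the right.
assert f[-1] > f[0]
```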
In addition to computational and sample complexity caveats, recall that in our context, we would
like to use conditional density estimates to infer a joint multivariate density. A crucial caveat with
using the above estimates to do so is that it is not clear when the estimated node-conditional densities
would be consistent with a joint multivariate density. There has been a line of work on this question
(of when conditional densities are consistent with a joint density) for parametric densities; see [1] for
an overview, with more recent results in [27, 5, 2, 25]. Overall, while estimating node-conditional
densities could be viewed as surrogate estimation of a joint density, arbitrary node-conditional
distributions need not be consistent in general with any joint density. There has however been a line
of work in recent years [3, 28], where it was shown that when the node-conditional distributions
belong to an exponential family, then under certain conditions on their parameterization, there do
exist multivariate densities consistent with the node-conditional densities. In the next section, we
leverage these results towards non-parametric estimation of conditional densities.
3 Conditional Densities of an Exponential Family Form
We first recall the definition of an exponential family in the context of a conditional density.
Definition 1. A conditional density of a random variable Y ∈ 𝒴 given covariates Z := (Z₁, ..., Z_m) ∈ 𝒵 is said to have an exponential family form if it can be written as f(Y | Z) = exp(B(Y)ᵀ E(Z) + C(Y) + D(Z)), for some functions B : 𝒴 → ℝ^k (for some finite integer k > 0), E : 𝒵 → ℝ^k, C : 𝒴 → ℝ and D : 𝒵 → ℝ.
Thus, f(Y | Z) belongs to a finite-dimensional exponential family with sufficient statistics B(Y), base measure exp(C(Y)), and with natural parameter E(Z), where −D(Z) is the log-partition function. Contrast this with a general conditional density f(Y | Z) = exp(h(Y, Z) + C(Y) + D(Z)) with respect to the base measure exp(C(Y)) and −D(Z) being the log-normalization constant, and it can be seen that a conditional density of the exponential family form has its logistic density transform h(Y, Z) factorize as B(Y)ᵀ E(Z).
Consider the case where the sufficient statistic function is real-valued. The non-parametric estimation problem of a conditional density of exponential form then reduces to the estimation of the sufficient statistics function B(·) and the exponential natural parameter function E(·), assuming the base measure C(·) is given. But when would such estimated conditional densities be consistent with a joint density?
To answer this question, we draw upon developments in [28]. Suppose that the node-conditional distributions of each random variable X_s conditioned on the rest of the random variables have the exponential family form as in Definition 1, so that for each s ∈ V

P(X_s | X_{−s}) ∝ exp{E_s(X_{−s}) B_s(X_s) + C_s(X_s)},   (1)

for some arbitrary functions E_s(·), B_s(·), C_s(·) that specify a valid conditional density. Then [28] show that these node-conditional densities are consistent with a unique joint density over the random vector X, that moreover factors according to a set of cliques 𝒞 in the graph G, if and only if the functions {E_s(·)}_{s∈V} specifying the node-conditional distributions have the form E_s(X_{−s}) = θ_s + ∑_{C∈𝒞: s∈C} θ_C ∏_{t∈C, t≠s} B_t(X_t), where {θ_s} ∪ {θ_C}_{C∈𝒞} is a set of parameters. Moreover, the corresponding consistent joint distribution has the following form

P(X) ∝ exp{ ∑_{s∈V} θ_s B_s(X_s) + ∑_{C∈𝒞} θ_C ∏_{s∈C} B_s(X_s) + ∑_{s∈V} C_s(X_s) }.   (2)
In this paper, we are interested in the non-parametric estimation of the Expxorcist class of densities in (2), where we estimate both the finite-dimensional parameters {θ_s} ∪ {θ_C}_{C∈𝒞}, as well as the functions {B_s(X_s)}_{s∈V}. We assume we are given the base measures {C_s(X_s)}_{s∈V}, so that the joint density is with respect to a given product base measure ∏_{s∈V} exp(C_s(X_s)), as is common in the multivariate density estimation literature. Note that this is not a very restrictive assumption. In practice the base measure at each node can be well approximated using the empirical univariate marginal density of that node. We could also extend our algorithm, which we present next, to estimate the base measures along with the sufficient statistic functions.
4 Regularized Conditional Likelihood Estimation for Exponential Family Form Densities
We consider the nonparametric estimation problem of estimating a joint density of the form in (2), focusing on the pairwise case where the factors have size at most k = 2, so that the joint density takes the form

P(X) ∝ exp{ ∑_{s∈V} θ_s B_s(X_s) + ∑_{(s,t)∈E} θ_st B_s(X_s) B_t(X_t) + ∑_{s∈V} C_s(X_s) }.   (3)

As detailed in the previous section, estimating this joint density can be reduced to estimating its node-conditional densities, which take the form

P(X_s | X_{−s}) ∝ exp{ B_s(X_s) (θ_s + ∑_{t∈N_G(s)} θ_st B_t(X_t)) + C_s(X_s) }.   (4)
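As a sanity check on the reduction from the joint (3) to the node-conditionals (4), here is a small numpy sketch for a two-node model; B and the θ values are illustrative, not fitted:

```python
import numpy as np

# Two-node pairwise model (3): conditioning the joint on X_2 = x_2 recovers
# the exponential-family node-conditional (4). B and the thetas are illustrative.
grid = np.linspace(-1.0, 1.0, 401)
dx = grid[1] - grid[0]
B = lambda x: np.sin(np.pi * x)
theta, theta12 = 1.0, 0.8

X1, X2 = np.meshgrid(grid, grid, indexing="ij")
joint = np.exp(theta * B(X1) + theta * B(X2) + theta12 * B(X1) * B(X2))

j = 100                                        # index of the conditioning value
cond_from_joint = joint[:, j] / (joint[:, j].sum() * dx)

# Node-conditional form (4): P(x1 | x2) ∝ exp(B(x1) (theta + theta12 * B(x2))).
direct = np.exp(B(grid) * (theta + theta12 * B(grid[j])))
direct /= direct.sum() * dx

assert np.allclose(cond_from_joint, direct, atol=1e-10)
```

The factor exp(θ B(x₂)) in the joint is constant in x₁ and cancels under normalization, which is exactly why the conditional has the one-dimensional form (4).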
We now introduce some notation which we use in the sequel. Let θ = {θ_s}_{s∈V} ∪ {θ_st}_{s≠t} and θ^s = {θ_s} ∪ {θ_st}_{t∈V\{s}}. Let B = {B_s}_{s∈V} be the set of sufficient statistics. Let 𝒳_s be the domain of X_s, which we assume is bounded, and let L²(𝒳_s) be the Hilbert space of square integrable functions over 𝒳_s with respect to Lebesgue measure. We assume that the sufficient statistics B_s(·) ∈ L²(𝒳_s).
Note that the model in Equation (3) is unidentifiable. To overcome this issue we impose additional constraints on its parameters. Specifically, we require B_s(X_s) to satisfy ∫_{𝒳_s} B_s(X)dX = 0, ∫_{𝒳_s} B_s(X)² dX = 1 and θ_s ≥ 0, ∀s ∈ V.
Optimization objective: Let X_n = {X^(1), ..., X^(n)} be n i.i.d. samples drawn from a joint density of the form in Equation (3), with parameters θ*, B*. And let L_s(θ^s, B; X_n) be the node conditional negative log likelihood at node s

L_s(θ^s, B; X_n) = (1/n) ∑_{i=1}^n [ −B_s(X_s^(i)) (θ_s + ∑_{t∈V\s} θ_st B_t(X_t^(i))) + A(X_{−s}^(i); θ^s, B) ],

where A(X_{−s}; θ^s, B) is the log partition function. To estimate the unknown parameters, we solve the following regularized node conditional log-likelihood estimation problem at each node s ∈ V

min_{θ^s, B} L_s(θ^s, B; X_n) + λ_n ‖θ^s‖₁
s.t. θ_s ≥ 0, ∫_{𝒳_t} B_t(X)dX = 0, ∫_{𝒳_t} B_t(X)² dX = 1 ∀t ∈ V.   (5)
The equality constraints on the norms of the functions B_t(·) make the above optimization problem a difficult one to solve. While the norm constraints on B_t(·), ∀t ∈ V \ {s} can be handled through re-parametrization, the constraint on B_s(·) cannot be handled efficiently. To make the optimization more amenable to numerical optimization techniques, we solve a closely related optimization problem. At each node s ∈ V, we consider the following re-parametrization of B: B_s(X_s) → θ_s B_s(X_s), B_t(X_t) → (θ_st/θ_s) B_t(X_t), ∀t ∈ V \ {s}. With a slight abuse of notation we redefine L_s using this re-parametrization as

L_s(B; X_n) = (1/n) ∑_{i=1}^n [ −B_s(X_s^(i)) (1 + ∑_{t∈V\s} B_t(X_t^(i))) + A(X_{−s}^(i); B) ],   (6)

where A(X_{−s}; B) is the log partition function. We solve the following optimization problem, which is closely related to the original optimization in Equation (5):

min_B L_s(B; X_n) + λ_n ∑_{t∈V} √(∫_{𝒳_t} B_t(X)² dX)
s.t. ∫_{𝒳_t} B_t(X)dX = 0 ∀t ∈ V.   (7)
For more details on the relation between (5) and (7), please refer to Appendix.
Algorithm: We now present our algorithm for optimization of (7). In the sequel, for simplicity, we assume that the domains 𝒳_t of the random variables X_t are all the same and equal to 𝒳. In order to estimate the functions B_t, we expand them over a uniformly bounded, orthonormal basis {φ_k(·)}_{k=0}^∞ of L²(𝒳) with φ₀(·) ≡ 1. Expansion of the functions B_t(·) over this basis yields

B_t(X) = ∑_{k=1}^m α_{t,k} φ_k(X) + Δ_{t,m}(X), where Δ_{t,m}(X) = α_{t,0} φ₀(X) + ∑_{k=m+1}^∞ α_{t,k} φ_k(X).

Note that the constraint ∫_𝒳 B_t(X)dX = 0 in Equation (7) translates to α_{t,0} = 0. To convert the infinite dimensional optimization problem in (7) into a finite dimensional problem, we truncate the basis expansion to the top m terms and approximate B_t(·) as ∑_{k=1}^m α_{t,k} φ_k(·). The optimization problem in Equation (7) can then be rewritten as
min_{α_m} L_{s,m}(α_m; X_n) + λ_n ∑_{t∈V} ‖α_{t,m}‖₂,   (8)

where α_{t,m} = {α_{t,k}}_{k=1}^m, α_m = {α_{t,m}}_{t∈V} and L_{s,m} is defined as

L_{s,m}(α_m; X_n) = (1/n) ∑_{i=1}^n [ −(∑_{k=1}^m α_{s,k} φ_k(X_s^(i))) (1 + ∑_{t∈V\{s}} ∑_{l=1}^m α_{t,l} φ_l(X_t^(i))) + A(X_{−s}^(i); α_m) ].
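The truncation step can be sketched numerically; the target function and the cosine basis below are illustrative (the experiments in Section 7 also use a cosine basis):

```python
import numpy as np

# Sketch of the truncation behind (8): expand a mean-zero function in the
# orthonormal cosine basis phi_k(x) = sqrt(2) cos(k pi x) on [0, 1] (the
# constant phi_0 is dropped since alpha_{t,0} = 0) and keep the top m terms.
grid = np.linspace(0.0, 1.0, 4001)
dx = grid[1] - grid[0]
phi = lambda k: np.sqrt(2.0) * np.cos(k * np.pi * grid)
target = np.sin(2 * np.pi * grid)            # illustrative function to expand

def l2_truncation_error(m):
    ks = range(1, m + 1)
    alpha = [float((target * phi(k)).sum() * dx) for k in ks]  # basis coefficients
    approx = sum(a * phi(k) for k, a in zip(ks, alpha))
    return float(np.sqrt(((target - approx) ** 2).sum() * dx))

# Keeping more basis functions monotonically shrinks the truncation error.
assert l2_truncation_error(30) < l2_truncation_error(5) < l2_truncation_error(1)
assert l2_truncation_error(30) < 0.05
```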
Iterative minimization of (8): Note that the objective in (8) is non-convex. In this work, we use a simple alternating minimization technique for its optimization. In this technique, we alternately minimize over α_{s,m} and {α_{t,m}}_{t∈V\s} while fixing the other parameters. The resulting optimization problem in each of the alternating steps is convex. We use Proximal Gradient Descent to optimize these sub-problems. To compute the objective and its gradients, we need to numerically evaluate the one-dimensional integrals in the log partition function. To do this, we choose a uniform grid of points over the domain and use quadrature rules to approximate the integrals.
Convergence: Although (8) is non-convex, we can show that under certain conditions on the objective function, the alternating minimization procedure converges to the global minimum. In recent work, [32] analyze alternating minimization for low rank matrix factorization problems and show that it converges to a global minimum if the sequence of convex problems is strongly convex and satisfies certain other regularity conditions. The analysis of [32] can be extended to show global convergence of alternating minimization for (8).
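The proximal step inside each alternating update has a closed form: the prox of the group penalty λ_n ∑_t ‖α_{t,m}‖₂ is group soft-thresholding. A minimal sketch with hypothetical coefficient values (not the full solver):

```python
import numpy as np

# Sketch of the proximal step used inside the alternating minimization: the
# prox of the group penalty lam * ||alpha_t||_2 is group soft-thresholding.
# The coefficient values below are hypothetical, for illustration only.
def group_soft_threshold(alpha_t, lam):
    norm = np.linalg.norm(alpha_t)
    if norm <= lam:
        return np.zeros_like(alpha_t)     # whole group (edge) is pruned
    return (1.0 - lam / norm) * alpha_t   # otherwise shrink toward zero

def prox_gradient_step(alpha, grad, step, lam):
    z = alpha - step * grad               # gradient step on the smooth loss
    return np.array([group_soft_threshold(g, step * lam) for g in z])

alpha = np.array([[0.5, 0.1], [0.01, 0.02]])  # two groups of m = 2 coefficients
out = prox_gradient_step(alpha, np.zeros_like(alpha), step=1.0, lam=0.05)
assert np.allclose(out[1], 0.0)                           # weak group zeroed out
assert np.linalg.norm(out[0]) < np.linalg.norm(alpha[0])  # strong group shrunk
```

Zeroing an entire group {α_{t,k}}_{k=1}^m corresponds to removing the edge (s, t) from the estimated neighborhood, which is how the penalty induces graph sparsity.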
5 Statistical Properties
In this section we provide parameter estimation error rates for the node conditional estimator in
Equation (8). Note that these rates are for the re-parameterized model described in Equation (6) and
can be easily translated to guarantees on the original model described in Equations (3), (4).
Notation: Let B₂(x, r) = {y : ‖y − x‖₂ ≤ r} be the ℓ₂ ball with center x and radius r. Let {B_t*(·)}_{t∈V} be the true functions of the re-parametrized model, which we would like to estimate from the data. Denote the basis expansion coefficients of B_t(·) with respect to the orthonormal basis {φ_k(·)}_{k=0}^∞ by α_t, which is an infinite dimensional vector, and let α_t* be the coefficients of B_t*(·). And let α_{t,m} be the coefficients corresponding to the top m basis functions in the basis expansion of B_t(·). Note that ∫ B_t(X)² dX = ‖α_t‖₂². Let α = {α_t}_{t∈V} and α_m = {α_{t,m}}_{t∈V}. Let L̄_{s,m}(α_m) = E[L_{s,m}(α_m; X_n)] be the population version of the sample loss defined in Equation (8). We will often omit X_n from L_{s,m}(α_m; X_n) when clear from the context. We let (α_t − α_{t,m}) be the difference between the infinite dimensional vector α_t and the vector obtained by appropriately padding α_{t,m} with zeros. Finally, we define the norm R(·) as R(α_m) = ∑_{t∈V} ‖α_{t,m}‖₂ and its dual as R*(α_m) = sup_{t∈V} ‖α_{t,m}‖₂. The norms on the infinite dimensional vector α are similarly defined.
We now state our key assumption on the loss function L_{s,m}(·). This assumption imposes a strong curvature condition on L_{s,m} along certain directions in a ball around α_m*.
Assumption 1. There exist r_m > 0 and constants c, κ > 0 such that for any Δ_m ∈ B₂(0, r_m) the gradient of the sample loss L_{s,m} satisfies: ⟨∇L_{s,m}(α_m* + Δ_m) − ∇L_{s,m}(α_m*), Δ_m⟩ ≥ κ‖Δ_m‖₂² − c √(m log(p)/n) R(Δ_m).
Similar assumptions are increasingly common in the analysis of non-convex estimators; see [19] and references therein. We are now ready to state our results, which give the parameter estimation error rates; the proofs can be found in the Appendix. We first provide a deterministic bound on the error ‖α̂_m − α_m*‖₂ in terms of the random quantity R*(∇L_{s,m}(α_m*)). We derive probabilistic results in the subsequent corollaries.
Theorem 2. Let N_s be the true neighborhood of node s, with |N_s| = d. Suppose L_{s,m} satisfies Assumption 1. If the regularization parameter λ_n is chosen such that λ_n ≥ 2R*(∇L_{s,m}(α_m*)) + 2c √(m log(p)/n), then any stationary point α̂_m of (8) in B₂(α_m*, r_m) satisfies:

‖α̂_m − α_m*‖₂ ≤ (6√2/κ) √d λ_n.
We now provide a set of sufficient conditions under which the random quantity R*(∇L_{s,m}(α_m*)) can be bounded.
Assumption 2. There exists a constant L > 0 such that the gradient of the population loss L̄_{s,m} at α_m* satisfies: R*(∇L̄_{s,m}(α_m*)) ≤ L·R*(α* − α_m*).
Corollary 3. Suppose the conditions in Theorem 2 are satisfied. Moreover, let φ̄ = sup_{i∈ℕ, X∈𝒳} |φ_i(X)| and B̄_m = sup_{t∈V, X∈𝒳} |∑_{i=1}^m α*_{t,i} φ_i(X)|. Suppose L_{s,m} satisfies Assumption 2. If the regularization parameter λ_n is chosen such that λ_n ≥ 2L·R*(α* − α_m*) + c φ̄ B̄_m √(m d² log(p)/n), then with probability at least 1 − 2m/p² any stationary point α̂_m of (8) in B₂(α_m*, r_m) satisfies:

‖α̂_m − α_m*‖₂ ≤ (6√2/κ) √d λ_n.
Theorem 2 and Corollary 3 bound the error of the estimated coefficients in the truncated expansion. The approximation error of the truncated expansion itself depends on the function space assumption, as well as the basis chosen, but can simply be combined with the statement of the above corollary to derive the overall error. As an instance, we present a corollary below for the specific case of the Sobolev space of order two, and the trigonometric basis.
Corollary 4. Suppose the conditions in Corollary 3 are satisfied. Moreover, suppose the true functions B_t*(·) lie in a Sobolev space of order two. Let {φ_k}_{k=0}^∞ be the trigonometric basis of L²(𝒳). If the optimization problem (8) is solved with λ_n = c₁ (d² log(p)/n)^{2/5} and m = c₂ (n/(d² log(p)))^{1/5}, then with probability at least 1 − 2m/p² any stationary point α̂_m of (8) in B₂(α_m*, r_m) satisfies:

‖α̂_m − α*‖₂ ≤ c₃ (d^{13/4} log(p)/n)^{2/5},

where c₁, c₂, c₃ depend on L, κ, φ̄, B̄_m.
Discussion on Assumption 1: We now provide a set of sufficient conditions which ensure the restricted strong convexity (RSC) condition. Suppose the population risk L̄_{s,m}(·) is strongly convex in a ball of radius r_m around α_m*:

⟨∇L̄_{s,m}(α_m* + Δ_m) − ∇L̄_{s,m}(α_m*), Δ_m⟩ ≥ κ‖Δ_m‖₂²  ∀Δ_m ∈ B₂(0, r_m).   (9)

Moreover, suppose the empirical gradients converge uniformly to the population gradients:

sup_{α_m ∈ B₂(α_m*, r_m)} R*(∇L_{s,m}(α_m) − ∇L̄_{s,m}(α_m)) ≤ c √(m log p / n).   (10)

For example, this condition holds with high probability when the gradient of L_{s,m}(α_m) w.r.t. α_{t,m}, for any t ∈ [p], is a sub-Gaussian process. Equations (9), (10) are easier to check and ensure that L_{s,m}(α_m) satisfies the RSC property in Assumption 1.
6 Connections to Exponential Family MRF Copulas
The Expxorcist class of models could be viewed as being closely related to an exponential family MRF [28] copula density. Consider the parametric exponential family MRF joint density in (3): P_{MRF;θ}(X) ∝ exp{ ∑_{s∈V} θ_s B_s(X_s) + ∑_{(s,t)∈E(G)} θ_st B_s(X_s) B_t(X_t) + ∑_{s∈V} C_s(X_s) }, where the distribution is indexed by the finite-dimensional parameters {θ_s}_{s∈V}, {θ_st}_{(s,t)∈E}, and where, in contrast to the previous sections, we assume we are given the sufficient statistics functions {B_s(·)}_{s∈V} as well as the node-wise base measures {C_s(·)}_{s∈V}. Now consider the following non-parametric problem. Given a random vector X, suppose we are interested in estimating monotonic node-wise functions {f_s(X_s)}_{s∈V} such that (f₁(X₁), ..., f_p(X_p)) follows P_{MRF;θ} for some θ. Letting f(X) = (f₁(X₁), ..., f_p(X_p)), we have that P(f(X)) = P_{MRF;θ}(f(X)), so that the density of X can be written as P(X) ∝ P(f(X)) ∏_{s∈V} f_s′(X_s). This is now a semi-parametric estimation problem, where the unknowns are the functions {f_s(X_s)}_{s∈V} as well as the finite-dimensional parameters θ. To simplify this density, suppose we assume that the given node-wise sufficient statistics are linear, so that B_s(z) = z for all s ∈ V; the density then reduces to

P(X) ∝ exp{ ∑_{s∈V} θ_s f_s(X_s) + ∑_{(s,t)∈E(G)} θ_st f_s(X_s) f_t(X_t) + ∑_{s∈V} (C_s(f_s(X_s)) + log f_s′(X_s)) }.   (11)

In contrast, the Expxorcist nonparametric exponential family graphical model takes the form

P(X) ∝ exp{ ∑_{s∈V} θ_s f_s(X_s) + ∑_{(s,t)∈E(G)} θ_st f_s(X_s) f_t(X_t) + ∑_{s∈V} C_s(X_s) }.   (12)
It can be seen that the two densities have very similar forms, except that the density in (11) has a more complex base measure that depends on the unknown functions {f_s}_{s∈V}, and importantly the functions {f_s}_{s∈V} in (11) are monotonic.
The class of densities in (11) can be cast as an exponential family MRF copula density. Suppose we denote the CDF of the parametric exponential family MRF joint density by F_{MRF;θ}(X), with node-wise marginal CDFs F_{MRF;θ,s}(X_s). Then the marginal CDF of the density (11) can be written as F_s(x_s) = P[X_s ≤ x_s] = P[f_s(X_s) ≤ f_s(x_s)] = F_{MRF;θ,s}(f_s(x_s)), so that

f_s(x_s) = F_{MRF;θ,s}^{−1}(F_s(x_s)).   (13)

It then follows that F(X) = F_{MRF;θ}(F_{MRF;θ,1}^{−1}(F₁(X₁)), ..., F_{MRF;θ,p}^{−1}(F_p(X_p))), where F(X) is the CDF of density (11). By letting F_{COP;θ}(U) = F_{MRF;θ}(F_{MRF;θ,1}^{−1}(U₁), ..., F_{MRF;θ,p}^{−1}(U_p)) be the exponential family MRF copula density function, we see that the CDF of X is precisely F(X) = F_{COP;θ}(F₁(X₁), ..., F_p(X_p)), which is specified by the marginal CDFs {F_s(X_s)}_{s∈V} and the copula density F_{COP;θ} corresponding to the exponential family MRF density. In other words, the non-parametric extension in (11) of the exponential family MRF densities is precisely an exponential family MRF copula density. This development thus generalizes the non-parametric extension of Gaussian MRF densities via the Gaussian copula nonparanormal densities [17]. The caveats with the copula density, however, are two-fold: the node-wise functions are restricted to be monotonic, but also the estimation of these as in (13) requires the estimation of inverses of marginal CDFs of an exponential family MRF, which is intractable in general. Thus, minor differences in the expressions of the Expxorcist density (12) and an exponential family MRF copula density (11) nonetheless have seemingly large consequences for tractable estimation of these densities from data.
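When the target marginal CDF is tractable, the transform (13) is just an inverse-CDF composition. A small numpy sketch, using a standard logistic target marginal as a stand-in for F_{MRF;θ,s} because its inverse CDF is elementary (the nonparanormal of [17] is the same construction with a Gaussian target):

```python
import numpy as np

# Sketch of the marginal transform (13): f_s = F_target^{-1}(F_s) pushes X_s
# forward to a prescribed target marginal. The standard logistic target here
# is an illustrative stand-in for the (generally intractable) F_{MRF;theta,s}.
rng = np.random.default_rng(0)
x = rng.exponential(size=50000)      # X_s with a non-Gaussian Exp(1) marginal
u = 1.0 - np.exp(-x)                 # F_s(x): probability integral transform
z = np.log(u / (1.0 - u))            # logistic inverse CDF applied to F_s(x)

# z now follows a standard logistic law: mean 0, std pi/sqrt(3).
assert abs(z.mean()) < 0.05
assert abs(z.std() - np.pi / np.sqrt(3)) < 0.05
```

The hard part in the MRF copula case is precisely that F_{MRF;θ,s} and its inverse have no such closed form.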
7 Experiments
We present experimental results on both synthetic and real datasets. We compare our estimator,
Expxorcist, with the Nonparanormal model of [17] and Gaussian Graphical Model (GGM). We use
glasso [7] to estimate GGM and the two step estimator of [17] to estimate Nonparanormal model.
7.1 Synthetic Experiments
Data: We generated synthetic data from the Expxorcist model with chain and grid graph structures. For both graph structures, we set θ_s = 1, ∀s ∈ V, θ_st = 1, ∀(s, t) ∈ E, and fix the domain 𝒳 to [−1, 1]. We experimented with two choices for the sufficient statistics B_s(X): sin(4πX) and exp(−20(X − 0.5)²) + exp(−20(X + 0.5)²) − 1, and picked the log base measure C_s(X) to be 0. The grid graph we considered has a 10 × (p/10) structure. We used Gibbs sampling to sample data from these models. We also generated data from a Gaussian distribution with chain and grid graph structures. To generate this data we set the off-diagonal non-zero entries of the inverse covariance matrix to 0.49 for the chain graph and 0.25 for the grid graph, and the diagonal entries to 1.
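A minimal numpy sketch of such a Gibbs sampler on a chain graph, drawing each coordinate from the node-conditional (4) discretized on a grid (same θ = 1 settings; a simplified stand-in, not the exact experimental code):

```python
import numpy as np

# Sketch: Gibbs sampling from the pairwise model (3) on a chain, drawing each
# X_s from its node-conditional (4) by inverse-CDF sampling on a grid.
rng = np.random.default_rng(1)
p = 5
grid = np.linspace(-1.0, 1.0, 501)
B = lambda x: np.sin(4 * np.pi * x)
nbrs = {s: [t for t in (s - 1, s + 1) if 0 <= t < p] for s in range(p)}  # chain

def gibbs_sweep(x):
    for s in range(p):
        nat = 1.0 + sum(B(x[t]) for t in nbrs[s])   # theta_s = theta_st = 1
        w = np.exp(B(grid) * nat)                   # node-conditional (4), up to Z
        cdf = np.cumsum(w)
        cdf /= cdf[-1]
        x[s] = grid[np.searchsorted(cdf, rng.random())]
    return x

x = np.zeros(p)
samples = np.array([gibbs_sweep(x).copy() for _ in range(200)])
assert samples.shape == (200, 5)
assert np.all((samples >= -1.0) & (samples <= 1.0))
```

In practice one would discard an initial burn-in and possibly thin the chain; this sketch only shows the conditional-sampling mechanics.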
Evaluation Metric: We compared the performance of Expxorcist against baselines, on graph
structure recovery, using ROC curves. The ROC curve plots the true positive rate (TPR) against false
positive rate (FPR) over different choices of regularization parameter, where TPR is the fraction of
correctly detected edges and FPR is the fraction of mis-identified non edges.
Experiment Settings: For this experiment we set p = 50 and n ∈ {100, 200, 500}, and varied the regularization parameter λ from 10⁻² to 1. To fit the data to the non-parametric model (3), we used the cosine basis and truncated the basis expansion to the top 30 terms. In practice, one could choose the number of basis functions (m) based on domain knowledge (e.g. "smooth" functions), or in the absence of such knowledge, one could use hold-out validation/cross validation. Given N̂(s), the estimated neighborhood for node s, we estimated the overall graph structure as ∪_{s∈V} ∪_{t∈N̂(s)} {(s, t)}. To reduce the variance in the ROC plots, we averaged results over 10 repetitions.
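The TPR/FPR computation on edge sets can be sketched directly (the adjacency matrices below are hypothetical, for illustration):

```python
import numpy as np

# Sketch: TPR/FPR for graph recovery given true and estimated adjacency matrices.
def tpr_fpr(true_adj, est_adj):
    p = true_adj.shape[0]
    iu = np.triu_indices(p, k=1)                 # count each node pair once
    t = true_adj[iu].astype(bool)
    e = est_adj[iu].astype(bool)
    tpr = (t & e).sum() / max(t.sum(), 1)        # correctly detected edges
    fpr = (~t & e).sum() / max((~t).sum(), 1)    # mis-identified non-edges
    return tpr, fpr

# Chain graph on 4 nodes; the estimate finds 2 of 3 edges plus 1 false edge.
true_adj = np.zeros((4, 4), int)
for s in range(3):
    true_adj[s, s + 1] = true_adj[s + 1, s] = 1
est_adj = np.zeros((4, 4), int)
for s, t in [(0, 1), (1, 2), (0, 3)]:
    est_adj[s, t] = est_adj[t, s] = 1

tpr, fpr = tpr_fpr(true_adj, est_adj)
assert abs(tpr - 2 / 3) < 1e-12
assert abs(fpr - 1 / 3) < 1e-12
```

Sweeping λ traces out one (FPR, TPR) point per value, which is how the ROC curves in Figure 1 are produced.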
Results: Figure 1 shows the ROC plots obtained from this experiment. Due to lack of space, we present more experimental results in the Appendix. It can be seen that Expxorcist has much better performance on non-Gaussian data. On these datasets, even at n = 500 the baselines chose edges at random. This suggests that in the presence of multiple modes and fat tails, Expxorcist is a better model. Expxorcist has slightly poorer performance than the baselines on Gaussian data. However, this is expected because it learns a broader family of distributions than the Nonparanormal.
7.2 Futures Intraday Data
We now present our analysis on the Futures price returns. This dataset was downloaded from
http://www.kibot.com/. We focus on the Top-26 most liquid instruments being traded at the
Chicago Mercantile Exchange (CME). The instruments span different sectors like Energy, Agriculture,
Currencies, Equity Indices, Metals and Interest Rates. We focus on the hours of maximum liquidity
(9am Eastern to 3pm Eastern) and look at the 1 minute price returns. The return distribution is a
mixture of 1 minute returns with the overnight return. Since overnight returns tend to be bigger than
the 1 minute return within the day, the return distribution is multimodal and fat-tailed. We treat each
instrument as a random variable and the 1 minute returns as independent samples drawn from these
random variables. We use the data collected in February 2010 as training data and data from March
2010 as held out data for tuning parameter selection. After removing samples with missing entries
we are left with 894 training and 650 held out data samples. We fit Expxorcist and baselines on this
data with the same parameter settings described above. For each of these models, we select the best
tuning parameter through log likelihood on held out data. However, this criterion resulted in complete
graphs for Nonparanormal and GGM (325 edges) and a relatively sparser graph for Expxorcist (168
edges). So for a better comparison of these models, we selected tuning parameters for each of the
models such that the resulting graphs have almost the same number of edges. Figure 2 shows the
Figure 1: ROC plots from synthetic experiments. Top and bottom rows show plots for chain and grid graphs respectively. Left column shows plots for data generated from our non-parametric model with B_s(X) = sin(X), n = 500, and center column shows plots for the other choice of sufficient statistic with n = 500. Right column shows plots for Gaussian data with n = 200.
(a) Nonparanormal (b) Expxorcist
Figure 2: Graph structures learned for the Futures Intraday Data. The Expxorcist graph shown here was obtained by selecting λ = 0.1. Nodes are colored based on their categories. Edge thickness is proportional to the magnitude of the interaction.
learned graphs for one such choice of tuning parameters, which resulted in ≈ 52 edges in the graphs.
Nonparanormal and GGM resulted in very similar graphs, so we only present Nonparanormal here. It
can be seen that Expxorcist is able to identify the clusters better than Nonparanormal. More detailed
graphs and comparison with GGM can be found in Appendix.
8 Conclusion
In this work we considered the problem of non-parametric density estimation and introduced Expxorcist, a new family of non-parametric graphical models. Our approach relies on a simple function
space assumption that the conditional distribution of each variable conditioned on the other variables
has a non-parametric exponential family form. We proposed an estimator for Expxorcist that is
computationally efficient and comes with statistical guarantees. Our empirical results suggest that, in
the presence of multiple modes and fat tails in the data, our non-parametric model is a better choice
than the Nonparanormal model of [17].
9 Acknowledgement
A.S. and P.R. acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803,
IIS-1447574, DMS-1264033, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS
Initiative to Support Research at the Interface of the Biological and Mathematical Sciences. M. K.
acknowledges support by an IBM Corporation Faculty Research Fund at the University of Chicago
Booth School of Business.
References
[1] Barry C. Arnold, Enrique Castillo, and José María Sarabia. Conditionally specified distributions: an introduction. Stat. Sci., 16(3):249–274, 2001. With comments and a rejoinder by the authors.
[2] Patrizia Berti, Emanuela Dreassi, and Pietro Rigo. Compatibility results for conditional distributions. J. Multivar. Anal., 125:190–203, 2014.
[3] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. J. R. Stat. Soc. B, pages 192–236, 1974.
[4] Stéphane Canu and Alex Smola. Kernel methods and the exponential family. Neurocomputing, 69(7–9):714–720, Mar 2006.
[5] Hua Yun Chen. Compatibility of conditionally specified models. Statist. Probab. Lett., 80(7–8):670–677, 2010.
[6] Ronaldo Dias. Density estimation via hybrid splines. J. Statist. Comput. Simulation, 60(4):277–293, 1998.
[7] Jerome H. Friedman, Trevor J. Hastie, and Robert J. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[8] I. J. Good and R. A. Gaskins. Nonparametric roughness penalties for probability densities. Biometrika, 58:255–277, 1971.
[9] Chong Gu. Smoothing spline density estimation: conditional distribution. Stat. Sinica, 5(2):709–726, 1995.
[10] Chong Gu, Yongho Jeon, and Yi Lin. Nonparametric density estimation in high-dimensions. Stat. Sinica, 23:1131–1153, 2013.
[11] Chong Gu and Chunfu Qiu. Smoothing spline density estimation: theory. Ann. Stat., 21(1):217–234, 1993.
[12] Chong Gu and Jingyuan Wang. Penalized likelihood density estimation: direct cross-validation and scalable approximation. Stat. Sinica, 13(3):811–826, 2003.
[13] Ali Jalali, Pradeep Ravikumar, Vishvas Vasuki, and Sujay Sanghavi. On learning discrete graphical models using group-sparse regularization. In AISTATS, pages 378–387, 2011.
[14] Yongho Jeon and Yi Lin. An effective method for high-dimensional log-density ANOVA estimation, with application to nonparametric graphical model building. Stat. Sinica, 16(2):353–374, 2006.
[15] Tom Leonard. Density estimation, stochastic processes and prior information. J. R. Stat. Soc. B, 40(2):113–146, 1978. With discussion.
[16] Han Liu, Fang Han, Ming Yuan, John D. Lafferty, and Larry A. Wasserman. High-dimensional semiparametric Gaussian copula graphical models. Ann. Stat., 40(4):2293–2326, 2012.
[17] Han Liu, John D. Lafferty, and Larry A. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. J. Mach. Learn. Res., 10:2295–2328, 2009.
[18] Benoît R. Mâsse and Young K. Truong. Conditional logspline density estimation. Canad. J. Statist., 27(4):819–832, 1999.
[19] Song Mei, Yu Bai, and Andrea Montanari. The landscape of empirical risk for non-convex losses. arXiv preprint arXiv:1607.06534, 2016.
[20] Pradeep Ravikumar, Martin J. Wainwright, John D. Lafferty, et al. High-dimensional Ising model selection using l1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[21] B. W. Silverman. On the estimation of a probability density function by the maximum penalized likelihood method. Ann. Stat., 10(3):795–810, 1982.
[22] T. P. Speed and H. T. Kiiveri. Gaussian Markov distributions over finite graphs. The Annals of Statistics, pages 138–150, 1986.
[23] Charles J. Stone, Mark H. Hansen, Charles Kooperberg, and Young K. Truong. Polynomial splines and their tensor products in extended linear modeling. Ann. Stat., 25(4):1371–1470, 1997. With discussion and a rejoinder by the authors and Jianhua Z. Huang.
[24] Siqi Sun, Jinbo Xu, and Mladen Kolar. Learning structured densities via infinite dimensional exponential families. In Advances in Neural Information Processing Systems, pages 2287–2295, 2015.
[25] Cristiano Varin, Nancy Reid, and David Firth. An overview of composite likelihood methods. Stat. Sinica, 21(1):5–42, 2011.
[26] Arend Voorman, Ali Shojaie, and Daniela M. Witten. Graph estimation with joint additive models. Biometrika, 101(1):85–101, Mar 2014.
[27] Yuchung J. Wang and Edward H. Ip. Conditionally specified continuous distributions. Biometrika, 95(3):735–746, 2008.
[28] Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, and Zhandong Liu. Graphical models via univariate exponential family distributions. Journal of Machine Learning Research, 16(1):3813–3847, 2015.
[29] Zhuoran Yang, Yang Ning, and Han Liu. On semiparametric exponential family graphical models. arXiv preprint arXiv:1412.8697, 2014.
[30] Xiaotong Yuan, Ping Li, Tong Zhang, Qingshan Liu, and Guangcan Liu. Learning additive exponential family graphical models via ℓ_{2,1}-norm regularized m-estimation. In Advances in Neural Information Processing Systems, pages 4367–4375, 2016.
[31] Hao Helen Zhang and Yi Lin. Component selection and smoothing for nonparametric regression in exponential families. Stat. Sinica, 16(3):1021–1041, 2006.
[32] Tuo Zhao, Zhaoran Wang, and Han Liu. Nonconvex low rank matrix factorization via inexact first order oracle. Advances in Neural Information Processing Systems, 2015.
Improved Graph Laplacian via Geometric Consistency
Dominique C. Perrault-Joncas
Google, Inc.
[email protected]
Marina Meilă
Department of Statistics
University of Washington
[email protected]
James McQueen
Amazon
[email protected]
Abstract
In all manifold learning algorithms and tasks, setting the kernel bandwidth ε used to construct the graph Laplacian is critical. We address this problem by choosing
a quality criterion for the Laplacian, that measures its ability to preserve the
geometry of the data. For this, we exploit the connection between manifold
geometry, represented by the Riemannian metric, and the Laplace-Beltrami operator.
Experiments show that this principled approach is effective and robust.
1 Introduction
Manifold learning and manifold regularization are popular tools for dimensionality reduction and
clustering [1, 2], as well as for semi-supervised learning [3, 4, 5, 6] and modeling with Gaussian
Processes [7]. Whatever the task, a manifold learning method requires the user to provide an external parameter, called "bandwidth" or "scale" ε, that defines the size of the local neighborhood.
More formally put, a common challenge in semi-supervised and unsupervised manifold learning
lies in obtaining a "good" graph Laplacian estimator L. We focus on the practical problem of optimizing the parameters used to construct L and, in particular, the bandwidth ε. As we see empirically, since the Laplace-Beltrami operator on a manifold is intimately related to the geometry of the manifold, our estimator for ε has advantages even in methods that do not explicitly depend on L.
In manifold learning, there has been sustained interest in determining the asymptotic properties of L [8, 9, 10, 11]. The most relevant is [12], which derives the optimal rate for ε w.r.t. the sample size N:

ε² = C(M) N^{−1/(3+d/2)},   (1)
with d denoting the intrinsic dimension of the data manifold M. The problem is that C(M) is a
constant that depends on the yet unknown data manifold, so it is rarely known in practice.
Considerably fewer studies have focused on the parameters used to construct L in a finite sample
problem. A common approach is to "tune" parameters by cross-validation in the semi-supervised context. However, in an unsupervised problem like non-linear dimensionality reduction, there is
no context in which to apply cross-validation. While several approaches [13, 14, 15, 16] may yield
a usable parameter, they generally do not aim to improve L per se and offer no geometry-based
justification for its selection.
In this paper, we present a new, geometrically inspired approach to selecting the bandwidth parameter
of L for a given data set. Under the data manifold hypothesis, the Laplace-Beltrami operator ∆_M of the data manifold M contains all the intrinsic geometry of M. We set out to exploit this fact by
comparing the geometry induced by the graph Laplacian L with the local data geometry and choosing the value of ε for which these two are closest.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Background: Heat Kernel, Laplacian and Geometry
Our paper builds on two previous sets of results: 1) the construction of L that is consistent for ∆_M when the sample size N → ∞ under the data manifold hypothesis (see [17]); and 2) the relationship between ∆_M and the Riemannian metric g on a manifold, as well as the estimation of g (see [18]).
Construction of the graph Laplacian. Several methods to construct L have been suggested (see [10, 11]). The one we present, due to [17], guarantees that, if the data are sampled from a manifold M, L converges to ∆_M:
Given a set of points D = {x_1, . . . , x_N} in high-dimensional Euclidean space ℝ^r, construct a weighted graph G = (D, W) over them, with W = [W_ij]_{i,j=1:N}. The weight W_ij between x_i and x_j is the heat kernel [1]

W_ij ≡ w_ε(x_i, x_j) = exp( −‖x_i − x_j‖² / (2ε²) ),   (2)

with ε a bandwidth parameter fixed by the user. Next, construct L = [L_ij]_{i,j} of G by
t_i = Σ_j W_ij,   W′_ij = W_ij / (t_i t_j),   t′_i = Σ_j W′_ij,   and   L_ij = W′_ij / t′_j.   (3)
Equation (3) represents the discrete version of the renormalized Laplacian construction from [17]. Note that t_i, t′_i, W′, L all depend on the bandwidth ε via the heat kernel.
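To make the construction concrete, here is a minimal NumPy sketch of (2)–(3). The function names are ours, and the final normalization follows one standard reading of the renormalized (Coifman–Lafon-style) construction, so treat it as illustrative rather than the authors' exact code.

```python
import numpy as np

def heat_kernel(X, eps):
    """Pairwise heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / (2 eps^2)), cf. eq. (2)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * eps ** 2))

def renormalized_laplacian(W):
    """Renormalized graph Laplacian, cf. eq. (3)."""
    t = W.sum(axis=1)            # t_i = sum_j W_ij
    Wp = W / np.outer(t, t)      # W'_ij = W_ij / (t_i t_j)
    tp = Wp.sum(axis=1)          # t'_i = sum_j W'_ij
    L = Wp / tp[None, :]         # L_ij = W'_ij / t'_j
    return L
```

Every quantity here (t, W′, t′, L) inherits its dependence on ε through the single call to `heat_kernel`.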
Estimation of the Riemannian metric. We follow [18] in this step. A Riemannian manifold (M, g)
is a smooth manifold M endowed with a Riemannian metric g; the metric g at point p ∈ M is a scalar product over the vectors in T_pM, the tangent subspace of M at p. In any coordinate representation of M, g_p ≡ G(p) - the Riemannian metric at p - represents a positive definite matrix¹ of dimension d
equal to the intrinsic dimension of M. We say that the metric g encodes the geometry of M because g determines the volume element √(det G(x)) dx for any integration over M, and the line element √( (dx/dt)ᵀ G(x) (dx/dt) ) dt for computing distances along a curve x(t) ∈ M.
If we assume that the data we observe (in ℝ^r) lies on a manifold, then under rotation of the original
coordinates, the metric G(p) is the unit matrix of dimension d padded with zeros up to dimension r.
When the data is mapped to another coordinate system - for instance by a manifold learning algorithm
that performs non-linear dimension reduction - the matrix G(p) changes with the coordinates to
reflect the distortion induced by the mapping (see [18] for more details).
Proposition 2.1 Let x denote local coordinate functions of a smooth Riemannian manifold (M, g) of dimension d and ∆_M the Laplace-Beltrami operator defined on M. Then H(p) = (G(p))⁻¹, the (matrix) inverse of the Riemannian metric at point p, is given by

(H(p))_{kj} = ½ ∆_M [ (x^k − x^k(p)) (x^j − x^j(p)) ] |_{x=x(p)},  with k, j = 1, . . . , d.   (4)
Note that the inverse matrices H(p), p ∈ M, being symmetric and positive definite, also define a metric h called the cometric on M. Proposition 2.1 says that the cometric is given by applying the ∆_M operator to the functions φ_{kj} = (x^k − x^k(p))(x^j − x^j(p)), where x^k, x^j denote coordinates k, j seen as functions on M. A converse theorem [19] states that g (or h) uniquely determines ∆_M.
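As a quick sanity check of Proposition 2.1 (our own verification, not from the paper): take M = ℝ^d with the Euclidean metric, so that ∆_M = Σ_k ∂²/∂(x^k)², and apply the product rule ∆(fg) = f∆g + g∆f + 2⟨∇f, ∇g⟩ with the linear functions f = x^k − x^k(p) and g = x^j − x^j(p):

```latex
\frac{1}{2}\,\Delta_{\mathcal{M}}\!\Big[\big(x^k - x^k(p)\big)\big(x^j - x^j(p)\big)\Big]\Big|_{x=x(p)}
  = \frac{1}{2}\Big(0 + 0 + 2\,\big\langle \nabla x^k,\, \nabla x^j \big\rangle\Big)
  = \delta_{kj},
```

so H(p) = I_d, the cometric of Euclidean space, as expected.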
Proposition 2.1 provides a way to estimate h and g from data. Algorithm 1, adapted from [18],
implements (4).
3 A Quality Measure for L
Our approach can be simply stated: the "best" value for ε is the value for which the corresponding L of (3) best captures the original data geometry. For this we must: (1) estimate the geometry g or h
¹ This paper contains mathematical objects like M, g and ∆, and computable objects like a data point x and the graph Laplacian L. The Riemannian metric at a point belongs to both categories, so it will sometimes be denoted g_p, g_{x_i} and sometimes G(p), G(x_i), depending on whether we refer to its mathematical or algorithmic aspects (or, more formally, whether the expression is coordinate-free or in a given set of coordinates). This also holds for the cometric h, defined in Proposition 2.1.
Algorithm 1 RiemannianMetric(X, i, L, pow ∈ {−1, 1})
Input: N × d design matrix X, i index in data set, Laplacian L, binary variable pow
for k = 1 → d, l = 1 → d do
  H_{k,l} ← Σ_{j=1}^{N} L_{ij} (X_{jk} − X_{ik})(X_{jl} − X_{il})
end for
return H^pow (i.e. H if pow = 1 and H⁻¹ if pow = −1)
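Algorithm 1 translates directly into a few lines of NumPy. This is our own sketch (the function name and the numerical symmetrization step are ours, not the paper's code):

```python
import numpy as np

def riemannian_metric(X, i, L, pow=1):
    """Algorithm 1: cometric H (pow=1) or metric G = H^{-1} (pow=-1) at point i.

    X : (N, d) design matrix in local coordinates; L : (N, N) graph Laplacian.
    """
    diff = X - X[i]                      # rows x_j - x_i, j = 1..N
    H = diff.T @ (L[i][:, None] * diff)  # H_kl = sum_j L_ij (X_jk - X_ik)(X_jl - X_il)
    H = 0.5 * (H + H.T)                  # symmetrize against floating-point asymmetry
    return H if pow == 1 else np.linalg.inv(H)
```

Note that only row i of L is needed, which is what makes the per-point cost of the distortion computation low.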
from L (this is achieved by RiemannianMetric()); (2) find an independent way to estimate the data
geometry, locally (this is done in Sections 3.2 and 3.1); (3) define a measure of agreement between
the two (Section 3.3).
3.1 The Geometric Consistency Idea and g^target
There is a natural way to estimate the geometry of the data without the use of L. We consider
the canonical embedding of the data in the ambient space ℝ^r, for which the geometry is trivially known. This provides a target g^target; we tune the scale ε of the Laplacian so that the g calculated from Proposition 2.1 matches this target. Hence, we choose ε to maximize consistency with the geometry of the data. We denote the inherited metric by g_{ℝ^r}|_{TM}, which stands for the restriction of the natural metric of the ambient space ℝ^r to the tangent bundle TM of the manifold M. We tune the parameters of the graph Laplacian L so as to enforce (a coordinate expression of) the identity

g_p(ε) = g^target,  with g^target = g_{ℝ^r}|_{T_pM}  ∀p ∈ M.   (5)
In the above, the l.h.s. will be the metric implied from the Laplacian via Proposition 2.1, and the r.h.s. is the metric induced by ℝ^r. Mathematically speaking, (5) is necessary and sufficient for finding the "correct" Laplacian. The next section describes how to obtain the r.h.s. from a finite sample D. Then, to optimize the graph Laplacian we estimate g from L as prescribed by Proposition 2.1 and compare with g_{ℝ^r}|_{T_pM} numerically. We call this approach geometric consistency (GC). The GC method is not limited to the choice of ε, but can be applied to any other parameter required for the Laplacian.
3.2 Robust Estimation of g^target for a finite sample
First idea: estimate the tangent subspace. We use the simple fact, implied by Section 3.1, that projecting the data onto T_pM preserves the metric locally around p. Hence, G^target = I_d in the projected data. Moreover, projecting on any direction in T_pM does not change the metric in that direction. This remark allows us to work with small matrices (of at most d × d instead of r × r) and to avoid the problem of estimating d, the intrinsic dimension of the data manifold.
Specifically, we evaluate the tangent subspace around each sampled point xi using weighted (local)
Principal Component Analysis (wPCA) and then express g_{ℝ^r}|_{T_pM} directly in the resulting low-dimensional subspace as the unit matrix I_d. The tangent subspace also serves to define a local coordinate chart, which is passed as input to Algorithm 1, which computes H(x_i), G(x_i) in these coordinates. For computing T_{x_i}M by wPCA, we choose weights defined by the heat kernel (2), centered around x_i, with the same bandwidth ε as for computing L. This approach is similar to the sample-wise weighted PCA of [20], with one important requirement: the weights must decay rapidly away from x_i so that only points close to x_i are used to estimate T_{x_i}M. This is satisfied by the weighted recentered design matrix Z, whose row i is given by:
Z_{i:} = W_{ij} (x_i − x̄) / ( Σ_{j′=1}^{N} W_{ij′} ),  with x̄ = ( Σ_{j=1}^{N} W_{ij} x_j ) / ( Σ_{j′=1}^{N} W_{ij′} ).   (6)
[21] proves that the wPCA using the heat kernel, and equating the PCA and heat kernel bandwidths as we do, yields a consistent estimator of T_{x_i}M. This is implemented in Algorithm 2.
In summary, to instantiate equation (5) at point x_i ∈ D, one must (i) construct row i of the graph Laplacian by (3); (ii) perform Algorithm 2 to obtain Y; (iii) apply Algorithm 1 to Y to obtain G(x_i) ∈ ℝ^{d×d}; (iv) this matrix is then compared with I_d, which represents the r.h.s. of (5).
Algorithm 2 TangentSubspaceProjection(X, w, d0)
Input: N × r design matrix X, weight vector w, working dimension d0
Compute Z using (6)
[V, Λ] ← eig(Zᵀ Z, d0) (i.e. d0-SVD of Z)
Center X around x̄ from (6)
Y ← X V_{:,1:d0} (project X on d0 principal subspace)
return Y
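A minimal sketch of Algorithm 2 in NumPy follows. The exact weighting and normalization of Z in (6) are garbled in this copy of the text, so the particular choice below (weighting each recentered row by its kernel weight) is an illustrative variant, not necessarily the authors' exact formula:

```python
import numpy as np

def tangent_subspace_projection(X, w, d0):
    """Algorithm 2 (sketch): project X onto d0 weighted principal directions.

    X : (N, r) design matrix; w : (N,) heat-kernel weights centered at one point.
    """
    xbar = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted mean, cf. eq. (6)
    Z = w[:, None] * (X - xbar) / w.sum()           # weighted recentered design matrix
    # d0 leading right singular vectors of Z = top eigenvectors of Z^T Z
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:d0].T
    return (X - xbar) @ V                           # N x d0 local coordinates Y
```

Because the weights decay rapidly away from the center point, only nearby points effectively determine the principal directions, matching the requirement stated above.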
Second idea: project onto tangent directions. We now take this approach a few steps further in terms of improving its robustness, with minimal sacrifice to its theoretical grounding. In particular, we perform both Algorithm 2 and Algorithm 1 in d0 dimensions, with d0 < d (and typically d0 = 1). This makes the algorithm faster, and makes the computed metrics G(x_i), H(x_i) both more stable numerically and more robust to possible noise in the data². Proposition 3.1 shows that the resulting method remains theoretically sound.
Proposition 3.1 Let X, Y, Z, V, W_{:i}, H, and d ≥ 1 represent the quantities in Algorithms 1 and 2; assume that the columns of V are sorted in decreasing order of the singular values, and that the rows and columns of H are sorted according to the same order. Now denote by Y′, V′, H′ the quantities computed by Algorithms 1 and 2 for the same X, W_{:i} but with d replaced by d0 = 1. Then,

V′ = V_{:1} ∈ ℝ^{r×1},  Y′ = Y_{:1} ∈ ℝ^{N×1},  H′ = H_{11} ∈ ℝ.   (7)
The proof of this result is straightforward and omitted for brevity. It is easy to see that Proposition 3.1 generalizes immediately to any 1 ≤ d0 < d. In other words, by using d0 < d, we will be projecting the data on a proper subspace of T_{x_i}M - namely, the subspace of least curvature [22]. The cometric H′ of this projection is the principal submatrix of order d0 of H, i.e. H_{11} if d0 = 1.
Third idea: use h instead of g. Relation (5) is trivially satisfied by the cometrics of g and g^target (the latter being H^target = I_d). Hence, inverting H in Algorithm 1 is not necessary, and we will use the cometric h in place of g by default. This saves time and increases numerical stability.
3.3 Measuring the Distortion
For a finite sample, we cannot expect (5) to hold exactly, and so we need to define a distortion between the two metrics to evaluate how well they agree. We propose the distortion

D_ε = (1/N) Σ_{i=1}^{N} ‖H(x_i) − I_d‖,   (8)
where ‖A‖ = σ_max(A) is the matrix spectral norm. Thus D_ε measures the average distance of H from the unit matrix over the data set. For a "good" Laplacian, the distortion D_ε should be minimal:

ε̂ = argmin_ε D_ε.   (9)
The choice of norm in (8) is not arbitrary. Riemannian metrics are order-2 tensors on TM, hence the expression of D_ε is the discrete version of D_{g0}(g1, g2) = ∫_M ‖g1 − g2‖_{g0} dV_{g0}, with ‖g‖ = sup_{u,v ∈ T_pM \ {0}} ⟨u, v⟩_{g_p} / ⟨u, v⟩_{g0_p}, representing the tensor norm of g_p on T_pM with respect to the Riemannian metric g0_p. Now, (8) follows when g0, g1, g2 are replaced by I, I and H, respectively.
With (9), we have established a principled criterion for selecting the parameter(s) of the graph Laplacian, by minimizing the distortion between the true geometry and the geometry derived from Proposition 2.1. Practically, (9) is a 1D optimization problem with no derivatives, and we can use standard algorithms to find its minimum ε̂.
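Since (9) is a derivative-free 1D problem, even a logarithmic grid search suffices. In the sketch below, `distortion` stands for any callable returning D_ε (e.g. an implementation of Algorithm 3); the function name and grid strategy are ours:

```python
import numpy as np

def minimize_distortion(distortion, eps_min, eps_max, n_grid=20):
    """Pick eps-hat = argmin of D_eps over a logarithmic grid in [eps_min, eps_max]."""
    grid = np.logspace(np.log10(eps_min), np.log10(eps_max), n_grid)
    values = [distortion(e) for e in grid]
    return grid[int(np.argmin(values))]
```

A golden-section or Brent-style bounded search would also work here, and would need fewer evaluations when each D_ε is expensive.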
4 Related Work
We have already mentioned the asymptotic result (1) of [12]. Other work in this area [8, 10, 11, 23] provides the rates of change for ε with respect to N that guarantee convergence. These studies are
² We know from matrix perturbation theory that noise affects the d-th principal vector increasingly with d.
Algorithm 3 ComputeDistortion(X, ε, d0)
Input: N × r design matrix X, bandwidth ε, working dimension d0, index set I ⊆ {1, . . . , N}
Compute the heat kernel W by (2) for each pair of points in X
Compute the graph Laplacian L from W by (3)
D ← 0
for i ∈ I do
  Y ← TangentSubspaceProjection(X, W_{i,:}, d0)
  H ← RiemannianMetric(Y, i, L, pow = 1)
  D ← D + ‖H − I_{d0}‖₂ / |I|
end for
return D
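Putting the pieces together, here is a self-contained (and deliberately unoptimized) sketch of Algorithm 3. It inlines simplified versions of Algorithms 1–2 and one standard reading of (2)–(3), so it is illustrative rather than the authors' implementation:

```python
import numpy as np

def compute_distortion(X, eps, d0, idx):
    """Algorithm 3 (sketch): average distortion ||H(x_i) - I_{d0}|| over points in idx."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2.0 * eps ** 2))                 # heat kernel, cf. eq. (2)
    t = W.sum(1)
    Wp = W / np.outer(t, t)                            # renormalization, cf. eq. (3)
    tp = Wp.sum(1)
    L = Wp / tp[None, :]
    D = 0.0
    for i in idx:
        w = W[i]                                       # weights centered at x_i
        xbar = (w[:, None] * X).sum(0) / w.sum()
        _, _, Vt = np.linalg.svd(w[:, None] * (X - xbar), full_matrices=False)
        Y = (X - xbar) @ Vt[:d0].T                     # local coordinates (Algorithm 2)
        diff = Y - Y[i]
        H = diff.T @ (L[i][:, None] * diff)            # cometric (Algorithm 1, pow = 1)
        D += np.linalg.norm(H - np.eye(d0), 2) / len(idx)
    return D
```

In practice one would cache W and L across candidate ε values only per ε, and restrict idx to a random subsample, exactly as the experiments below do with N′ = 200 points.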
relevant; but they depend on manifold parameters that are usually not known. Recently, an extremely interesting "continuous nearest neighbor" consistent Laplacian construction method was proposed by [24], from a topological perspective. However, this method depends on a smoothness parameter too, and this is estimated by constructing the persistence diagram of the data. [25] propose a new, statistical approach for estimating ε, which is very promising, but currently can be applied only to un-normalized Laplacian operators. This approach also depends on unknown parameters a, b, which are set heuristically. (By contrast, our method depends only weakly on d0, which can be set to 1.)
Among practical methods, the most interesting is that of [14], which estimates k, the number of
nearest neighbors to use in the construction of the graph Laplacian. This method optimizes k
depending on the embedding algorithm used. By contrast, the selection algorithm we propose
estimates an intrinsic quantity, a scale that depends exclusively on the data. Moreover, it is not
known when minimizing reconstruction error for a particular method is optimal, since [26] shows that, even in the limit of infinite data, most embeddings will distort the original geometry. In semi-supervised learning (SSL), one uses Cross-Validation (CV) [5].
Finally, we mention the algorithm proposed in [27] (CLMR). Its goal is to obtain an estimate of the
intrinsic dimension of the data; however, a by-product of the algorithm is a range of scales where
the tangent space at a data point is well aligned with the principal subspace obtained by a local
singular value decomposition. As these are scales at which the manifold looks locally linear, one can
reasonably expect that they are also the correct scales at which to approximate differential operators,
such as ∆_M. Given this, we implement the method and compare it to our own results.
From the computational point of view, all methods described above exhaustively explore a range of ε values. GC and CLMR only require local PCA at a subset of the data points (with d0 < d components for GC, d0 ≫ d for CLMR); whereas CV and [14] require, respectively, running an SSL algorithm or an embedding algorithm for each ε. In relation to these, GC is by far the most efficient computationally.³
5 Experimental Results
Synthetic Data. We experimented with estimating the bandwidth ε on data sampled from two known manifolds, the two-dimensional hourglass and dome manifolds of Figure 1. We sampled points uniformly from these, adding 10 "noise" dimensions and Gaussian noise N(0, σ²), resulting in r = 13 dimensions.
The range of ε values was delimited by ε_min and ε_max. We set ε_max to the average of ‖x_i − x_j‖² over all point pairs and ε_min to the limit in which the heat kernel W becomes approximately equal to the unit matrix; this is tested by max_j |(Σ_i W_ij) − 1| < ω for ω ≤ 10⁻⁴.⁴ This range spans about two orders of magnitude in the data we considered, and was searched by a logarithmic grid with approximately 20 points. We saved computation time by evaluating all pointwise quantities (D_ε, local SVD) on a random sample of size N′ = 200 of each data set. We replicated each experiment on 10 independent samples.
³ In addition, these operations being local, they can be further parallelized or accelerated in the usual ways.
⁴ Guaranteeing that all eigenvalues of W are less than ω away from 1.
σ = 0.001 | σ = 0.01 | σ = 0.1 (column labels of Figure 1)
Figure 1: Estimates ε̂ (mean and standard deviation over 10 runs) on the dome and hourglass data, vs sample sizes N for various noise levels σ; d0 = 2 is in black and d0 = 1 in blue. In the background, we also show as gray rectangles, for each N, σ, the intervals in the ε range where the eigengaps of local SVD indicate the true dimension, and, as unfilled rectangles, the estimates proposed by CLMR [27] for these intervals. The variance of ε̂ observed is due to randomness in the subsample N′ used to evaluate the distortion. Our ε̂ always falls in the true interval (when this exists), and is less variable and more accurate than the CLMR intervals.
Reconstruction of manifold w.r.t. gold standard These results (relegated to the Supplement) are
uniformly very positive, and show that GC achieves its most explicit goal, even in the presence of
noise. In the remainder, we illustrate the versatility of our method on other tasks. Effects of d0, noise and N. The estimated ε are presented in Figure 1. Let ε̂_{d0} denote the estimate obtained for a given d0 ≤ d. We note that when d1 < d2, typically ε̂_{d1} > ε̂_{d2}, but the values are of the same order (a ratio of about 2 in the synthetic experiments). The explanation is that choosing d0 < d directions in the tangent subspace will select a subspace aligned with the "least curvature" directions of the manifold, if any exist, or with the "least noise" in the random sample. In these directions, the data will tolerate more smoothing, which results in larger ε̂. The optimal ε decreases with N and grows with the noise levels, reflecting the balance it must find between variance and bias. Note that for the hourglass data, the highest noise level of σ = 0.1 is an extreme case, where the original manifold is almost drowned in the 13-dimensional noise. Hence, ε̂ is not only commensurately larger, but also stable between the two dimensions and runs. This reflects the fact that ε̂ captures the noise dimension, and its values are indeed just below the noise amplitude of 0.1·√13. The dome data set exhibits the same properties discussed previously, showing that our method is effective even for manifolds with border.
Semi-supervised Learning (SSL) with Real Data. In this set of experiments, the task is classification on the benchmark SSL data sets proposed by [28]. This was done by least-squares classification, similarly to [5], after choosing the optimal bandwidth by one of the methods below.
TE  Minimize Test Error, i.e. "cheat" in an attempt to get an estimate of the "ground truth".
CV  Cross-validation. We split the training set (consisting of 100 points in all data sets) into two equal groups;⁵ we minimize the highly non-smooth CV classification error by simulated annealing.
Rec Minimize the reconstruction error. We cannot use the method of [14] directly, as it requires an embedding, so we minimize the reconstruction error based on the heat kernel weights w.r.t. W (this is reminiscent of LLE [29]):
    $R(\varepsilon) = \sum_{i=1}^{n} \left\| x_i - \sum_{j \neq i} \frac{W_{ij}}{\sum_{l \neq i} W_{il}}\, x_j \right\|^2$.
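As a concrete reading of R(ε), the criterion can be computed directly from the row-normalized heat-kernel weights. The sketch below follows the formula; the function name and the dense O(N²) evaluation are our own choices, not the authors' code.

```python
import numpy as np

def reconstruction_error(X, eps):
    """R(eps) = sum_i || x_i - sum_{j!=i} (W_ij / sum_{l!=i} W_il) x_j ||^2,
    with heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / eps)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    np.fill_diagonal(W, 0.0)              # exclude j = i from the average
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic weights
    return ((X - P @ X) ** 2).sum()
```

Scanning `reconstruction_error(X, eps)` over the bandwidth grid and taking the minimizer gives the Rec estimate described above.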
Our method is denoted GC for Geometric Consistency; we evaluate straightforward GC, which uses the cometric H, and a variant that includes the matrix inversion in Algorithm 1, denoted GC⁻¹.
⁵ In other words, we do 2-fold CV. We also tried 20-fold and 5-fold CV, with no significant difference.
Dataset   TE                           CV                            Rec     GC⁻¹     GC
Digit1    0.67±0.08   [0.57, 0.78]     0.80±0.45   [0.47, 1.99]      0.64    0.74     0.74
USPS      1.24±0.15   [1.04, 1.59]     1.25±0.86   [0.50, 3.20]      1.68    2.42     1.10
COIL      49.79±6.61  [42.82, 60.36]   69.65±31.16 [50.55, 148.96]   78.37   216.95   116.38
BCI       3.4±3.1     [1.2, 8.9]       3.2±2.5     [1.2, 8.2]        3.31    3.19     5.61
g241c     8.3±2.5     [6.3, 14.6]      8.8±3.3     [4.4, 14.9]       3.79    7.37     7.38
g241d     5.7±0.24    [5.6, 6.3]       6.4±1.15    [4.3, 8.2]        3.77    7.35     7.36

Table 1: Estimates of ε by the methods presented, for the six SSL data sets used, as well as TE. For TE and CV, which depend on the training/test splits, we report the average, its standard error, and range (in brackets) over the 12 splits.
Dataset   CV      Rec     GC⁻¹    GC
Digit1    3.32    2.16    2.11    2.11
USPS      5.18    4.83    12.00   3.89
COIL      7.02    8.03    16.31   8.81
BCI       49.22   49.17   50.25   48.67
g241c     13.31   23.93   12.77   12.77
g241d     8.67    18.39   8.76    8.76

          Digit1          USPS           COIL          BCI           g241c          g241d
          GC⁻¹   GC       GC⁻¹   GC      GC⁻¹   GC     GC⁻¹   GC     GC⁻¹   GC      GC⁻¹   GC
d0 = 1    0.743  0.744    2.42   1.10    116    187    3.32   5.34   7.38   7.38    7.35   7.35
d0 = 2    0.293  0.767    2.31   1.16    87.4   179    3.48   5.34   7.38   9.83    7.35   9.33
d0 = 3    0.305  0.781    3.88   1.18    128    187    3.65   5.34   7.38   9.37    7.35   9.78

Table 2: Left panel: Percent classification error for the six SSL data sets using the four estimation methods described. Right panel: ε̂ obtained for the six datasets using various d0 values with GC and GC⁻¹. ε̂ was computed for d = 5 for Digit1, as it is known to have an intrinsic dimension of 5, and found to be 1.162 with GC and 0.797 with GC⁻¹.
Across all methods and data sets, the estimates of ε closer to the values determined by TE lead to better classification error; see Table 2. For five of the six data sets⁶, the GC-based methods outperformed CV, and were 2 to 6 times faster to compute. This is in spite of the fact that GC does not use label information and is not aimed at reducing the classification error, while CV does. Further, the CV estimates of ε are highly variable, suggesting that CV tends to overfit to the training data.
Effect of Dimension d0. Table 2 shows how changing the dimension d0 alters our estimate of ε. We see that the ε̂ for different d0 values are close, even though we search over a range of two orders of magnitude. Even for g241c and g241d, which were constructed so as not to satisfy the manifold hypothesis, our method does reasonably well at estimating ε. That is, our method finds the ε for which the Laplacian encodes the geometry of the data set, irrespective of whether or not that geometry is lower-dimensional. Overall, we have found that using d0 = 1 is most stable, and that adding more dimensions introduces more numerical problems: it becomes more difficult to optimize the distortion as in (9), as the minimum becomes shallower. In our experience, this is due to the increase in variance associated with adding more dimensions.
Using one dimension probably works well because the wPCA selects the dimension that explains the most variance and hence is the closest to linear over the scale considered. Subsequently, the wPCA moves to incrementally "shorter" or less linear dimensions, leading to more variance in the estimate of the tangent subspace (more evidence for this in the Supplement).
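The weighted PCA (wPCA) step referred to above can be sketched as follows. This is our own minimal rendition of estimating d0 tangent directions at a point: the heat-kernel weighting and the descending-variance ordering of singular vectors are standard, and everything else (names, signatures) is an assumption.

```python
import numpy as np

def weighted_pca_directions(X, center, eps, d_prime):
    """Top-d' principal directions of the data, weighted by the heat kernel
    around `center` (a sketch of a local wPCA tangent-subspace estimate)."""
    w = np.exp(-((X - center) ** 2).sum(axis=1) / eps)   # kernel weights
    mu = (w[:, None] * X).sum(axis=0) / w.sum()          # weighted mean
    Z = np.sqrt(w)[:, None] * (X - mu)                   # weighted, centered
    # Right singular vectors come sorted by decreasing singular value,
    # i.e. by decreasing explained (weighted) variance; keep the first d'.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:d_prime]
```

With d_prime = 1 this returns the single direction of largest weighted variance, which is the "most linear" local direction the text discusses.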
⁶ In the COIL data set, despite their variability, CV estimates still outperformed the GC-based methods. This is the only data set constructed from a collection of manifolds - in this case, 24 one-dimensional image rotations. As such, one would expect that there would be more than one natural length scale.
Figure 2: Bandwidth Estimation for Galaxy Spectra Data. Left: GC results for d0 = 1 (d0 = 2, 3 are also shown); we chose radius = 66, the minimum of D̂, for d = 10. Right: a log-log plot of radius versus average number of neighbors within this radius. The region in blue includes radius = 66 and indicates dimension d = 3. In the code, ε = radius/3, hence we use ε = 22.
Embedding spectra of galaxies. (Details of this experiment are in the Supplement.) For these data in r = 3750 dimensions, with N = 650,000, the goal was to obtain a smooth, low dimensional embedding. The intrinsic dimension d is unknown, CV cannot be applied, and it is impractical to construct multiple embeddings for large N. Hence, we used the GC method with d0 = 1, 2, 3 and N′ = |I| = 200. We compare the ε's obtained with a heuristic based on the scaling of the neighborhood sizes with the radius [30], which relates ε, d and N (Figure 2). Remarkably, both methods yield the same ε; see the Supplement for evidence that the resulting embedding is smooth.
6 Discussion
In manifold learning, supervised and unsupervised, estimating the graph versions of Laplacian-type operators is a fundamental task. We have provided a principled method for selecting the parameters of such operators, and have applied it to the selection of the bandwidth/scale parameter ε. Moreover, our method can be used to optimize any other parameters used in the graph Laplacian; for example, k in the k-nearest neighbors graph, or - more interestingly - the renormalization parameter α [17] of the kernel. The latter is theoretically equal to 1, but it is possible that it may differ from 1 in the finite N regime. In general, for finite N, a small departure from the asymptotic prescriptions may be beneficial - and a data-driven method such as ours can deliver this benefit.
By imposing geometric self-consistency, our method estimates an intrinsic quantity of the data. GC is also fully unsupervised, aiming to optimize a (lossy) representation of the data, rather than a particular task. This is an efficiency if the data is used in an unsupervised mode, or if it is used in many different subsequent tasks. Of course, one cannot expect an unsupervised method to always be superior to a task-dependent one. Yet, GC has shown to be competitive and sometimes superior in experiments with the widely accepted CV. Besides the experimental validation, there are other reasons to consider an unsupervised method like GC in a supervised task: (1) the labeled data is scarce, so ε̂ will have high variance, (2) the CV cost function is highly non-smooth while D̂ is much smoother, and (3) when there is more than one parameter to optimize, difficulties (1) and (2) become much more severe. Our algorithm requires minimal prior knowledge. In particular, it does not require exact knowledge of the intrinsic dimension d, since it can work satisfactorily with d0 = 1 in many cases.
An interesting problem that is outside the scope of our paper is the question of whether ε needs to vary over M. This is a question/challenge facing not just GC, but any method for setting the scale, unsupervised or supervised. Asymptotically, a uniform ε is sufficient. Practically, however, we believe that allowing ε to vary may be beneficial. In this respect, the GC method, which simply evaluates the overall result, can be seamlessly adapted to work with any user-selected spatially-variable ε, by appropriately changing (2) or sub-sampling D when calculating D̂.
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002.
[2] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. Annals of Statistics, 36(2):555–585, 2008.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, December 2006.
[4] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Semi-supervised learning: From Gaussian fields to Gaussian processes. Technical Report, 2003.
[5] X. Zhou and M. Belkin. Semi-supervised learning by higher order regularization. AISTATS, 2011.
[6] A. J. Smola and I. R. Kondor. Kernels and regularization on graphs. In Proceedings of the Annual Conference on Computational Learning Theory, 2003.
[7] V. Sindhwani, W. Chu, and S. S. Keerthi. Semi-supervised Gaussian process classifiers. In Proceedings of the International Joint Conferences on Artificial Intelligence, 2007.
[8] E. Giné and V. Koltchinskii. Empirical Graph Laplacian Approximation of Laplace-Beltrami Operators: Large Sample Results. High Dimensional Probability, pages 238–259, 2006.
[9] M. Belkin and P. Niyogi. Convergence of Laplacian eigenmaps. NIPS, 19:129–136, 2007.
[10] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their Convergence on Random Neighborhood Graphs. Journal of Machine Learning Research, 8:1325–1368, 2007.
[11] D. Ting, L. Huang, and M. I. Jordan. An analysis of the convergence of graph Laplacians. In ICML, pages 1079–1086, 2010.
[12] A. Singer. From graph to manifold Laplacian: the convergence rate. Applied and Computational Harmonic Analysis, 21(1):128–134, 2006.
[13] John A. Lee and Michel Verleysen. Nonlinear Dimensionality Reduction. Springer Publishing Company, Incorporated, 1st edition, 2007.
[14] Lisha Chen and Andreas Buja. Local Multidimensional Scaling for nonlinear dimension reduction, graph drawing and proximity analysis. Journal of the American Statistical Association, 104(485):209–219, March 2009.
[15] E. Levina and P. Bickel. Maximum likelihood estimation of intrinsic dimension. Advances in NIPS, 17, 2005. Vancouver, Canada.
[16] K. Carter, A. Hero, and R. Raich. De-biasing for intrinsic dimension estimation. IEEE/SP 14th Workshop on Statistical Signal Processing, pages 601–605, August 2007.
[17] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):6–30, 2006.
[18] Anonymous. Metric learning and manifolds: Preserving the intrinsic geometry. Submitted, December 2012.
[19] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[20] H. Yue, M. Tomoyasu, and N. Yamanashi. Weighted principal component analysis and its applications to improve FDC performance. In 43rd IEEE Conference on Decision and Control, pages 4262–4267, 2004.
[21] Anil Aswani, Peter Bickel, and Claire Tomlin. Regression on manifolds: Estimation of the exterior derivative. Annals of Statistics, 39(1):48–81, 2011.
[22] J. M. Lee. Riemannian Manifolds: An Introduction to Curvature. Springer, New York, 1997.
[23] Xu Wang. Spectral convergence rate of graph Laplacian. ArXiv, 2015.
[24] Tyrus Berry and Timothy Sauer. Consistent manifold representation for topological data analysis. ArXiv, June 2016.
[25] Frederic Chazal, Ilaria Giulini, and Bertrand Michel. Data driven estimation of Laplace-Beltrami operator. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3963–3971. Curran Associates, Inc., 2016.
[26] Y. Goldberg, A. Zakai, D. Kushnir, and Y. Ritov. Manifold Learning: The Price of Normalization. Journal of Machine Learning Research, 9:1909–1939, August 2008.
[27] Guangliang Chen, Anna Little, Mauro Maggioni, and Lorenzo Rosasco. Some recent advances in multiscale geometric analysis of point clouds. In J. Cohen and A. I. Zayed, editors, Wavelets and Multiscale Analysis: Theory and Applications, Applied and Numerical Harmonic Analysis, chapter 10, pages 199–225. Springer, 2011.
[28] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. The MIT Press, 2006.
[29] L. Saul and S. Roweis. Think globally, fit locally: unsupervised learning of low dimensional manifold. Journal of Machine Learning Research, 4:119–155, 2003.
[30] Sanjoy Dasgupta and Yoav Freund. Random projection trees and low dimensional manifolds. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 537–546, New York, NY, USA, 2008. ACM.
Dual Path Networks
Yunpeng Chen¹, Jianan Li¹,², Huaxin Xiao¹,³, Xiaojie Jin¹, Shuicheng Yan⁴,¹, Jiashi Feng¹
¹ National University of Singapore
² Beijing Institute of Technology
³ National University of Defense Technology
⁴ Qihoo 360 AI Institute
Abstract
In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration, which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101 (64×4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.
1 Introduction
"Network engineering" is increasingly more important for visual recognition research. In this paper,
we aim to develop new path topology of deep architectures to further push the frontier of representation
learning. In particular, we focus on analyzing and reforming the skip connection, which has been
widely used in designing modern deep neural networks and offers remarkable success in many
applications [16, 7, 20, 14, 5]. Skip connection creates a path propagating information from a lower
layer directly to a higher layer. During the forward propagation, skip connection enables a very
top layer to access information from a distant bottom layer; while for the backward propagation,
it facilitates gradient back-propagation to the bottom layer without diminishing magnitude, which
effectively alleviates the gradient vanishing problem and eases the optimization.
Deep Residual Network (ResNet) [5] is one of the first works that successfully adopt skip connections, where each micro-block, a.k.a. residual function, is associated with a skip connection, called the residual path. The residual path element-wisely adds the input features to the output of the same micro-block, making it a residual unit. Depending on the inner structure design of the micro-block, the residual network has developed into a family of various architectures, including WRN [22], Inception-resnet [20], and ResNeXt [21].
More recently, Huang et al. [8] proposed a different network architecture that achieves comparable
accuracy with deep ResNet [5], named Dense Convolutional Network (DenseNet). Different from
residual networks which add the input features to the output features through the residual path, the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
DenseNet uses a densely connected path to concatenate the input features with the output features,
enabling each micro-block to receive raw information from all previous micro-blocks. Similar with
residual network family, DenseNet can be categorized to the densely connected network family.
Although the width of the densely connected path increases linearly as it goes deeper, causing
the number of parameters to grow quadratically, DenseNet provides higher parameter efficiency
compared with the ResNet [5].
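A back-of-the-envelope count makes the quadratic growth concrete. The sketch below is purely illustrative: the base width, growth rate, and depths are made-up numbers, and only 1×1-convolution-style channel products are counted, not any real DenseNet or ResNet configuration.

```python
def densenet_params(k0, growth, depth):
    """Each micro-block reads the concatenation of all earlier outputs
    (width k0 + t*growth at step t) and emits `growth` new channels."""
    return sum((k0 + t * growth) * growth for t in range(depth))

def resnet_params(width, depth):
    """Each micro-block maps a fixed `width` channels to `width` channels."""
    return width * width * depth

# Doubling the depth doubles the ResNet count but nearly quadruples the
# DenseNet count, since the densely connected path keeps widening.
ratio_resnet = resnet_params(256, 48) / resnet_params(256, 24)
ratio_densenet = densenet_params(64, 32, 48) / densenet_params(64, 32, 24)
```

The linearly growing input width is what makes the total parameter count quadratic in depth, as the paragraph above states.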
In this work, we aim to study the advantages and limitations of both topologies and further enrich
the path design by proposing a dual path architecture. In particular, we first provide a new understanding of the densely connected networks from the lens of a higher order recurrent neural network
(HORNN) [19], and explore the relations between densely connected networks and residual networks.
More specifically, we bridge the densely connected networks with the HORNNs, showing that the
densely connected networks are HORNNs when the weights are shared across steps. Inspired by [12]
which demonstrates the relations between the residual networks and RNNs, we prove that the residual
networks are densely connected networks when connections are shared across layers. With this unified
view on the state-of-the-art deep architecture, we find that the deep residual networks implicitly reuse
the features through the residual path, while densely connected networks keep exploring new features
through the densely connected path.
Based on this new view, we propose a novel dual path architecture, called the Dual Path Network
(DPN). This new architecture inherits both advantages of residual and densely connected paths,
enabling effective feature re-usage and re-exploitation. The proposed DPN also enjoys higher
parameter efficiency, lower computational cost and lower memory consumption, and being friendly
for optimization compared with the state-of-the-art classification networks. Experimental results
validate the outstanding high accuracy of DPN compared with other well-established baselines
for image classification on both ImageNet-1k dataset and Places365-Standard dataset. Additional
experiments on object detection task and semantic segmentation task also demonstrate that the
proposed dual path architecture can be broadly applied for various tasks and consistently achieve the
best performance.
2 Related work
Designing an advanced neural network architecture is one of the most challenging but effective
ways for improving the image classification performance, which can also directly benefit a variety
of other tasks. AlexNet [10] and VGG [18] are two most important works that show the power
of deep convolutional neural networks. They demonstrate that building deeper networks with tiny
convolutional kernels is a promising way to increase the learning capacity of the neural network.
Residual Network was first proposed by He et al. [5], which greatly alleviates the optimization
difficulty and further pushes the depth of deep neural networks to hundreds of layers by using
skipping connections. Since then, different kinds of residual networks arose, concentrating on
either building a more efficient micro-block inner structure [3, 21] or exploring how to use residual
connections [9]. Recently, Huang et al. [8] proposed a different network, called Dense Convolutional
Networks, where skip connections are used to concatenate the input to the output instead of adding.
However, the width of the densely connected path linearly increases as the depth rises, causing the
number of parameters to grow quadratically and costing a large amount of GPU memory compared
with the residual networks if the implementation is not specifically optimized. This limits the building
of a deeper and wider densenet that may further improve the accuracy.
Besides designing new architectures, researchers also try to re-explore the existing state-of-the-art
architectures. In [6], the authors showed the importance of the residual path on alleviating the
optimization difficulty. In [12], the residual networks are bridged with recurrent neural networks
(RNNs), which helps people better understand the deep residual network from the perspective of
RNNs. In [3], several different residual functions are unified, trying to provide a better understanding
of designing a better micro structure with higher learning capacity. But still, for the densely connected
networks, in addition to several intuitive explanations on better feature reusage and efficient gradient
flow introduced, there have been few works that are able to provide a really deeper understanding.
In this work, we provide a deeper understanding of the densely connected network, from the lens of
Higher Order RNN, and explain how the residual networks are in indeed a special case of densely
connected network. Based on these analysis, we then propose a novel Dual Path Network architecture
that not only achieves higher accuracy, but also enjoys high parameter and computational efficiency.
[Figure 1 diagrams: (a) ResNet with shared weights; (b) ResNet in RNN form; (c) DenseNet with shared weights; (d) DenseNet in HORNN form.]
Figure 1: The topological relations of different types of neural networks. (a) and (b) show relations between residual networks and RNNs, as stated in [12]; (c) and (d) show relations between densely connected networks and higher order recurrent neural networks (HORNN), which is explained in this paper. The symbol "z⁻¹" denotes a time-delay unit; "+" denotes the element-wise summation; "I(·)" denotes an identity mapping function.
3 Revisiting ResNet, DenseNet and Higher Order RNN
In this section, we first bridge the densely connected network [8] with higher order recurrent
neural networks [19] to provide a new understanding of the densely connected network. We prove
that residual networks [5, 6, 22, 21, 3] essentially belong to the family of densely connected
networks, except that their connections are shared across steps. We then present an analysis of the
strengths and weaknesses of each topology, which motivates us to develop the dual path network
architecture.
For exploring the above relation, we provide a new view on the densely connected networks from
the lens of Higher Order RNNs, explain their relations and then specialize the analysis to residual
networks. Throughout the paper, we formulate the HORNN in a more generalized form. We use h^t
to denote the hidden state of the recurrent neural network at the t-th step and use k as the index of the
current step. Let x_t denote the input at the t-th step, with h^0 = x_0. For each step, f_t^k(·) refers to the feature
extracting function which takes a hidden state as input and outputs the extracted information, and
g^k(·) denotes a transformation function that transforms the gathered information into the current hidden
state:
"k?1
#
X
k
k
k t
(1)
h =g
ft (h ) .
t=0
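As a toy illustration of Eqn. (1), the following sketch instantiates the update rule with random linear maps standing in for f_t^k(·) and g^k(·); all names and dimensions are illustrative, not from the paper:

```python
import numpy as np

# Toy sketch of the generalized update rule (Eqn. 1):
#   h^k = g^k( sum_{t=0}^{k-1} f_t^k(h^t) )
# F[k][t] and G[k] below are hypothetical linear stand-ins for the real
# feature-extraction / transformation functions.
rng = np.random.default_rng(0)
d, K = 4, 3                       # state width, number of steps
F = [[rng.standard_normal((d, d)) for t in range(k)] for k in range(K + 1)]
G = [rng.standard_normal((d, d)) for k in range(K + 1)]

def step(h_states, k):
    """Compute h^k from all previous states h^0 .. h^{k-1}."""
    gathered = sum(F[k][t] @ h_states[t] for t in range(k))
    return G[k] @ gathered        # g^k applied to the gathered information

h = [rng.standard_normal(d)]      # h^0 = x_0
for k in range(1, K + 1):
    h.append(step(h, k))

print(len(h), h[-1].shape)        # K+1 states, each of width d
```

Since each `F[k]` has its own matrices, every step can extract new information from all previous states, which is exactly the densely connected behaviour discussed below.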
Eqn. (1) encapsulates the update rule of various network architectures in a generalized way. For
HORNNs, weights are shared across steps, i.e. for all t, k: f_{k-t}^k(·) ≡ f_t(·) and for all k: g^k(·) ≡ g(·). For the
densely connected networks, each step (micro-block) has its own parameters, which means f_t^k(·)
and g^k(·) are not shared. This observation shows that the densely connected path of DenseNet
is essentially a higher order path which is able to extract new information from previous states.
Figure 1(c)(d) graphically shows the relations between densely connected networks and higher order
recurrent networks.
We then explain that the residual networks are special cases of densely connected networks when taking,
for all t, k: f_t^k(·) ≡ f_t(·). Here, for succinctness we introduce r^k to denote the intermediate result and let
r^0 = 0. Then Eqn. (1) can be rewritten as
    r^k ≜ Σ_{t=1}^{k-1} f_t(h^t) = r^{k-1} + f_{k-1}(h^{k-1}),    (2)
    h^k = g^k( r^k ).    (3)

Thus, by substituting Eqn. (3) into Eqn. (2), Eqn. (2) can be simplified as

    r^k = r^{k-1} + f_{k-1}(h^{k-1}) = r^{k-1} + f_{k-1}( g^{k-1}(r^{k-1}) ) = r^{k-1} + φ^{k-1}(r^{k-1}),    (4)

where φ^k(·) = f_k(g^k(·)). Obviously, Eqn. (4) has the same form as the residual network and the
recurrent neural network. Specifically, when φ^k(·) ≡ φ(·) for all k, Eqn. (4) degenerates to an RNN;
when none of the φ^k(·) are shared and x_k = 0 for k > 1, Eqn. (4) produces a residual network. Figure 1(a)(b)
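The equivalence between the densely connected bookkeeping (Eqns. (2)-(3)) and the residual recursion (Eqn. (4)) can be checked numerically with hypothetical linear stand-ins for the shared f_t(·) and the per-step g^k(·):

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 4, 5
f = [rng.standard_normal((d, d)) for _ in range(K)]      # shared f_t(.)
G = [rng.standard_normal((d, d)) for _ in range(K + 1)]  # per-step g^k(.)

# Form A: densely connected bookkeeping,
#   h^k = g^k(r^k),  r^k = r^{k-1} + f_{k-1}(h^{k-1})
rA = rng.standard_normal(d)                              # r^1
h = G[1] @ rA                                            # h^1 = g^1(r^1)
rB = rA.copy()                                           # Form B starts equal
for k in range(2, K + 1):
    rA = rA + f[k - 1] @ h
    h = G[k] @ rA
    # Form B: residual recursion r^k = r^{k-1} + phi^{k-1}(r^{k-1}),
    # with phi^k(.) = f_k(g^k(.))
    rB = rB + f[k - 1] @ (G[k - 1] @ rB)

print(np.allclose(rA, rB))   # True: the two bookkeepings coincide
```

The two update rules agree exactly, which is the numerical counterpart of substituting Eqn. (3) into Eqn. (2).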
graphically shows the relation. Besides, recall that Eqn. (4) is derived from Eqn. (1) under the condition that,
for all t, k: f_t^k(·) ≡ f_t(·), and that the densely connected networks are in the form of Eqn. (1),
meaning that the residual network family essentially belongs to the densely connected network family.
Figure 2(a-c) gives an example and demonstrates this equivalence, where f_t(·) corresponds to the
first 1×1 convolutional layer and g^k(·) corresponds to the other layers within a micro-block in
Figure 2(b).
From the above analysis, we observe: 1) both residual networks and densely connected networks can
be seen as a HORNN when f_t^k(·) and g^k(·) are shared for all k; 2) a residual network is a densely
connected network if, for all t, k: f_t^k(·) ≡ f_t(·). By sharing f_t^k(·) across all steps, g^k(·) receives the
same feature from a given output state, which encourages feature reuse and thus reduces
feature redundancy. However, such an information-sharing strategy makes it difficult for residual
networks to explore new features. Comparatively, the densely connected networks are able to explore
new information from previous outputs since f_t^k(·) is not shared across steps. However, different
f_t^k(·) may extract the same type of features multiple times, leading to high redundancy.
In the following section, we present the dual path networks, which can overcome the inherent
limitations of both of these two state-of-the-art network architectures. Their relation with HORNNs also
implies that our proposed architecture can be used for improving HORNNs, which we leave for future
work.
4 Dual Path Networks
Above we explained the relations between residual networks and densely connected networks, showing
that the residual path implicitly reuses features but is not good at exploring new ones. In contrast,
the densely connected network keeps exploring new features but suffers from higher redundancy.
In this section, we describe the details of our proposed novel dual path architecture, i.e. the Dual Path
Network (DPN). In the following, we first introduce and formulate the dual path architecture, and
then present the network structure in detail with a complexity analysis.
4.1 Dual Path Architecture
Sec. 3 discusses the advantages and limitations of both residual networks and densely connected
networks. Based on the analysis, we propose a simple dual path architecture which shares f_t^k(·)
across all blocks to enjoy the benefits of reusing common features with low redundancy, while still
keeping a densely connected path that gives the network more flexibility in learning new features. We
formulate this dual path architecture as follows:
    x^k ≜ Σ_{t=1}^{k-1} f_t^k(h^t),    (5)
    y^k ≜ Σ_{t=1}^{k-1} v_t(h^t) = y^{k-1} + φ^{k-1}(y^{k-1}),    (6)
    r^k ≜ x^k + y^k,    (7)
    h^k = g^k( r^k ),    (8)
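A minimal sketch of the dual path update (Eqns. (5)-(8)), again with hypothetical random linear maps in place of the real convolutional functions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K = 4, 4
# Hypothetical linear stand-ins: per-step f_t^k (dense path, unshared),
# phi^k (residual path) and g^k (final transformation).
F = [[rng.standard_normal((d, d)) for t in range(k)] for k in range(K + 1)]
Phi = [rng.standard_normal((d, d)) for _ in range(K + 1)]
G = [rng.standard_normal((d, d)) for _ in range(K + 1)]

h = [None, rng.standard_normal(d)]   # h^1 (index 0 unused for readability)
y = rng.standard_normal(d)           # y^1
for k in range(2, K + 1):
    x = sum(F[k][t] @ h[t] for t in range(1, k))   # Eqn (5): dense path
    y = y + Phi[k - 1] @ y                          # Eqn (6): residual path
    r = x + y                                       # Eqn (7): dual path
    h.append(G[k] @ r)                              # Eqn (8)

print(len(h) - 1, h[-1].shape)
```

The dense path `x` recomputes its sum from all previous states with unshared maps, while the residual path `y` is a pure recursion; the two are merged before the final transformation, mirroring Eqn. (7).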
where x^k and y^k denote the information extracted at the k-th step from the individual paths, and v_t(·) is a feature
learning function like f_t^k(·). Eqn. (5) refers to the densely connected path that enables exploring new
features, Eqn. (6) refers to the residual path that enables common feature re-usage, and Eqn. (7)
defines the dual path that integrates them and feeds them to the last transformation function in
Eqn. (8). The final transformation function g^k(·) generates the current state, which is used for making the
next mapping or prediction. Figure 2(d)(e) show an example of the dual path architecture that is
used in our experiments.
More generally, the proposed DPN is a family of convolutional neural networks which contains a
residual-like path and a densely connected-like path, as explained later. Similar to these networks,
one can customize the micro-block function of DPN for task-specific usage or for further overall
performance boosting.
[Figure 2 here: five architecture diagrams, (a) Residual Network, (b) Densely Connected Network, (c) Densely Connected Network (with shared connections), (d) Dual Path Architecture, (e) DPN.]
Figure 2: Architecture comparison of different networks. (a) The residual network. (b) The densely
connected network, where each layer can access the outputs of all previous micro-blocks. Here, a
1×1 convolutional layer (underlined) is added for consistency with the micro-block design in (a).
(c) By sharing the first 1×1 connection of the same output across micro-blocks in (b), the densely
connected network degenerates to a residual network. The dotted rectangle in (c) highlights the
residual unit. (d) The proposed dual path architecture, DPN. (e) An equivalent form of (d) from
the perspective of implementation, where the symbol '~' denotes a split operation and '+' denotes
element-wise addition.
4.2 Dual Path Networks
The proposed network is built by stacking multiple modularized micro-blocks as shown in Figure 2.
In this work, the structure of each micro-block is designed in a bottleneck style [5] which starts
with a 1×1 convolutional layer, is followed by a 3×3 convolutional layer, and ends with a 1×1
convolutional layer. The output of the last 1×1 convolutional layer is split into two parts: the first
part is element-wise added to the residual path, and the second part is concatenated with the densely
connected path. To enhance the learning capacity of each micro-block, we use a grouped convolution
layer as the second layer, as in ResNeXt [21].
Considering that the residual networks are more widely used than the densely connected networks in
practice, we choose the residual network as the backbone and add a thin densely connected path to
build the dual path network. This design also helps slow the width increment of the densely connected
path and the cost of GPU memory. Table 1 shows the detailed architecture settings. In the table, G
refers to the number of groups, and k refers to the channel increment for the densely connected path.
For the newly proposed DPNs, we use (+k) to indicate the width increment of the densely connected
path. The overall design of DPN inherits the backbone architecture of the vanilla ResNet / ResNeXt,
making it very easy to implement and apply to other tasks. One can simply implement a DPN by
adding one more "slice layer" and "concat layer" upon existing residual networks. Under a well
optimized deep learning platform, none of these newly added operations requires extra computational
cost or extra memory consumption, making the DPNs highly efficient.
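The split-and-merge behaviour of a DPN micro-block described above can be sketched on plain channel vectors; the linear map `params` is a hypothetical stand-in for the 1×1 -> 3×3 (grouped) -> 1×1 bottleneck:

```python
import numpy as np

def dpn_micro_block(residual, dense, params, k_inc=16):
    """One dual-path micro-block, sketched on 1-D channel vectors.

    `residual` has a fixed width; `dense` grows by `k_inc` channels per
    block. `params` is a hypothetical weight matrix standing in for the
    1x1 -> 3x3 (grouped) -> 1x1 bottleneck."""
    x = np.concatenate([residual, dense])       # both paths feed the block
    out = params @ x                            # stand-in for the bottleneck
    res_part, dense_part = out[:residual.size], out[residual.size:]
    # first part: element-wise add to the residual path;
    # second part: concatenate onto the densely connected path
    return residual + res_part, np.concatenate([dense, dense_part])

rng = np.random.default_rng(3)
res_w, k_inc = 32, 16
residual, dense = rng.standard_normal(res_w), rng.standard_normal(k_inc)
for _ in range(3):                              # stack three micro-blocks
    W = rng.standard_normal((res_w + k_inc, residual.size + dense.size))
    residual, dense = dpn_micro_block(residual, dense, W, k_inc)

print(residual.size, dense.size)   # 32, 64: dense path grew by 16 per block
```

The residual path keeps a constant width while the densely connected path widens by `k_inc` channels per block, which is what the (+k) entries in Table 1 record.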
In order to demonstrate the appealing effectiveness of the dual path architecture, we intentionally
design a set of DPNs with a considerably smaller model size and fewer FLOPs compared with the
state-of-the-art ResNeXts [21], as shown in Table 1. Due to limited computational resources, we set
these hyper-parameters based on our previous experience instead of grid-search experiments.
Model complexity. We measure the model complexity by counting the total number of learnable
parameters within each neural network. Table 1 shows the results for different models. The DPN-92
costs about 15% fewer parameters than ResNeXt-101 (32 × 4d), while the DPN-98 costs about 26%
fewer parameters than ResNeXt-101 (64 × 4d).
Computational complexity. We measure the computational cost of each deep neural network using
the floating-point operations (FLOPs) with an input size of 224 × 224, counted in the number of multiply-adds
following [21]. Table 1 shows the theoretical computational cost. Though the actual time cost
might be influenced by other factors, e.g. GPU bandwidth and coding quality, the computational
cost gives an upper bound on speed. As can be seen from the results, DPN-92 consumes about 19%
fewer FLOPs than ResNeXt-101 (32 × 4d), and DPN-98 consumes about 25% fewer FLOPs than
ResNeXt-101 (64 × 4d).
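As a quick sanity check, the quoted relative savings can be reproduced from the Table 1 numbers (parameters in units of 10^6, FLOPs in units of 10^9):

```python
# Reproduce the quoted savings from Table 1.
def saving(ours, baseline):
    """Relative reduction of `ours` w.r.t. `baseline`, rounded to percent."""
    return round(100 * (baseline - ours) / baseline)

print(saving(37.8, 44.3))   # DPN-92 vs ResNeXt-101 (32x4d) params: 15
print(saving(61.7, 83.7))   # DPN-98 vs ResNeXt-101 (64x4d) params: 26
print(saving(6.5, 8.0))     # DPN-92 vs ResNeXt-101 (32x4d) FLOPs:  19
print(saving(11.7, 15.5))   # DPN-98 vs ResNeXt-101 (64x4d) FLOPs:  25
```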
Table 1: Architecture and complexity comparison of our proposed Dual Path Networks (DPNs) and
other state-of-the-art networks. We compare DPNs with two baseline methods: DenseNet [8] and
ResNeXt [21]. The symbol (+k) denotes the width increment on the densely connected path.
conv1 (output 112×112): every model begins with a 7×7 convolution, stride 2
(96 channels for DenseNet-161 and DPN-98; 64 channels for ResNeXt-101 (32 × 4d),
ResNeXt-101 (64 × 4d) and DPN-92), and conv2 begins with a 3×3 max pool, stride 2.

DenseNet-161 (k=48):
  conv2 (56×56): [1×1, 192 | 3×3, 48] × 6
  conv3 (28×28): [1×1, 192 | 3×3, 48] × 12
  conv4 (14×14): [1×1, 192 | 3×3, 48] × 36
  conv5 (7×7):   [1×1, 192 | 3×3, 48] × 24
  #params: 28.9 × 10^6, FLOPs: 7.7 × 10^9

ResNeXt-101 (32×4d):
  conv2 (56×56): [1×1, 128 | 3×3, 128, G=32 | 1×1, 256] × 3
  conv3 (28×28): [1×1, 256 | 3×3, 256, G=32 | 1×1, 512] × 4
  conv4 (14×14): [1×1, 512 | 3×3, 512, G=32 | 1×1, 1024] × 23
  conv5 (7×7):   [1×1, 1024 | 3×3, 1024, G=32 | 1×1, 2048] × 3
  #params: 44.3 × 10^6, FLOPs: 8.0 × 10^9

ResNeXt-101 (64×4d):
  conv2 (56×56): [1×1, 256 | 3×3, 256, G=64 | 1×1, 256] × 3
  conv3 (28×28): [1×1, 512 | 3×3, 512, G=64 | 1×1, 512] × 4
  conv4 (14×14): [1×1, 1024 | 3×3, 1024, G=64 | 1×1, 1024] × 23
  conv5 (7×7):   [1×1, 2048 | 3×3, 2048, G=64 | 1×1, 2048] × 3
  #params: 83.7 × 10^6, FLOPs: 15.5 × 10^9

DPN-92 (32×3d):
  conv2 (56×56): [1×1, 96 | 3×3, 96, G=32 | 1×1, 256 (+16)] × 3
  conv3 (28×28): [1×1, 192 | 3×3, 192, G=32 | 1×1, 512 (+32)] × 4
  conv4 (14×14): [1×1, 384 | 3×3, 384, G=32 | 1×1, 1024 (+24)] × 20
  conv5 (7×7):   [1×1, 768 | 3×3, 768, G=32 | 1×1, 2048 (+128)] × 3
  #params: 37.8 × 10^6, FLOPs: 6.5 × 10^9

DPN-98 (40×4d):
  conv2 (56×56): [1×1, 160 | 3×3, 160, G=40 | 1×1, 256 (+16)] × 3
  conv3 (28×28): [1×1, 320 | 3×3, 320, G=40 | 1×1, 512 (+32)] × 6
  conv4 (14×14): [1×1, 640 | 3×3, 640, G=40 | 1×1, 1024 (+32)] × 20
  conv5 (7×7):   [1×1, 1280 | 3×3, 1280, G=40 | 1×1, 2048 (+128)] × 3
  #params: 61.7 × 10^6, FLOPs: 11.7 × 10^9

All models end with a global average pool, a 1000-d fully connected layer, and softmax.
5 Experiments
Extensive experiments are conducted for evaluating the proposed Dual Path Networks. Specifically,
we evaluate the proposed architecture on three tasks: image classification, object detection and
semantic segmentation, using three standard benchmarks: the ImageNet-1k dataset, the Places365-Standard dataset and the PASCAL VOC datasets.
Key properties of the proposed DPNs are studied on the ImageNet-1k object classification dataset [17]
and further verified on the Places365-Standard scene understanding dataset [24]. To verify whether the
proposed DPNs can benefit other tasks besides image classification, we further conduct experiments
on the PASCAL VOC dataset [4] to evaluate its performance in object detection and semantic
segmentation.
5.1 Experiments on the image classification task
We implement the DPNs using MXNet [2] on a cluster with 40 K80 graphic cards. Following [3], we
adopt standard data augmentation methods and train the networks using SGD with a mini-batch size
of 32 for each GPU. For the deepest network, i.e. DPN-131¹, the mini-batch size is limited to 24
because of the 12 GB GPU memory constraint. The learning rate starts from 0.1 for DPN-92 and
DPN-131, and from 0.4 for DPN-98. It drops in a "steps" manner by a factor of 0.1. Following [5],
batch normalization layers are refined after training.
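The "steps" learning-rate schedule mentioned above can be sketched as follows; the milestone epochs are hypothetical, since the paper only states the starting rates and the 0.1 decay factor:

```python
def step_lr(base_lr, epoch, milestones, gamma=0.1):
    """'Steps' schedule: multiply the LR by `gamma` at each milestone epoch.

    The milestone epochs used below are illustrative; the paper only states
    the starting LRs (0.1 or 0.4) and the decay factor 0.1."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

print(step_lr(0.1, 0, [30, 60, 90]))    # starts at the base rate 0.1
print(step_lr(0.1, 45, [30, 60, 90]))   # after the first drop: ~0.01
print(step_lr(0.4, 95, [30, 60, 90]))   # DPN-98 after three drops: ~0.0004
```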
5.1.1 ImageNet-1k dataset
Firstly, we compare the image classification performance of DPNs with current state-of-the-art
models. As can be seen from the first block in Table 2, a shallow DPN with a depth of only 92
reduces the top-1 error rate by an absolute value of 0.5% compared with ResNeXt-101 (32 × 4d)
and by an absolute value of 1.5% compared with DenseNet-161, yet requires considerably fewer
FLOPs. In the second block of Table 2, a deeper DPN (DPN-98) surpasses the best residual network,
ResNeXt-101 (64 × 4d), and still enjoys 25% fewer FLOPs and a much smaller model size (236 MB
v.s. 320 MB). In order to further push the state-of-the-art accuracy, we slightly increase the depth
of the DPN to 131 (DPN-131). The results are shown in the last block of Table 2. Again, the DPN
shows superior accuracy over the best single model, Very Deep PolyNet [23], with a much smaller
model size (304 MB v.s. 365 MB). Note that the Very Deep PolyNet adopts numerous tricks, e.g.
initialization by insertion, residual scaling and stochastic paths, to assist the training process. In contrast,
our proposed DPN-131 is simple and does not involve these tricks; DPN-131 can be trained using the
same standard training strategy as the shallower DPNs. More importantly, the actual training speed of DPN-131
is about 2 times faster than the Very Deep PolyNet, as discussed in the following paragraph.
¹ The DPN-131 has 128 channels at conv1, 4 blocks at conv2, 8 blocks at conv3, 28 blocks at conv4 and 3
blocks at conv5, with #params = 79.5 × 10^6 and FLOPs = 16.0 × 10^9.
Table 2: Comparison with state-of-the-art CNNs on the ImageNet-1k dataset. Single-crop validation
error rate (%) on the validation set. *: performance reported by [21]; †: with Mean-Max Pooling
(see supplementary material).

Model                                Size    GFLOPs   x224 top-1 / top-5   x320 (or x299) top-1 / top-5
DenseNet-161 (k=48) [8]              111 MB    7.7       22.2 / -               - / -
ResNet-101* [5]                      170 MB    7.8       22.0 / 6.0             - / -
ResNeXt-101 (32 × 4d) [21]           170 MB    8.0       21.2 / 5.6             - / -
DPN-92 (32 × 3d)                     145 MB    6.5       20.7 / 5.4           19.3 / 4.7
ResNet-200 [6]                       247 MB   15.0       21.7 / 5.8           20.1 / 4.8
Inception-resnet-v2 [20]             227 MB     -           - / -             19.9 / 4.9
ResNeXt-101 (64 × 4d) [21]           320 MB   15.5       20.4 / 5.3           19.1 / 4.4
DPN-98 (40 × 4d)                     236 MB   11.7       20.2 / 5.2           18.9 / 4.4
Very deep Inception-resnet-v2 [23]   531 MB     -           - / -            19.10 / 4.48
Very Deep PolyNet [23]               365 MB     -           - / -            18.71 / 4.25
DPN-131 (40 × 4d)                    304 MB   16.0      19.93 / 5.12         18.62 / 4.23
DPN-131 (40 × 4d) †                  304 MB   16.0      19.93 / 5.12         18.55 / 4.16

Table 3: Comparison with state-of-the-art CNNs on the Places365-Standard dataset. 10-crop validation
accuracy rate (%) on the validation set.

Method             Model Size   top-1 acc.   top-5 acc.
AlexNet [24]         223 MB       53.17        82.89
GoogleLeNet [24]      44 MB       53.63        83.88
VGG-16 [24]          518 MB       55.24        84.91
ResNet-152 [24]      226 MB       54.74        85.08
ResNeXt-101 [3]      165 MB       56.21        86.25
CRU-Net-116 [3]      163 MB       56.60        86.55
DPN-92 (32 × 3d)     138 MB       56.84        86.69

[Figure 3 here: three scatter plots of ResNet-200, ResNeXt-101 (64×4d), DPN-98 (40×4d) and
DPN-131 (40×4d): (a) single-crop top-1 error v.s. training speed (samples/sec); (b) single-crop
top-1 error v.s. memory cost (GB, batch size = 24); (c) memory cost v.s. training speed.]
Figure 3: Comparison of the total actual cost between different models during training. Evaluations are
conducted on a single node with 4 K80 graphic cards, with all training samples cached into memory.
(For the comparison of training speed, we push the mini-batch size to its maximum value given a
12 GB GPU memory to test the fastest possible training speed of each model.)
Secondly, we compare the training cost between the best performing models. Here, we focus on
evaluating two key properties: the actual GPU memory cost and the actual training speed. Figure 3
shows the results. As can be seen from Figure 3(a)(b), the DPN-98 is 15% faster and uses 9% less
memory than the best performing ResNeXt, with a considerably lower testing error rate. Note that
theoretically the computational cost of DPN-98 shown in Table 2 is 25% less than that of the best performing
ResNeXt, indicating there is still room for code optimization. Figure 3(c) presents the same result in
a clearer way. The deeper DPN-131 only costs about 19% more training time compared with the
best performing ResNeXt, but achieves state-of-the-art single-model performance. The training
speed of the previous state-of-the-art single model, i.e. the Very Deep PolyNet (537 layers) [23], is about
31 samples per second based on our implementation using MXNet, showing that DPN-131 runs about
2 times faster than the Very Deep PolyNet during training.
5.1.2 Places365-Standard dataset
In this experiment, we further evaluate the accuracy of the proposed DPN on the scene classification
task using the Places365-Standard dataset. The Places365-Standard dataset is a high-resolution scene
understanding dataset with more than 1.8 million images of 365 scene categories. Different from
object images, scene images do not have very clear discriminative patterns and require a higher-level
context reasoning ability.
Table 3 shows the results of different models on this dataset. To make a fair comparison, we evaluate
DPN-92 on this dataset instead of using deeper DPNs. As can be seen from the results, DPN
achieves the best validation accuracy compared with other methods. The DPN-92 requires much fewer
parameters (138 MB v.s. 163 MB), which again demonstrates its high parameter efficiency and high
generalization ability.
5.2 Experiments on the object detection task
We further evaluate the proposed Dual Path Network on the object detection task. Experiments
are performed on the PASCAL VOC 2007 dataset [4]. We train the models on the union set of
VOC 2007 trainval and VOC 2012 trainval following [16], and evaluate them on the VOC 2007 test set.
We use the standard evaluation metrics Average Precision (AP) and mean AP (mAP) following the
PASCAL challenge protocols for evaluation.
Table 4: Object detection results on the PASCAL VOC 2007 test set. The performance is measured by
mean Average Precision (mAP, in %).

Method                 mAP  | aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbk  prsn plant sheep sofa train tv
DenseNet-161 (k=48)    79.9 | 80.4 85.9 81.2 72.8 68.0  87.1 88.0 88.8 64.0  83.3 75.4  87.5 87.6  81.3 84.2 54.6  83.2  80.2 87.4  77.2
ResNet-101 [16]        76.4 | 79.8 80.7 76.2 68.3 55.9  85.1 85.3 89.8 56.7  87.8 69.4  88.3 88.9  80.9 78.4 41.7  78.6  79.8 85.3  72.0
ResNeXt-101 (32 × 4d)  80.1 | 80.2 86.5 79.4 72.5 67.3  86.9 88.6 88.9 64.9  85.0 76.2  87.3 87.8  81.8 84.1 55.5  84.0  79.7 87.9  77.0
DPN-92 (32 × 3d)       82.5 | 84.4 88.5 84.6 76.5 70.7  87.9 88.8 89.4 69.7  87.0 76.7  89.5 88.7  86.0 86.1 58.4  85.0  80.4 88.2  83.1
Table 5: Semantic segmentation results on the PASCAL VOC 2012 test set. The performance is measured
by mean Intersection over Union (mIoU, in %).

Method                 mIoU | bkg  aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbk  prsn plant sheep sofa train tv
DenseNet-161 (k=48)    68.7 | 92.1 77.3 37.1 83.6 54.9 70.0  85.8 82.5 85.9 26.1  73.0 55.1  80.2 74.0  79.1 78.2 51.5  80.0  42.2 75.1  58.6
ResNet-101             73.1 | 93.1 86.9 39.9 87.6 59.6 74.4  90.1 84.7 87.7 30.0  81.8 56.2  82.7 82.7  80.1 81.1 52.4  86.2  52.5 81.3  63.6
ResNeXt-101 (32 × 4d)  73.6 | 93.1 84.9 36.2 80.3 65.0 74.7  90.6 83.9 88.7 31.1  86.3 62.4  84.7 86.1  81.2 80.1 54.0  87.4  54.0 76.3  64.2
DPN-92 (32 × 3d)       74.8 | 93.7 88.3 40.3 82.7 64.5 72.0  90.9 85.0 88.8 31.1  87.7 59.8  83.9 86.8  85.1 82.8 60.8  85.3  54.1 82.6  64.6
We perform all experiments based on the ResNet-based Faster R-CNN framework, following [5], and
make comparisons by replacing the ResNet while keeping other parts unchanged. Since our goal is
to evaluate DPN, rather than to further push the state-of-the-art accuracy on this dataset, we adopt the
shallowest DPN-92 and baseline networks at roughly the same complexity level. Table 4 provides the
detection performance comparisons of the proposed DPN with several current state-of-the-art models.
It can be observed that the DPN obtains a mAP of 82.5%, which is a large improvement, i.e.
6.1% over ResNet-101 [16] and 2.4% over ResNeXt-101 (32 × 4d). The better
results shown in this experiment demonstrate that the Dual Path Network is also capable of learning
better feature representations for detecting objects and benefiting the object detection task.
5.3 Experiments on the semantic segmentation task
In this experiment, we evaluate the Dual Path Network for dense prediction, i.e. semantic segmentation, where the training target is to predict the semantic label for each pixel in the input image. We
conduct experiments on the PASCAL VOC 2012 segmentation benchmark dataset [4] and use
DeepLab-ASPP-L [1] as the segmentation framework. For each compared method in Table 5, we
replace the 3×3 convolutional layers in conv4 and conv5 of Table 1 with atrous convolution [1]
and plug in a head of Atrous Spatial Pyramid Pooling (ASPP) [1] on the final feature maps of conv5.
We adopt the same training strategy for all networks, following [1], for a fair comparison.
Table 5 shows the results of different convolutional neural networks. It can be observed that the
proposed DPN-92 has the highest overall mIoU accuracy. Compared with ResNet-101, which
has a larger model size and higher computational cost, the proposed DPN-92 further improves the
IoU for most categories and improves the overall mIoU by an absolute value of 1.7%. Considering that
ResNeXt-101 (32 × 4d) only improves the overall mIoU by an absolute value of 0.5% compared with
ResNet-101, the proposed DPN-92 gains more than 3 times the improvement of
ResNeXt-101 (32 × 4d). The better results once again demonstrate that the proposed Dual Path Network
is capable of learning better feature representations for dense prediction.
6 Conclusion
In this paper, we revisited the densely connected networks, bridged them
with Higher Order RNNs, and proved that residual networks are essentially densely connected networks
with shared connections. Based on this new explanation, we proposed a dual path architecture that
enjoys benefits from both sides. The novel network, DPN, was then developed based on this dual path
architecture. Experiments on the image classification task demonstrate that the DPN enjoys high
accuracy, small model size, low computational cost and low GPU memory consumption, and is thus
extremely useful not only for research but also for real-world applications. Experiments on the object
detection and semantic segmentation tasks show that the proposed DPN can also benefit other
tasks by simply replacing the base network.
Acknowledgments
The work of Jiashi Feng was partially supported by National University of Singapore startup grant
R-263-000-C08-133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112
and NUS IDS grant R-263-000-C67-646.
References
[1] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016.
[2] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
[3] Yunpeng Chen, Xiaojie Jin, Bingyi Kang, Jiashi Feng, and Shuicheng Yan. Sharing residual units through collective tensor factorization in deep neural networks. arXiv preprint arXiv:1703.02180, 2017.
[4] Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes challenge: A retrospective. IJCV, 111(1):98-136, 2014.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630-645. Springer, 2016.
[7] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. arXiv preprint arXiv:1703.06870, 2017.
[8] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
[9] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1646-1654, 2016.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[11] Chen-Yu Lee, Patrick W. Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In Artificial Intelligence and Statistics, pages 464-472, 2016.
[12] Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint arXiv:1604.03640, 2016.
[13] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[14] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483-499. Springer, 2016.
[15] Geoff Pleiss, Danlu Chen, Gao Huang, Tongcheng Li, Laurens van der Maaten, and Kilian Q. Weinberger. Memory-efficient implementation of DenseNets. arXiv preprint arXiv:1707.06990, 2017.
[16] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
[17] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.
[20] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[21] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
[22] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
[23] Xingcheng Zhang, Zhizhong Li, Chen Change Loy, and Dahua Lin. PolyNet: A pursuit of structural diversity in very deep networks. arXiv preprint arXiv:1611.05725, 2016.
[24] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, and Aude Oliva. Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055, 2016.
experimental:1 aspp:2 indicating:1 berg:1 people:1 mark:1 jonathan:2 alexander:1 outstanding:1 evaluate:7 |
6,671 | 7,034 | Faster and Non-ergodic O(1/K) Stochastic
Alternating Direction Method of Multipliers
Cong Fang
Feng Cheng
Zhouchen Lin?
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, P. R. China
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, P. R. China
[email protected]
[email protected]
[email protected]
Abstract
We study stochastic convex optimization subject to linear equality constraints. Traditional Stochastic Alternating Direction Method of Multipliers [1] and its Nesterov acceleration scheme [2] can only achieve ergodic O(1/√K) convergence rates, where K is the number of iterations. By introducing Variance Reduction (VR) techniques, the convergence rates improve to ergodic O(1/K) [3, 4]. In this paper, we propose a new stochastic ADMM which elaborately integrates Nesterov's extrapolation and VR techniques. With Nesterov's extrapolation, our algorithm achieves a non-ergodic O(1/K) convergence rate, which is optimal for separable linearly constrained non-smooth convex problems, while the convergence rates of VR based ADMM methods are only tight O(1/√K) in the non-ergodic sense. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems. The experimental results demonstrate that our algorithm is faster than the existing state-of-the-art stochastic ADMM methods.
1 Introduction
We consider the following general convex finite-sum problem with linear constraints:

  min_{x_1, x_2}  h_1(x_1) + f_1(x_1) + h_2(x_2) + (1/n) Σ_{i=1}^n f_{2,i}(x_2),
  s.t.  A_1 x_1 + A_2 x_2 = b,                                              (1)
where f_1(x_1) and f_{2,i}(x_2) with i ∈ {1, 2, ..., n} are convex and have Lipschitz continuous gradients, and h_1(x_1) and h_2(x_2) are also convex, but can be non-smooth. We use the following notations: L_1 denotes the Lipschitz constant of f_1(x_1), L_2 is the Lipschitz constant of f_{2,i}(x_2) with i ∈ {1, 2, ..., n}, and f_2(x) = (1/n) Σ_{i=1}^n f_{2,i}(x). We use ∇f to denote the gradient of f.
Problem (1) is of great importance in machine learning. The finite-sum function f_2(x_2) is typically a loss over training samples, and the remaining functions control the structure or regularize the model to aid generalization [2]. The idea of using linear constraints to decouple the loss and regularization terms enables researchers to consider more sophisticated regularization terms which might be very complicated to handle through proximity operators for Gradient Descent [5] methods. For example, for multitask learning problems [6, 7], the regularization term is set as λ_1‖x‖_* + λ_2‖x‖_1; for most graph-guided fused Lasso and overlapping group Lasso problems [8, 4], the regularization term can be written as λ‖Ax‖_1; and many multi-view learning tasks [9] involve regularization terms of the form λ_1‖x‖_{2,1} + λ_2‖x‖_*.
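The decoupling mentioned above can be made concrete with a standard variable split; the following is our own illustrative sketch (not from the paper), showing how a composite term g(Ax) is rewritten in the two-block form of Problem (1):

```python
import numpy as np

def split_graph_guided(A):
    """Rewrite min_x f(x) + g(A x) in the form of Problem (1):
    min_{x1, x2} f(x1) + g(x2)  s.t.  A x1 - x2 = 0,
    i.e. A1 = A, A2 = -I, b = 0, so the non-smooth g is decoupled
    from the loss f and only its proximal operator is ever needed."""
    m = A.shape[0]
    return A, -np.eye(m), np.zeros(m)
```

With this split, the x_2 subproblem involves only the proximal operator of g, even when g(Ax) itself has no closed-form prox.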
* Corresponding author.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Table 1: Convergence rates of ADMM type methods solving Problem (1).

  Type        Algorithm           Convergence Rate
  Batch       ADMM [13]           tight non-ergodic O(1/√K)
  Batch       LADM-NE [15]        optimal non-ergodic O(1/K)
  Stochastic  STOC-ADMM [1]       ergodic O(1/√K)
  Stochastic  OPG-ADMM [16]       ergodic O(1/√K)
  Stochastic  OPT-ADMM [2]        ergodic O(1/√K)
  Stochastic  SDCA-ADMM [17]      unknown
  Stochastic  SAG-ADMM [3]        tight non-ergodic O(1/√K)
  Stochastic  SVRG-ADMM [4]       tight non-ergodic O(1/√K)
  Stochastic  ACC-SADMM (ours)    optimal non-ergodic O(1/K)
Alternating Direction Method of Multipliers (ADMM) is a very popular optimization method for solving Problem (1), with its advantages in speed, easy implementation and good scalability shown in a large body of literature (see the survey [10]). A popular criterion for an algorithm's convergence rate is its ergodic convergence, and it is proved in [11, 12] that ADMM converges with an O(1/K) ergodic rate.
However, in this paper, it is noteworthy that we consider convergence in the non-ergodic sense. The reasons are twofold: 1) in real applications, the output of ADMM methods is the non-ergodic result x_K, rather than the ergodic one (a convex combination of x_1, x_2, ..., x_K), as the non-ergodic results are much faster (see detailed discussions in Section 5.3); 2) the ergodic convergence rate is not trivially the same as the general-case rate. For the sequence {a_k} = {1, −1, 1, −1, ...} (a_k is 1 when k is odd, and −1 when k is even), the sequence itself is divergent, while in the ergodic sense it converges in O(1/K). So analysis in the non-ergodic sense is closer to reality. Point 2) is especially pertinent to ADMM methods. In [13], Davis et al. prove that Douglas-Rachford (DR) splitting converges in non-ergodic O(1/√K). They also construct a family of functions showing that non-ergodic O(1/√K) is tight. Chen et al. establish O(1/√K) for Linearized ADMM [14]. Then Li et al. accelerate ADMM through Nesterov's extrapolation and obtain a non-ergodic O(1/K) convergence rate [15]. They also prove that the lower complexity bound of ADMM type methods for separable linearly constrained nonsmooth convex problems is exactly O(1/K), which demonstrates that their algorithm is optimal. The convergence rates for different ADMM based algorithms are shown in Table 1.
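The oscillating sequence mentioned above can be checked numerically; this is a toy illustration of ours, not part of the paper:

```python
def ergodic_means(K):
    """Running (ergodic) means of a_k = 1, -1, 1, -1, ...:
    the sequence itself never converges, but the running mean
    shrinks like O(1/K)."""
    s = 0.0
    means = []
    for k in range(1, K + 1):
        s += 1.0 if k % 2 == 1 else -1.0
        means.append(s / k)
    return means
```

For even K the partial sum is exactly zero, and for odd K the ergodic mean is 1/K, so the ergodic average converges at rate O(1/K) although {a_k} diverges.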
On the other hand, to meet the demands of solving large-scale machine learning problems, stochastic algorithms [18] have drawn a lot of interest in recent years. For stochastic ADMM (SADMM), the prior works are STOC-ADMM [1] and OPG-ADMM [16]. Due to the noise of the gradient, both algorithms can only achieve an ergodic O(1/√K) convergence rate. There are two lines of research to accelerate SADMM. The first is to introduce Variance Reduction (VR) [19, 20, 21] techniques into SADMM. VR methods ensure the descent direction to have a bounded variance and so can achieve faster convergence rates. The existing VR based SADMM algorithms include SDCA-ADMM [17], SAG-ADMM [3] and SVRG-ADMM [4]. SAG-ADMM and SVRG-ADMM can provably achieve ergodic O(1/K) rates for Problem (1). The second way to accelerate SADMM is through Nesterov's acceleration [22]. This work is from [2], in which the authors propose an ergodic O(R²/K² + (D_y + ρ)/K + σ/√K) stochastic algorithm (OPT-ADMM). The dependence of the convergence rate on the smoothness constant is O(1/K²), and so each term in the convergence rate seems to have been improved to optimal. However, the worst-case convergence rate is still O(1/√K).
In this paper, we propose Accelerated Stochastic ADMM (ACC-SADMM) for large scale general convex finite-sum problems with linear constraints. By elaborately integrating Nesterov's extrapolation and VR techniques, ACC-SADMM provably achieves a non-ergodic O(1/K) convergence rate, which is optimal for non-smooth problems. Since in the non-ergodic sense the VR based SADMM methods (e.g. SVRG-ADMM, SAG-ADMM) converge in a tight O(1/√K) (please see detailed discussions in Section 5.3), ACC-SADMM improves the convergence rate from O(1/√K) to O(1/K) in the non-ergodic sense and fills the theoretical gap between stochastic and batch (deterministic) ADMM. The original idea behind the design of ACC-SADMM is to explicitly consider the snapshot vector x̃ (approximately the mean value of x in the last epoch) in the extrapolation terms. This is, to some degree, inspired by [23], which proposes an O(1/K²) stochastic gradient algorithm named Katyusha for convex
Table 2: Notations and Variables

  Notation              Meaning
  ⟨x, y⟩_G, ‖x‖_G       xᵀGy, √(xᵀGx)
  F_i(x_i)              h_i(x_i) + f_i(x_i)
  x                     (x_1, x_2)
  y                     (y_1, y_2)
  F(x)                  F_1(x_1) + F_2(x_2)

  Variable                     Meaning
  y_{s,1}^k, y_{s,2}^k         extrapolation variables
  x_{s,1}^k, x_{s,2}^k         primal variables
  λ̃_s^k, λ_s^k                 dual and temp variables
  x̃_{s,1}, x̃_{s,2}, b̃_s        snapshot vectors
  x_1*, x_2*, λ*               optimal solution of Eq. (1)
problems. However, there are many distinctions between the two algorithms (please see detailed discussions in Section 5.1). Our method is also very efficient in practice, since we have carefully accounted for the noise of the gradient in our acceleration scheme. For example, we adopt the extrapolation y_s^k = x_s^k + (1 − θ_{1,s} − θ_2)(x_s^k − x_s^{k−1}) in the inner loop, where θ_2 is a constant and θ_{1,s} decreases after every epoch, instead of directly adopting the extrapolation y^{k+1} = x^k + (θ_1^k(1 − θ_1^{k−1})/θ_1^{k−1})(x^k − x^{k−1}) of the original Nesterov scheme and adding a proximal term ‖x − x^k‖²/k^{3/2} as [2] does. There are also variants in the updating of the multiplier and the snapshot vector. We list the contributions of our work as follows:
• We propose ACC-SADMM for large scale convex finite-sum problems with linear constraints, which integrates Nesterov's extrapolation and VR techniques. We prove that our algorithm converges in non-ergodic O(1/K), which is optimal for separable linearly constrained non-smooth convex problems. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems.

• We do experiments on four benchmark datasets to demonstrate the superiority of our algorithm. We also do experiments on the Multitask Learning [6] problem to demonstrate that our algorithm can be used on very large datasets.
2 Preliminary

Most SADMM methods alternately minimize the following variant surrogate of the augmented Lagrangian:

  L'(x_1, x_2, λ, β) = h_1(x_1) + ⟨∇f_1(x_1), x_1⟩ + (L_1/2)‖x_1 − x_1^k‖²_{G_1}
      + h_2(x_2) + ⟨∇̃f_2(x_2), x_2⟩ + (L_2/2)‖x_2 − x_2^k‖²_{G_2}
      + (β/2)‖A_1 x_1 + A_2 x_2 − b + λ/β‖²,                               (2)
where ∇̃f_2(x_2) is an estimator of ∇f_2(x_2) from one or a mini-batch of training samples. So the computation cost for each iteration reduces from O(n) to O(b), where b is the mini-batch size. When f_i(x) = 0 and G_i = 0, with i = 1, 2, Problem (1) is solved as exact ADMM. When there is no h_i(x_i) and G_i is set as the identity matrix I, with i = 1, 2, the subproblem in x_i can be solved through matrix inversion. This scheme is advocated in many SADMM methods [1, 3]. Another common approach is linearization (also called the inexact Uzawa method) [24, 25], where G_i is set as η_i I − (β/L_i)A_iᵀA_i with η_i ≥ 1 + (β/L_i)‖A_iᵀA_i‖.
For STOC-ADMM [1], ∇̃f_2(x_2) is simply set as:

  ∇̃f_2(x_2) = (1/b) Σ_{i_k ∈ I_k} ∇f_{2,i_k}(x_2),                         (3)

where I_k is a mini-batch of size b drawn from {1, 2, ..., n}. For SVRG-ADMM [4], the gradient estimator can be written as:

  ∇̃f_2(x_2) = (1/b) Σ_{i_k ∈ I_k} ( ∇f_{2,i_k}(x_2) − ∇f_{2,i_k}(x̃_2) ) + ∇f_2(x̃_2),   (4)

where x̃_2 is a snapshot vector (the mean value of the last epoch).
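As a sketch (our own, using a hypothetical per-sample gradient oracle `grad_i` rather than any code from the paper), the SVRG-style estimator in Eq. (4) is:

```python
import numpy as np

def svrg_grad(grad_i, x, x_snap, full_grad_snap, batch):
    """Eq. (4): mini-batch average of grad_i(x) - grad_i(x_snap),
    recentered by the full gradient stored at the snapshot x_snap.
    The estimator is unbiased, and its variance vanishes as x -> x_snap."""
    g = np.zeros_like(x)
    for i in batch:
        g += grad_i(i, x) - grad_i(i, x_snap)
    return g / len(batch) + full_grad_snap
```

With the full batch the estimator recovers the exact gradient, and at the snapshot point it returns the stored full gradient exactly, which is why the gradient noise decays over an epoch.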
Algorithm 1 Inner loop of ACC-SADMM
  for k = 0 to m − 1 do
    Update dual variable: λ_s^k = λ̃_s^k + (βθ_2/θ_{1,s})(A_1 x_{s,1}^k + A_2 x_{s,2}^k − b̃_s).
    Update x_{s,1}^{k+1} through Eq. (6).
    Update x_{s,2}^{k+1} through Eq. (7).
    Update dual variable: λ̃_s^{k+1} = λ_s^k + β(A_1 x_{s,1}^{k+1} + A_2 x_{s,2}^{k+1} − b).
    Update y_s^{k+1} through Eq. (5).
  end for
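The dual update and the final extrapolation of each inner iteration can be sketched as follows; this is a simplified sketch of ours (variable names are not the authors'), with β and the θ's as in Algorithm 2:

```python
import numpy as np

def dual_update(lam_tilde, x1, x2, A1, A2, b, beta):
    """lambda_tilde^{k+1} = lambda_tilde^k + beta * (A1 x1 + A2 x2 - b)."""
    return lam_tilde + beta * (A1 @ x1 + A2 @ x2 - b)

def extrapolate(x_new, x_old, theta1_s, theta2):
    """Eq. (5): y^{k+1} = x^{k+1} + (1 - theta1_s - theta2)(x^{k+1} - x^k);
    the momentum weight is smaller than the usual 1 - theta1_s, which
    tempers the extrapolation against gradient noise."""
    return x_new + (1.0 - theta1_s - theta2) * (x_new - x_old)
```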
3 Our Algorithm

3.1 ACC-SADMM

To help readers understand our algorithm more easily, we list the notations and variables in Table 2. Our algorithm has double loops, as we use SVRG [19], which also has two layers of nested loops to estimate the gradient. We denote by the subscript s the index of the outer loop and by the superscript k the index in the inner loop. For example, x_{s,1}^k is the value of x_1 at the k-th step of the inner iteration and the s-th step of the outer iteration. And we use x_s^k and y_s^k to denote (x_{s,1}^k, x_{s,2}^k) and (y_{s,1}^k, y_{s,2}^k), respectively. In each inner loop, we update the primal variables x_{s,1}^k and x_{s,2}^k, the extrapolation terms y_{s,1}^k, y_{s,2}^k, and the dual variable λ_s^k, while s remains unchanged. In the outer loop, we maintain the snapshot vectors x̃_{s+1,1}, x̃_{s+1,2} and b̃_{s+1}, and then assign the initial value to the extrapolation terms y_{s+1,1}^0 and y_{s+1,2}^0. We directly linearize both the smooth term f_i(x_i) and the augmented term (β/2)‖A_1 x_1 + A_2 x_2 − b + λ/β‖². The whole algorithm is shown in Algorithm 2.

3.2 Inner Loop

The inner loop of ACC-SADMM is straightforward, as shown in Algorithm 1. In each iteration, we do extrapolation and then update the primal and dual variables. Two critical steps enable us to obtain a non-ergodic result. The first is the extrapolation:

  y_s^{k+1} = x_s^{k+1} + (1 − θ_{1,s} − θ_2)(x_s^{k+1} − x_s^k).            (5)

We can find that 1 − θ_{1,s} − θ_2 ≤ 1 − θ_{1,s}. So compared with the original Nesterov scheme, our extrapolation is more "mild" in tackling the noise of the gradient. The second critical step is the updating of the primal variables.
  x_{s,1}^{k+1} = argmin_{x_1} h_1(x_1) + ⟨∇f_1(y_{s,1}^k), x_1⟩
        + ⟨(β/θ_{1,s})(A_1 y_{s,1}^k + A_2 y_{s,2}^k − b) + λ_s^k, A_1 x_1⟩
        + (L_1/2 + β‖A_1ᵀA_1‖/(2θ_{1,s})) ‖x_1 − y_{s,1}^k‖².               (6)
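When h_1 is an ℓ_1 penalty, the x_1 subproblem of Eq. (6) reduces to a single soft-thresholding step; the following is a sketch of ours under that assumption (the argument `lin` stands for A_1ᵀ applied to the constraint/dual terms, and `L_eff` for L_1 + β‖A_1ᵀA_1‖/θ_{1,s}):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: elementwise shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def x1_update_l1(y1, grad_f1, lin, L_eff, lam):
    """Minimizes lam*||x||_1 + <grad_f1 + lin, x> + (L_eff/2)||x - y1||^2,
    which is the shape of Eq. (6) with h1 = lam*||.||_1: a proximal
    gradient step from y1 with stepsize 1/L_eff."""
    return soft_threshold(y1 - (grad_f1 + lin) / L_eff, lam / L_eff)
```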
And then update x_2 with the latest information of x_1, which can be written as:

  x_{s,2}^{k+1} = argmin_{x_2} h_2(x_2) + ⟨∇̃f_2(y_{s,2}^k), x_2⟩
        + ⟨(β/θ_{1,s})(A_1 x_{s,1}^{k+1} + A_2 y_{s,2}^k − b) + λ_s^k, A_2 x_2⟩
        + ((1 + 1/(bθ_2)) L_2 / 2 + β‖A_2ᵀA_2‖/(2θ_{1,s})) ‖x_2 − y_{s,2}^k‖²,   (7)

where ∇̃f_2(y_{s,2}^k) is obtained by the technique of SVRG [19] with the form:

  ∇̃f_2(y_{s,2}^k) = (1/b) Σ_{i_{k,s} ∈ I(k,s)} ( ∇f_{2,i_{k,s}}(y_{s,2}^k) − ∇f_{2,i_{k,s}}(x̃_{s,2}) ) + ∇f_2(x̃_{s,2}).
Compared with unaccelerated SADMM methods, which alternately minimize Eq. (2), our method differs in two ways. The first is that the gradient estimator is computed at y_{s,2}^k. The second is that we have chosen a slowly increasing penalty factor β/θ_{1,s}, instead of a fixed one.
Algorithm 2 ACC-SADMM
  Input: epoch length m > 2, β, τ = 2, c = 2, x_0^0 = 0, λ̃_0^0 = 0, x̃_0 = x_0^0, y_0^0 = x_0^0,
         θ_{1,s} = 1/(c + τs), θ_2 = (m − τ)/(τ(m − 1)).
  for s = 0 to S − 1 do
    Do inner loop, as stated in Algorithm 1.
    Set primal variables: x_{s+1}^0 = x_s^m.
    Update snapshot vector x̃_{s+1} through Eq. (8).
    Update dual variable: λ̃_{s+1}^0 = λ_s^{m−1} + β(1 − τ)(A_1 x_{s,1}^m + A_2 x_{s,2}^m − b).
    Update dual snapshot variable: b̃_{s+1} = A_1 x̃_{s+1,1} + A_2 x̃_{s+1,2}.
    Update extrapolation terms y_{s+1}^0 through Eq. (9).
  end for
  Output:
    x̂_S = (1/((m − 1)(θ_{1,S} + θ_2) + 1)) x_S^m
         + ((θ_{1,S} + θ_2)/((m − 1)(θ_{1,S} + θ_2) + 1)) Σ_{k=1}^{m−1} x_S^k.
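The parameter schedule of Algorithm 2 can be computed directly; the following is a small sketch of ours:

```python
def theta_schedule(m, num_epochs, c=2.0, tau=2.0):
    """theta_{1,s} = 1/(c + tau*s) decreases with the epoch index s,
    while theta_2 = (m - tau)/(tau*(m - 1)) stays constant, so the
    effective penalty beta/theta_{1,s} grows linearly with s."""
    theta2 = (m - tau) / (tau * (m - 1))
    theta1 = [1.0 / (c + tau * s) for s in range(num_epochs)]
    return theta1, theta2
```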
3.3 Outer Loop

The outer loop of our algorithm is a little more complex: we preserve snapshot vectors and then reset the initial values. The main variants we adopt concern the snapshot vector x̃_{s+1} and the extrapolation term y_{s+1}^0. For the snapshot vector x̃_{s+1}, we update it as:

  x̃_{s+1} = (1/m) [ (1 − (τ − 1)θ_{1,s+1}/θ_2) x_s^m
          + (1 + (τ − 1)θ_{1,s+1}/((m − 1)θ_2)) Σ_{k=1}^{m−1} x_s^k ].      (8)

x̃_{s+1} is not the average of {x_s^k}, different from most SVRG-based methods [19, 4]. This way of generating x̃ guarantees a faster convergence rate for the constraints. Then we reset y_{s+1}^0 as:

  y_{s+1}^0 = (1 − θ_2) x_s^m + θ_2 x̃_{s+1}
          + (θ_{1,s+1}/θ_{1,s}) ( (1 − θ_{1,s}) x_s^m − (1 − θ_{1,s} − θ_2) x_s^{m−1} − θ_2 x̃_s ).   (9)
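The snapshot of Eq. (8) is an affine combination of the epoch iterates whose weights always sum to one; a minimal sketch of ours:

```python
def snapshot(xs, theta1_next, theta2, tau=2.0):
    """Eq. (8): builds x_tilde_{s+1} from xs = [x_s^1, ..., x_s^m].
    The last iterate is down-weighted and the earlier ones up-weighted;
    since the weights sum to 1, a constant sequence is left unchanged."""
    m = len(xs)
    a = (tau - 1.0) * theta1_next / theta2
    w_last = 1.0 - a
    w_rest = 1.0 + a / (m - 1.0)
    return (w_last * xs[-1] + w_rest * sum(xs[:-1])) / m
```

Setting theta1_next = 0 recovers the plain epoch average used by standard SVRG-style methods.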
4 Convergence Analysis

In this section, we give the convergence results of ACC-SADMM. The proof and an outline can be found in the Supplementary Material. As mentioned in Section 3.2, the main strategy that enables us to obtain a non-ergodic result is the extrapolation of Eq. (5). We first analyze each inner iteration, shown in Lemma 1. We omit the subscript s, as s is unchanged within an inner iteration.

Lemma 1 Assume that f_1(x_1) and f_{2,i}(x_2) with i ∈ {1, 2, ..., n} are convex and have Lipschitz continuous gradients. L_1 is the Lipschitz constant of f_1(x_1), and L_2 is the Lipschitz constant of f_{2,i}(x_2) with i ∈ {1, 2, ..., n}. h_1(x_1) and h_2(x_2) are also convex. For Algorithm 2, in any epoch we have

  E_{i_k} L(x_1^{k+1}, x_2^{k+1}, λ*) − θ_2 L(x̃_1, x̃_2, λ*) − (1 − θ_2 − θ_1) L(x_1^k, x_2^k, λ*)
  ≤ (θ_1/(2β)) [ ‖λ̂^k − λ*‖² − E_{i_k} ‖λ̂^{k+1} − λ*‖² ]
    + (1/2) ‖y_1^k − (1 − θ_1 − θ_2) x_1^k − θ_2 x̃_1 − θ_1 x_1*‖²_{G_3}
    − (1/2) E_{i_k} ‖x_1^{k+1} − (1 − θ_1 − θ_2) x_1^k − θ_2 x̃_1 − θ_1 x_1*‖²_{G_3}
    + (1/2) ‖y_2^k − (1 − θ_1 − θ_2) x_2^k − θ_2 x̃_2 − θ_1 x_2*‖²_{G_4}
    − (1/2) E_{i_k} ‖x_2^{k+1} − (1 − θ_1 − θ_2) x_2^k − θ_2 x̃_2 − θ_1 x_2*‖²_{G_4},

where E_{i_k} denotes that the expectation is taken over the random samples in the mini-batch I_{k,s}, L(x_1, x_2, λ) = F_1(x_1) + F_2(x_2) + ⟨λ, A_1 x_1 + A_2 x_2 − b⟩, λ̂^k = λ̃^k + (β(1 − θ_1)/θ_1)(A x^k − b), G_3 = (L_1 + β‖A_1ᵀA_1‖/θ_1) I − (β/θ_1) A_1ᵀA_1, and G_4 = ((1 + 1/(bθ_2)) L_2 + β‖A_2ᵀA_2‖/θ_1) I.
Then Theorem 1 analyzes ACC-SADMM over the whole iteration; it is the key convergence result of the paper.
Theorem 1 If the conditions in Lemma 1 hold, then we have

  (β(m − 1)θ_2 / (2θ_{1,S})) E ‖ (m/((m − 1)θ_2))(A x̂_S − b) − (A x_0^0 − b) − (θ_{1,0}/β)(λ̃_0^0 − λ*) ‖²
  + (m/θ_{1,S}) E [ F(x̂_S) − F(x*) + ⟨λ*, A x̂_S − b⟩ ]                       (10)
  ≤ C_3 ( F(x_0^0) − F(x*) + ⟨λ*, A x_0^0 − b⟩ )
    + (1/(2β)) ‖ λ̃_0^0 + (β(1 − θ_{1,0})/θ_{1,0})(A x_0^0 − b) − λ* ‖²
    + (1/2) ‖ x_{0,1}^0 − x_1* ‖²_{(θ_{1,0} L_1 + β‖A_1ᵀA_1‖) I − β A_1ᵀA_1}
    + (1/2) ‖ x_{0,2}^0 − x_2* ‖²_{((1 + 1/(bθ_2)) θ_{1,0} L_2 + β‖A_2ᵀA_2‖) I},

where C_3 = (1 − θ_{1,0} + (m − 1)θ_2)/θ_{1,0}.
Corollary 1 directly demonstrates that ACC-SADMM has a non-ergodic O(1/K) convergence rate.

Corollary 1 If the conditions in Lemma 1 hold, we have

  E |F(x̂_S) − F(x*)| ≤ O(1/S),   E ‖A x̂_S − b‖ ≤ O(1/S).                   (11)

We can find that x̂_S depends on the latest m iterates x_S^k. So our convergence result is in the non-ergodic sense, while the analysis for SVRG-ADMM [4] and SAG-ADMM [3] is in the ergodic sense, since they consider the point x̂_S = (1/(mS)) Σ_{s=1}^S Σ_{k=1}^m x_s^k, which is a convex combination of the x_s^k over all iterations.
Now we directly use the theoretical results of [15] to demonstrate that our algorithm is optimal when there exists a non-smooth term in the objective function.

Theorem 2 For the following problem:

  min_{x_1, x_2} F_1(x_1) + F_2(x_2),  s.t.  x_1 − x_2 = 0,                 (12)

let the ADMM type algorithm to solve it be:

  • Generate λ_2^k and y_2^k in any way,
  • x_1^{k+1} = Prox_{F_1/β^k}( y_2^k − λ_2^k/β^k ),
  • Generate λ_1^{k+1} and y_1^{k+1} in any way,
  • x_2^{k+1} = Prox_{F_2/β^k}( y_1^{k+1} − λ_1^{k+1}/β^k ).

Then there exist convex functions F_1 and F_2 defined on X = {x ∈ R^{6k+5} : ‖x‖ ≤ B} for the above general ADMM method, satisfying

  L ‖x̂_2^k − x̂_1^k‖ + |F_1(x̂_1^k) − F_1(x_1*) + F_2(x̂_2^k) − F_2(x_2*)| ≥ LB/(8(k + 1)),   (13)

where x̂_1^k = Σ_{i=1}^k α_1^i x_1^i and x̂_2^k = Σ_{i=1}^k α_2^i x_2^i for any α_1^i and α_2^i with i from 1 to k.

Theorem 2 is Theorem 11 in [15]; more details can be found there. Problem (12) is a special case of Problem (1), as we can set each F_{2,i}(x_2) = F(x_2) with i = 1, ..., n or set n = 1. So no ADMM type algorithm converges faster than O(1/K) for Problem (1).
5 Discussions

We discuss some properties of ACC-SADMM and make further comparisons with some related methods.
Table 3: Size of datasets and mini-batch size we adopt in the experiments

  Problem    Dataset    # training   # testing   # dimension × # class   # minibatch
  Lasso      a9a        72,876       72,875      74 × 2                  100
  Lasso      covertype  290,506      290,506     54 × 2                  500
  Lasso      mnist      60,000       10,000      784 × 10                2,000
  Lasso      dna        2,400,000    600,000     800 × 2
  Multitask  ImageNet   1,281,167    50,000      4,096 × 1,000

5.1 Comparison with Katyusha
As we have mentioned in the Introduction, some intuitions of our algorithm are inspired by Katyusha [23], which obtains an O(1/K²) algorithm for convex finite-sum problems. However, Katyusha cannot solve problems with linear constraints. Besides, Katyusha uses the Nesterov's second scheme to accelerate the algorithm, while our method conducts acceleration through Nesterov's extrapolation (Nesterov's first scheme). And our proof uses the technique of [26], which is different from [23]. Our algorithm can easily be extended to unconstrained convex finite-sum problems, where it also obtains an O(1/K²) rate but belongs to the Nesterov's first scheme².
5.2 The Growth of the Penalty Factor

The penalty factor β/θ_{1,s} increases linearly with the iterations. One might deem that this makes our algorithm impractical, because after dozens of epochs the large value of the penalty factor might slow down the decrement of the function value. However, we have not found any bad influence. There may be two reasons: 1. In our algorithm, θ_{1,s} decreases after each epoch (m iterations), which is much slower than in LADM-NE [15]. So the growth of the penalty factor works as a continuation technique [28], which may help to decrease the function value. 2. From Theorem 1, our algorithm converges in O(1/S) even when β/θ_{1,s} is large. So from the theoretical viewpoint, a large penalty factor cannot slow down our algorithm. We find that OPT-ADMM [2] also needs to decrease its step size with the iterations; however, its step size decreasing rate is O(k^{3/2}), which is faster than ours.
5.3 The Importance of Non-ergodic O(1/K)

SAG-ADMM [3] and SVRG-ADMM [4] accelerate SADMM to ergodic O(1/K). In Theorem 9 of [15], the authors generate a class of functions showing that the original ADMM has a tight non-ergodic O(1/√K) convergence rate. When n = 1, SAG-ADMM and SVRG-ADMM are the same as batch ADMM, so their convergence rates are no better than O(1/√K). So in the non-ergodic sense, our algorithm does have a faster convergence rate than VR based SADMM methods.

We now highlight the importance of our non-ergodic result. As mentioned in the Introduction, in practice the output of ADMM methods is the non-ergodic result x_K, not the mean of x_1 to x_K. For deterministic ADMM, the proof of the ergodic O(1/K) rate was proposed in [11], after ADMM had already become a prevailing method for solving machine learning problems [29]; for stochastic ADMM, e.g. SVRG-ADMM [4], the authors give an ergodic O(1/K) proof, but in the experiments, what they emphasize is using the mean value of the last epoch as the result. As the non-ergodic results are closer to reality, our algorithm is much faster than VR based SADMM methods, even when its rate looks the same. Actually, though VR based SADMM methods have provably faster rates than STOC-ADMM, the improvement in practice is evident only after a number of iterations, when the iterates are close to the convergence point, rather than at the early stage. In both [3] and [4], the authors claim that SAG-ADMM and SVRG-ADMM are sensitive to initial points. We also find that if the step sizes are set based on their theoretical guidance, sometimes they are even slower than STOC-ADMM (see Fig. 1), as the early stage lasts longer when the step size is small. Our algorithm is faster than both algorithms, which demonstrates that Nesterov's extrapolation has truly accelerated the speed and that the integration of extrapolation and VR techniques is harmonious and complementary.

² We follow [26] in naming the extrapolation scheme the Nesterov's first scheme and the three-step scheme [27] the Nesterov's second scheme.
[Figure 1 appears here: objective gap and test loss versus the number of effective passes on a9a, covertype, mnist and dna, for the original Lasso (top row) and Graph-Guided Fused Lasso (bottom row); the compared methods are STOC-ADMM, STOC-ADMM-ERG, OPT-ADMM, SVRG-ADMM, SVRG-ADMM-ERG, SAG-ADMM, SAG-ADMM-ERG and ACC-SADMM.]
Figure 1: Experimental results of solving the original Lasso (Top) and Graph-Guided Fused Lasso (Bottom). The computation time includes the cost of calculating full gradients for SVRG based methods. SVRG-ADMM and SAG-ADMM are initialized by running STOC-ADMM for 3n/b iterations. "-ERG" represents the ergodic results for the corresponding algorithms.

6 Experiments
Lin, Zhouchen, Liu, Risheng, and Li, Huan. Linearized
We conduct
to show
the O(1/n)
effectiveness
method3 .direction
We compare
ourwith
method
withsplitting
the
He, Bingsheng
and experiments
Yuan, Xiaoming.
On the
con- of our
alternating
method
parallel
and
following
the-state-of-the-art
SADMM
algorithms:
(1)
STOC-ADMM
[1],separable
(2) SVRG-ADMM
[4], in mavergence rate of the douglas?rachford alternating direcadaptive penalty for
convex programs
OPT-SADMM
[2], (4)
[3]. We
SDCA-ADMM
in our Learning,
comparison99(2):287?325,
since
tion (3)
method.
SIAM Journal
on SAG-ADMM
Numerical Analysis,
50ignore chine
learning. [17]
Machine
it
gives
no
analysis
on
general
convex
problems
and
it
is
also
not
faster
than
SVRG-ADMM
[4].
(2):700?709, 2012.
2015b.
Experiments are performed on Intel(R) CPU i7-4770 @ 3.40GHz machine with 16 GB memory. Our
Hien, Le
Due to limited space, the experiment on Multitask Learning is shown in the Supplementary Materials. For the Lasso problems, our experiments focus on two typical variations [4]. The first is the original Lasso problem; the second is the Graph-Guided Fused Lasso model:

$$\min_x \;\; \lambda \|Ax\|_1 + \frac{1}{n}\sum_{i=1}^{n} l_i(x),$$

where $l_i(x)$ is the logistic loss on sample $i$, and $A = [G; I]$ is a matrix encoding the feature sparsity pattern. $G$ is the sparsity pattern of the graph obtained by sparse inverse covariance estimation [30].

The experiments are performed on four benchmark data sets: a9a, covertype, mnist and dna. The details of the datasets and the mini-batch sizes that we use for all SADMM methods are shown in Table 3. Like [3] and [4], we fix $\lambda = 10^{-5}$ and report the performance based on $(x_t, Ax_t)$ to satisfy the constraints of ADMM. Results are averaged over five repetitions, and we set $m = 2n$ for all the algorithms. For the original Lasso problem, the step sizes are set through the theoretical guidance for each algorithm. For the Graph-Guided Fused Lasso, the best step sizes are obtained through searches over the parameters that give the best convergence progress. Except for ACC-SADMM, we use the continuation technique [28] to accelerate the algorithms. SAG-ADMM is performed only on the first three datasets due to its large memory requirement.

The experimental results are shown in Fig. 1. We find that our algorithm consistently outperforms the other compared methods on all these datasets for both problems, which verifies our theoretical analysis. The details of the parameter settings, the experimental results where we set a larger fixed step size for the Graph-Guided Fused Lasso problem, the curves of the test error, the memory costs of all algorithms, and the Multitask Learning experiment are shown in the Supplementary Materials.

Footnote: The code will be available at http://www.cis.pku.edu.cn/faculty/vision/zlin/zlin.htm.

Footnote: a9a, covertype and dna are from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, and mnist is from http://yann.lecun.com/exdb/mnist/.
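The Graph-Guided Fused Lasso objective is straightforward to evaluate directly. The following minimal sketch (hypothetical toy data and graph, not the paper's experimental setup) illustrates the $\lambda\|Ax\|_1 + \frac{1}{n}\sum_i l_i(x)$ structure with $A = [G; I]$:

```python
import numpy as np

def logistic_loss(x, a_i, b_i):
    """Logistic loss l_i(x) = log(1 + exp(-b_i * <a_i, x>)) for one sample."""
    return np.log1p(np.exp(-b_i * a_i.dot(x)))

def graph_guided_fused_lasso_objective(x, A, feats, labels, lam):
    """lambda * ||A x||_1 + (1/n) * sum_i l_i(x), with A = [G; I]."""
    n = feats.shape[0]
    data_term = sum(logistic_loss(x, feats[i], labels[i]) for i in range(n)) / n
    return lam * np.abs(A.dot(x)).sum() + data_term

# Toy illustration: a hypothetical 2-feature problem with one graph edge.
G = np.array([[1.0, -1.0]])          # sparsity pattern of a single graph edge
A = np.vstack([G, np.eye(2)])        # A = [G; I] as in the model above
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([1.0, -1.0])
x = np.zeros(2)
obj = graph_guided_fused_lasso_objective(x, A, feats, labels, lam=1e-5)
```

At x = 0 the regularizer vanishes and each logistic loss equals log 2, so the objective reduces to log 2; this kind of spot check is a cheap way to validate an objective implementation before handing it to any of the SADMM solvers.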
7 Conclusion
We propose ACC-SADMM for general convex finite-sum problems. ACC-SADMM integrates Nesterov's extrapolation and variance reduction (VR) techniques and achieves a non-ergodic O(1/K) convergence rate, which is of both theoretical and practical importance. Our experiments demonstrate that the algorithm is faster than other SADMM methods.
Acknowledgment
Zhouchen Lin is supported by the National Basic Research Program of China (973 Program) (grant no. 2015CB352502) and the National Natural Science Foundation (NSF) of China (grant nos. 61625301, 61731018, and 61231002).
References
[1] Hua Ouyang, Niao He, Long Tran, and Alexander G Gray. Stochastic alternating direction method of multipliers. Proc. Int'l. Conf. on Machine Learning, 2013.
[2] Samaneh Azadi and Suvrit Sra. Towards an optimal stochastic alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[3] Wenliang Zhong and James Tin-Yau Kwok. Fast stochastic alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[4] Shuai Zheng and James T Kwok. Fast-and-light stochastic ADMM. In Proc. Int'l. Joint Conf. on Artificial Intelligence, 2016.
[5] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[6] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. Proc. Conf. Advances in Neural Information Processing Systems, 2007.
[7] Li Shen, Gang Sun, Zhouchen Lin, Qingming Huang, and Enhua Wu. Adaptive sharing for image classification. In Proc. Int'l. Joint Conf. on Artificial Intelligence, 2015.
[8] Seyoung Kim, Kyung-Ah Sohn, and Eric P Xing. A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics, 25(12):i204-i212, 2009.
[9] Kaiye Wang, Ran He, Liang Wang, Wei Wang, and Tieniu Tan. Joint feature selection and subspace learning for cross-modal retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence, 38(10):1-1, 2016.
[10] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[11] Bingsheng He and Xiaoming Yuan. On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50(2):700-709, 2012.
[12] Zhouchen Lin, Risheng Liu, and Huan Li. Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning. Machine Learning, 99(2):287-325, 2015.
[13] Damek Davis and Wotao Yin. Convergence rate analysis of several splitting schemes. In Splitting Methods in Communication, Imaging, Science, and Engineering, pages 115-163. 2016.
[14] Caihua Chen, Raymond H Chan, Shiqian Ma, and Junfeng Yang. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM Journal on Imaging Sciences, 8(4):2239-2267, 2015.
[15] Huan Li and Zhouchen Lin. Optimal nonergodic O(1/k) convergence rate: When linearized ADM meets Nesterov's extrapolation. arXiv preprint arXiv:1608.06366, 2016.
[16] Taiji Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In Proc. Int'l. Conf. on Machine Learning, 2013.
[17] Taiji Suzuki. Stochastic dual coordinate ascent with alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[18] Leon Bottou. Stochastic learning. In Advanced Lectures on Machine Learning, pages 146-168. 2004.
[19] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proc. Conf. Advances in Neural Information Processing Systems, 2013.
[20] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proc. Conf. Advances in Neural Information Processing Systems, 2014.
[21] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, pages 1-30, 2013.
[22] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). In Doklady AN SSSR, volume 269, pages 543-547, 1983.
[23] Zeyuan Allen-Zhu. Katyusha: The first truly accelerated stochastic gradient method. In Annual Symposium on the Theory of Computing, 2017.
[24] Zhouchen Lin, Risheng Liu, and Zhixun Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In Proc. Conf. Advances in Neural Information Processing Systems, 2011.
[25] Xiaoqun Zhang, Martin Burger, and Stanley Osher. A unified primal-dual algorithm framework based on Bregman iteration. Journal of Scientific Computing, 46:20-46, 2011.
[26] Paul Tseng. On accelerated proximal gradient methods for convex-concave optimization. Technical report, 2008.
[27] Yurii Nesterov. On an approach to the construction of optimal methods of minimization of smooth convex functions. Ekonomika i Matematicheskie Metody, 24(3):509-517, 1988.
[28] Wangmeng Zuo and Zhouchen Lin. A generalized accelerated proximal gradient approach for total variation-based image restoration. IEEE Trans. on Image Processing, 20(10):2748, 2011.
[29] Zhouchen Lin, Minming Chen, and Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[30] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
Stochastic Neural Networks
Qinliang Su
Xuejun Liao
Lawrence Carin
Department of Electrical and Computer Engineering
Duke University, Durham, NC, USA
{qs15, xjliao, lcarin}@duke.edu
Abstract
We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions. By setting the truncation points appropriately, we
are able to generate various types of nonlinearities within a unified framework,
including sigmoid, tanh and ReLU, the most commonly used nonlinearities in
neural networks. The framework readily integrates into existing stochastic neural
networks (with hidden units characterized as random variables), allowing one for
the first time to learn the nonlinearities alongside model weights in these networks.
Extensive experiments demonstrate the performance improvements brought about
by the proposed framework when integrated with the restricted Boltzmann machine
(RBM), temporal RBM and the truncated Gaussian graphical model (TGGM).
1 Introduction
A typical neural network is composed of nonlinear units connected by linear weights, and such a
network is known to have universal approximation ability under mild conditions about the nonlinearity
used at each unit [1, 2]. In previous work, the choice of nonlinearity has commonly been taken as a
part of network design rather than network learning, and the training algorithms for neural networks
have been mostly concerned with learning the linear weights. However, it is becoming increasingly
understood that the choice of nonlinearity plays an important role in model performance. For example,
[3] showed advantages of rectified linear units (ReLU) over sigmoidal units in using the restricted
Boltzmann machine (RBM) [4] to pre-train feedforward ReLU networks. It was further shown in [5]
that rectified linear units (ReLU) outperform sigmoidal units in a generative network under the same
undirected and bipartite structure as the RBM.
A number of recent works have reported benefits of learning nonlinear units along with the inter-unit
weights. These methods are based on using parameterized nonlinear functions to activate each unit
in a neural network, with the unit-dependent parameters incorporated into the data-driven training
algorithms. In particular, [6] considered the adaptive piecewise linear (APL) unit defined by a mixture
of hinge-shaped functions, and [7] used nonparametric Fourier basis expansion to construct the
activation function of each unit. The maxout network [8] employs piecewise linear convex (PLC)
units, where each PLC unit is obtained by max-pooling over multiple linear units. The PLC units were
extended to Lp units in [9] where the normalized Lp norm of multiple linear units yields the output
of an Lp unit. All these methods have been developed for learning the deterministic characteristics
of a unit, lacking a stochastic unit characterization. The deterministic nature limits these methods
from being easily applied to stochastic neural networks (for which the hidden units are random
variables, rather than being characterized by a deterministic function), such as Boltzmann machines
[10], restricted Boltzmann machines [11], and sigmoid belief networks (SBNs) [12].
We propose a probabilistic framework to unify the sigmoid, hyperbolic tangent (tanh) and ReLU
nonlinearities, most commonly used in neural networks. The proposed framework represents a
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
unit h probabilistically as p(h|z, ξ), where z is the total net contribution that h receives from other units, and ξ represents the learnable parameters. By taking the expectation of h, a deterministic characterization of the unit is obtained as $E(h|z,\xi) \triangleq \int h\, p(h|z,\xi)\, dh$. We show that the sigmoid, tanh and ReLU are well approximated by E(h|z, ξ) under appropriate settings of ξ. This is different from [13], in which nonlinearities were induced by additive noises of different variances, making model learning much more expensive and the produced nonlinearities less flexible. Additionally, more general nonlinearities may be constituted or learned, with these corresponding to distinct settings of ξ. A neural unit represented by the proposed framework is named a truncated Gaussian
(TruG) unit because the framework is built upon truncated Gaussian distributions. Because of the
inherent stochasticity, TruG is particularly useful in constructing stochastic neural networks.
The TruG generalizes the probabilistic ReLU in [14, 5] to a family of stochastic nonlinearities, with
which one can perform two tasks that could not be done previously: (i) One can interchangeably use
one nonlinearity in place of another under the same network structure, as long as they are both in
the TruG family; for example, the ReLU-based stochastic networks in [14, 5] can be extended to
new networks based on probabilistic tanh or sigmoid nonlinearities, and the respective algorithms in
[14, 5] can be employed to train the associated new models with little modification; (ii) Any stochastic
network constructed with the TruG can learn the nonlinearity alongside the network weights, by
maximizing the likelihood function of ξ given the training data. We can learn the nonlinearity at the
unit level, with each TruG unit having its own parameters; or we can learn the nonlinearity at the
model level, with the entire network sharing the same parameters for all its TruG units. The different
choices entail only minor changes in the update equation of ξ, as will be seen subsequently.
We integrate the TruG framework into three existing stochastic networks: the RBM, temporal RBM
[15] and feedforward TGGM [14], leading to three new models referred to as TruG-RBM, temporal
TruG-RBM and TruG-TGGM, respectively. These new models are evaluated against the original
models in extensive experiments to assess the performance gains brought about by the TruG. To
conserve space, all propositions in this paper are proven in the Supplementary Material.
2 TruG: A Probabilistic Framework for Nonlinearities in Neural Networks
For a unit h that receives net contribution z from other units, we propose to relate h to z through the
following stochastic nonlinearity,
$$p(h|z,\xi) = \frac{\mathcal{N}(h|z,\sigma^2)\, I(\xi_1 \le h \le \xi_2)}{\int_{\xi_1}^{\xi_2} \mathcal{N}(h'|z,\sigma^2)\, dh'} \triangleq \mathcal{N}_{[\xi_1,\xi_2]}(h|z,\sigma^2), \qquad (1)$$
where $I(\cdot)$ is an indicator function and $\mathcal{N}(\cdot|z,\sigma^2)$ is the probability density function (PDF) of a univariate Gaussian distribution with mean $z$ and variance $\sigma^2$; the shorthand notation $\mathcal{N}_{[\xi_1,\xi_2]}$ indicates the density $\mathcal{N}$ is truncated and renormalized such that it is nonzero only in the interval $[\xi_1, \xi_2]$; $\xi \triangleq \{\xi_1, \xi_2\}$ contains the truncation points and $\sigma^2$ is fixed.
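Since every conditional distribution in the models below reduces to univariate doubly truncated Gaussians of this form, drawing from $\mathcal{N}_{[\xi_1,\xi_2]}(h|z,\sigma^2)$ is a basic operation. A minimal inverse-CDF sketch (bisection-based and purely illustrative; practical implementations use the specialized samplers cited later in the paper):

```python
import math, random

def phi_cdf(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-40.0, hi=40.0):
    """Inverse standard normal CDF by bisection (sufficient for a sketch)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_trug(z, sigma, xi1, xi2, rng=random):
    """Draw h ~ N_[xi1,xi2](h | z, sigma^2) by inverse-CDF sampling."""
    a = phi_cdf((xi1 - z) / sigma)
    b = phi_cdf((xi2 - z) / sigma)
    u = a + (b - a) * rng.random()   # uniform on the truncated CDF range
    return z + sigma * phi_inv(u)

rng = random.Random(0)
samples = [sample_trug(z=0.5, sigma=0.2, xi1=0.0, xi2=1.0, rng=rng) for _ in range(1000)]
```

Every draw lands inside the truncation interval, and for a symmetric interval around z the sample mean stays near z, which is a convenient sanity check on any truncated-Gaussian sampler.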
The units of a stochastic neural network fall into two categories: visible units and hidden units [4].
The network represents a joint distribution over both hidden and visible units and the hidden units are
integrated out to yield the marginal distribution of visible units. With a hidden unit expressed in (1),
the expectation of h is given by
$$E(h|z,\xi) = z + \sigma\,\frac{\phi\big(\frac{\xi_1-z}{\sigma}\big) - \phi\big(\frac{\xi_2-z}{\sigma}\big)}{\Phi\big(\frac{\xi_2-z}{\sigma}\big) - \Phi\big(\frac{\xi_1-z}{\sigma}\big)}, \qquad (2)$$
where $\phi(\cdot)$ and $\Phi(\cdot)$ are, respectively, the PDF and cumulative distribution function (CDF) of the
standard normal distribution [16]. As will become clear below, a weighted sum of these expected
hidden units constitutes the net contribution received by each visible unit when the hidden units are
marginalized out. Therefore E(h|z, ξ) acts as a nonlinear activation function to map the incoming
contribution h receives to the outgoing contribution h sends out. The incoming contribution received
by h may be a random variable or a function of data such as $z = w^T x + b$; the former case is typically
for unsupervised learning and the latter case for supervised learning with x being the predictors.
By setting the truncation points to different values, we are able to realize many different kinds of
nonlinearities. We plot in Figure 1 three realizations of E(h|z, ξ) as a function of z, each with a particular setting of {ξ1, ξ2} and σ² = 0.2 in all cases. The plots of ReLU, tanh and sigmoid are
Figure 1: Illustration of different nonlinearities realized by the TruG with different truncation points. (a) ξ1 = 0 and ξ2 = +∞; (b) ξ1 = −1 and ξ2 = 1; (c) ξ1 = 0 and ξ2 = 1; (d) ξ1 = 0 and ξ2 = 4. (Each panel plots the TruG activation E(h|z, ξ) against z, compared with ReLU, tanh, or sigmoid.)
also shown as a comparison. It is seen from Figure 1 that, by choosing appropriate truncation points,
E(h|z, ξ) is able to approximate ReLU, tanh and sigmoid, the three types of nonlinearities most
widely used in neural networks. We can also realize other types of nonlinearities by setting the
truncation points to other values, as exemplified in Figure 1(d). The truncation points can be set
manually by hand, selected by cross-validation, or learned in the same way as the inter-unit weights.
In this paper, we focus on learning them alongside the weights based on training data.
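To see numerically how the truncation points shape the activation, the following sketch evaluates E(h|z, ξ) of Eq. (2), assuming σ² = 0.2 as in Figure 1 (the particular test points are illustrative, not from the paper):

```python
import math

def std_normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trug_mean(z, xi1, xi2, sigma=math.sqrt(0.2)):
    """E(h | z, xi) of Eq. (2) for the doubly truncated Gaussian unit."""
    a, b = (xi1 - z) / sigma, (xi2 - z) / sigma
    num = std_normal_pdf(a) - std_normal_pdf(b)
    den = std_normal_cdf(b) - std_normal_cdf(a)
    return z + sigma * num / den

# ReLU-like: xi1 = 0, xi2 very large -> roughly max(z, 0) away from the origin
relu_like = trug_mean(5.0, 0.0, 1e6)
# Sigmoid-like: xi1 = 0, xi2 = 1 -> output pinned to [0, 1] at both extremes
sig_lo = trug_mean(-3.0, 0.0, 1.0)
sig_hi = trug_mean(4.0, 0.0, 1.0)
```

For large positive z with ξ = [0, +∞) the expectation tracks z itself (the ReLU branch), while with ξ = [0, 1] the output saturates toward 0 and 1 at the two extremes, exactly the sigmoid-like behavior shown in Figure 1.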
The variance of h, given by [16],
$$\mathrm{Var}(h|z,\xi) = \sigma^2\left[1 + \frac{\frac{\xi_1-z}{\sigma}\,\phi\big(\frac{\xi_1-z}{\sigma}\big) - \frac{\xi_2-z}{\sigma}\,\phi\big(\frac{\xi_2-z}{\sigma}\big)}{\Phi\big(\frac{\xi_2-z}{\sigma}\big) - \Phi\big(\frac{\xi_1-z}{\sigma}\big)} - \left(\frac{\phi\big(\frac{\xi_1-z}{\sigma}\big) - \phi\big(\frac{\xi_2-z}{\sigma}\big)}{\Phi\big(\frac{\xi_2-z}{\sigma}\big) - \Phi\big(\frac{\xi_1-z}{\sigma}\big)}\right)^{2}\right], \qquad (3)$$
is employed in learning the truncation points and network weights. Direct evaluation of (2) and (3) is prone to the numerical issue of $\frac{0}{0}$, because both $\phi(z)$ and $\Phi(z)$ are so close to 0 when $z < -38$ that they are beyond the maximum accuracy a double-precision float can represent. We solve this problem by using the fact that (2) and (3) can be equivalently expressed in terms of the ratio $\frac{\phi(z)}{\Phi(z)}$, by dividing both the numerator and the denominator by $\Phi(\cdot)$. We make use of the following approximation for the ratio,
$$\frac{\phi(z)}{\Phi(z)} \approx \frac{\sqrt{z^2+4} - z}{2} \triangleq \tilde{\gamma}(z), \quad \text{for } z < -38, \qquad (4)$$
the accuracy of which is established in Proposition 1.

Proposition 1. The relative error is bounded by $\left|\frac{\phi(z)}{\Phi(z)\,\tilde{\gamma}(z)} - 1\right| < \frac{2\sqrt{z^2+4} - 2z}{\sqrt{z^2+8} - 3z} - 1$; moreover, for all $z < -38$, the relative error is guaranteed to be smaller than $4.8 \times 10^{-7}$, that is, $\left|\frac{\phi(z)}{\Phi(z)\,\tilde{\gamma}(z)} - 1\right| < 4.8 \times 10^{-7}$ for all $z < -38$.
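A sketch of the resulting numerically stable evaluation, switching to the approximation of Eq. (4) in the deep lower tail (the −38 threshold is the paper's; the sanity checks below use the classical Mills-ratio bounds, which are not from the paper):

```python
import math

def ratio_approx(z):
    """gamma_tilde(z) = (sqrt(z^2 + 4) - z) / 2, the approximation of Eq. (4)."""
    return (math.sqrt(z * z + 4.0) - z) / 2.0

def ratio_exact(z):
    """phi(z)/Phi(z) computed directly; valid while Phi(z) does not underflow."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * math.erfc(-z / math.sqrt(2.0))
    return pdf / cdf

def stable_ratio(z):
    """Exact formula for moderate z; Eq. (4) below z = -38 where doubles fail."""
    return ratio_approx(z) if z < -38.0 else ratio_exact(z)
```

For z < 0 the true ratio is known to lie strictly between −z and −z − 1/z, and the approximation of Eq. (4) satisfies the same sandwich identically, which gives a cheap correctness check without needing extended precision.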
3 RBM with TruG Nonlinearity
We generalize the ReLU-based RBM in [5] by using the TruG nonlinearity. The resulting TruG-RBM
is defined by the following joint distribution over visible units x and hidden units h,
$$p(x,h) = \frac{1}{Z}\, e^{-E(x,h)}\, I\big(x \in \{0,1\}^n,\; \xi_1 \le h \le \xi_2\big), \qquad (5)$$
where $E(x,h) \triangleq \frac{1}{2} h^T \mathrm{diag}(d)\, h - x^T W h - b^T x - c^T h$ is an energy function and $Z$ is the
normalization constant. Proposition 2 shows (5) is a valid probability distribution.
Proposition 2. The distribution p(x, h) defined in (5) is normalizable.
By (5), the conditional distribution of $x$ given $h$ is still Bernoulli, with $p(x_i = 1|h) = \sigma([Wh + b]_i)$ for $i = 1, \ldots, n$, while the conditional $p(h|x)$ is a truncated normal distribution, i.e.,
$$p(h|x) = \prod_{j=1}^{m} \mathcal{N}_{[\xi_1,\xi_2]}\left(h_j \,\middle|\, \frac{1}{d_j}[W^T x + c]_j,\; \frac{1}{d_j}\right). \qquad (6)$$
By setting ?1 and ?2 to different values, we are able to produce different nonlinearities in (6).
We train a TruG-RBM by maximizing the log-likelihood function $\ell(\Theta,\xi) \triangleq \sum_{x \in \mathcal{X}} \ln p(x;\Theta,\xi)$, where $\Theta \triangleq \{W, b, c\}$ denotes the network weights, $p(x;\Theta,\xi) \triangleq \int_{\xi_1}^{\xi_2} p(x,h)\, dh$ is contributed by a single data point $x$, and $\mathcal{X}$ is the training dataset.
3.1 The Gradient w.r.t. Network Weights
The gradient w.r.t. $\Theta$ is known to be
$$\frac{\partial \ln p(x)}{\partial \Theta} = E\left[\frac{\partial E(x,h)}{\partial \Theta}\right] - E\left[\frac{\partial E(x,h)}{\partial \Theta} \,\middle|\, x\right],$$
where $E[\cdot]$ and $E[\cdot|x]$ denote the expectations w.r.t. $p(x,h)$ and $p(h|x)$, respectively. If we estimate the gradient using a standard sampling-based method, the variance associated with the estimate is usually very large. To reduce the variance, we follow the traditional RBM in applying contrastive divergence (CD) to estimate the gradient [4]. Specifically, we approximate the gradient as
$$\frac{\partial \ln p(x)}{\partial \Theta} \approx E\left[\frac{\partial E(x,h)}{\partial \Theta} \,\middle|\, x^{(k)}\right] - E\left[\frac{\partial E(x,h)}{\partial \Theta} \,\middle|\, x\right], \qquad (7)$$
where $x^{(k)}$ is the $k$-th sample of the Gibbs sampler $p(h^{(1)}|x^{(0)}),\, p(x^{(1)}|h^{(1)}),\, \cdots,\, p(x^{(k)}|h^{(k)})$, with $x^{(0)}$ being the data $x$. As shown in (6), $p(x|h)$ and $p(h|x)$ are factorized Bernoulli and univariate truncated normal distributions, for which efficient sampling algorithms exist [17, 18].
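As a concrete illustration of this alternating chain, the following minimal sketch (with made-up toy weights, not the authors' implementation) runs CD-style Gibbs steps for a tiny TruG-RBM, drawing h from the univariate truncated normals of Eq. (6) by inverse-CDF bisection and x from the Bernoulli conditionals:

```python
import math, random

def norm_cdf(x):
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def sample_truncnorm(mu, var, xi1, xi2, rng):
    """Inverse-CDF draw from N_[xi1,xi2](mu, var) by bisection (sketch only)."""
    s = math.sqrt(var)
    a, b = norm_cdf((xi1 - mu) / s), norm_cdf((xi2 - mu) / s)
    u = a + (b - a) * rng.random()
    lo, hi = xi1, xi2
    for _ in range(60):  # invert the Gaussian CDF on the interval [xi1, xi2]
        mid = 0.5 * (lo + hi)
        if norm_cdf((mid - mu) / s) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cd_gibbs_step(x, W, b, c, d, xi1, xi2, rng):
    """One sweep: h ~ p(h|x) from Eq. (6), then x ~ p(x|h) (Bernoulli units)."""
    n, m = len(x), len(c)
    h = []
    for j in range(m):
        mu = (sum(W[i][j] * x[i] for i in range(n)) + c[j]) / d[j]
        h.append(sample_truncnorm(mu, 1.0 / d[j], xi1, xi2, rng))
    x_new = []
    for i in range(n):
        act = sum(W[i][j] * h[j] for j in range(m)) + b[i]
        p = 1.0 / (1.0 + math.exp(-act))
        x_new.append(1.0 if rng.random() < p else 0.0)
    return x_new, h

rng = random.Random(1)
W = [[0.1, -0.2], [0.3, 0.0]]; b = [0.0, 0.0]; c = [0.0, 0.0]; d = [1.0, 1.0]
x = [1.0, 0.0]
for _ in range(3):  # CD-3: run the chain for k = 3 steps starting from the data
    x, h = cd_gibbs_step(x, W, b, c, d, xi1=0.0, xi2=1.0, rng=rng)
```

The final (x, h) pair plays the role of the sample $x^{(k)}$ in Eq. (7); the visible draws are binary and the hidden draws stay inside the truncation interval.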
Furthermore, we can obtain that $\frac{\partial E(x,h)}{\partial w_{ij}} = -x_i h_j$, $\frac{\partial E(x,h)}{\partial b_i} = -x_i$, $\frac{\partial E(x,h)}{\partial c_j} = -h_j$ and $\frac{\partial E(x,h)}{\partial d_j} = \frac{1}{2} h_j^2$. Thus estimation of the gradient with CD only requires $E[h_j|x^{(s)}]$ and $E[h_j^2|x^{(s)}]$, which can be calculated using (2) and (3). Using the estimated gradient, the weights can be updated with the stochastic gradient ascent algorithm or its variants.

3.2 The Gradient w.r.t. Truncation Points
The gradients w.r.t. $\xi_1$ and $\xi_2$ are given by
$$\frac{\partial \ln p(x)}{\partial \xi_1} = \sum_{j=1}^{m} \big(p(h_j = \xi_1) - p(h_j = \xi_1|x)\big), \qquad (8)$$
$$\frac{\partial \ln p(x)}{\partial \xi_2} = \sum_{j=1}^{m} \big(p(h_j = \xi_2|x) - p(h_j = \xi_2)\big), \qquad (9)$$
for a single data point, with the derivation provided in the Supplementary Material. It is known that $p(h_j = \nu|x) = \mathcal{N}_{[\xi_1,\xi_2]}\big(h_j = \nu \,\big|\, \frac{1}{d_j}[W^T x + c]_j,\, \frac{1}{d_j}\big)$, which can be easily calculated. However, if we calculate $p(h_j = \nu)$ directly, it would be computationally prohibitive. Fortunately, by noticing the identity $p(h_j = \nu) = \sum_x p(h_j = \nu|x)\, p(x)$, we are able to estimate it efficiently with CD as $p(h_j = \nu) \approx p(h_j = \nu|x^{(k)}) = \mathcal{N}_{[\xi_1,\xi_2]}\big(h_j = \nu \,\big|\, \frac{[W^T x^{(k)} + c]_j}{d_j},\, \frac{1}{d_j}\big)$, where $x^{(k)}$ is the $k$-th sample of the Gibbs sampler as described above. Therefore, the gradients w.r.t. the lower and upper truncation points can be estimated using the equations $\frac{\partial \ln p(x)}{\partial \xi_2} \approx \sum_{j=1}^{m}\big(p(h_j = \xi_2|x) - p(h_j = \xi_2|x^{(k)})\big)$ and $\frac{\partial \ln p(x)}{\partial \xi_1} \approx -\sum_{j=1}^{m}\big(p(h_j = \xi_1|x) - p(h_j = \xi_1|x^{(k)})\big)$. After obtaining the gradients, we can update the truncation points with stochastic gradient ascent methods.
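The CD estimate of the truncation-point gradient only needs the truncated-normal density evaluated at the boundary, once under the data and once under the chain sample. A minimal sketch with made-up weights (here `x_cd` stands in for the CD sample $x^{(k)}$; none of these numbers come from the paper):

```python
import math

def truncnorm_pdf_at(v, mu, var, xi1, xi2):
    """Density of N_[xi1,xi2](mu, var) evaluated at the point v."""
    s = math.sqrt(var)
    cdf = lambda x: 0.5 * math.erfc(-x / math.sqrt(2.0))
    pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    mass = cdf((xi2 - mu) / s) - cdf((xi1 - mu) / s)
    return pdf((v - mu) / s) / (s * mass)

def xi2_gradient_estimate(x_data, x_cd, W, c, d, xi1, xi2):
    """CD estimate: sum_j [ p(h_j = xi2 | x_data) - p(h_j = xi2 | x_cd) ]."""
    n, m = len(x_data), len(c)
    grad = 0.0
    for j in range(m):
        mu_data = (sum(W[i][j] * x_data[i] for i in range(n)) + c[j]) / d[j]
        mu_cd = (sum(W[i][j] * x_cd[i] for i in range(n)) + c[j]) / d[j]
        grad += truncnorm_pdf_at(xi2, mu_data, 1.0 / d[j], xi1, xi2) \
              - truncnorm_pdf_at(xi2, mu_cd, 1.0 / d[j], xi1, xi2)
    return grad

W = [[0.5], [0.5]]; c = [0.0]; d = [1.0]
g = xi2_gradient_estimate([1.0, 1.0], [0.0, 0.0], W, c, d, xi1=0.0, xi2=1.0)
```

In this toy case the data pushes the hidden mean toward the upper boundary while the chain sample does not, so the estimated gradient is positive, i.e., stochastic gradient ascent would widen the interval upward.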
It should be emphasized that in the derivation above, we assume a common truncation-point pair $\{\xi_1, \xi_2\}$ shared among all units, for clarity of presentation. The extension to separate truncation points for different units is straightforward, by simply replacing (8) and (9) with $\frac{\partial \ln p(x)}{\partial \xi_{2j}} = p(h_j = \xi_{2j}|x) - p(h_j = \xi_{2j})$ and $\frac{\partial \ln p(x)}{\partial \xi_{1j}} = p(h_j = \xi_{1j}) - p(h_j = \xi_{1j}|x)$, where $\xi_{1j}$ and $\xi_{2j}$ are the lower and upper truncation points of the $j$-th unit, respectively. For the models discussed subsequently, one can similarly obtain the gradients w.r.t. unit-dependent truncation points.
After training, due to the conditional independence between x and h and the existence of efficient sampling algorithms for the truncated normal, samples can be drawn efficiently from the TruG-RBM using the Gibbs sampler discussed below (7).
4 Temporal RBM with TruG Nonlinearity
We integrate the TruG framework into the temporal RBM (TRBM) [19] to learn the probabilistic
nonlinearity in sequential-data modeling. The resulting temporal TruG-RBM is defined by
$$p(X, H) = p(x_1, h_1) \prod_{t=2}^{T} p(x_t, h_t | x_{t-1}, h_{t-1}), \qquad (10)$$
where $p(x_1, h_1)$ and $p(x_t, h_t|x_{t-1}, h_{t-1})$ are both represented by TruG-RBMs; $x_t \in \mathbb{R}^n$ and $h_t \in \mathbb{R}^m$ are the visible and hidden variables at time step $t$, with $X \triangleq [x_1, x_2, \cdots, x_T]$ and $H \triangleq [h_1, h_2, \cdots, h_T]$. To be specific, the distribution $p(x_t, h_t|x_{t-1}, h_{t-1})$ is defined as $p(x_t, h_t|x_{t-1}, h_{t-1}) = \frac{1}{Z_t}\, e^{-E(x_t, h_t)}\, I(x_t \in \{0,1\}^n,\ \xi_1 \le h_t \le \xi_2)$, where the energy function takes the form
$$E(x_t, h_t) \triangleq \tfrac{1}{2}\Big[x_t^T \mathrm{diag}(a)\, x_t + h_t^T \mathrm{diag}(d)\, h_t - 2x_t^T W_1 h_t - 2c^T h_t - 2(W_2 x_{t-1})^T h_t - 2b^T x_t - 2(W_3 x_{t-1})^T x_t - 2(W_4 h_{t-1})^T h_t\Big],$$
and $Z_t$ is the corresponding normalization constant.
Similar to the TRBM, directly optimizing the log-likelihood is difficult. We instead optimize the
lower bound
$$\mathcal{L} \triangleq E_{q(H|X)}\big[\ln p(X, H; \Theta, \xi) - \ln q(H|X)\big], \qquad (11)$$
where q(H|X) is an approximating posterior distribution of H. The lower bound is equal to the
log-likelihood when q(H|X) is exactly the true posterior p(H|X). We follow [19] to choose the
following approximate posterior,
$$q(H|X) = p(h_1|x_1) \cdots p(h_T|x_{T-1}, h_{T-1}, x_T),$$
with which it can be shown that the gradient of the lower bound w.r.t. the network weights is given by
$$\frac{\partial \mathcal{L}}{\partial \Theta} = \sum_{t=1}^{T} E_{p(h_{t-1}|x_{t-2}, h_{t-2}, x_{t-1})}\left[ E_{p(x_t, h_t|x_{t-1}, h_{t-1})}\left[\frac{\partial E(x_t, h_t)}{\partial \Theta}\right] - E_{p(h_t|x_{t-1}, h_{t-1}, x_t)}\left[\frac{\partial E(x_t, h_t)}{\partial \Theta}\right] \right].$$
At any time step $t$, the outside expectation (which is over $h_{t-1}$) is approximated by sampling from $p(h_{t-1}|x_{t-2}, h_{t-2}, x_{t-1})$; given $h_{t-1}$ and $x_{t-1}$, one can represent $p(x_t, h_t|x_{t-1}, h_{t-1})$ as a TruG-RBM and therefore the two inside expectations can be computed in the same way as in Section 3. In particular, the variables in $h_t$ are conditionally independent given $(x_{t-1}, h_{t-1}, x_t)$, i.e., $p(h_t|x_{t-1}, h_{t-1}, x_t) = \prod_{j=1}^{m} p(h_{jt}|x_{t-1}, h_{t-1}, x_t)$, with each component equal to
$$p(h_{jt}|x_{t-1}, h_{t-1}, x_t) = \mathcal{N}_{[\xi_1,\xi_2]}\left(h_{jt} \,\middle|\, \frac{[W_1^T x_t + W_2 x_{t-1} + W_4 h_{t-1} + c]_j}{d_j},\; \frac{1}{d_j}\right). \qquad (12)$$
Similarly, the variables in $x_t$ are conditionally independent given $(x_{t-1}, h_{t-1}, h_t)$. As a result, $E_{p(h_t|x_{t-1}, h_{t-1}, x_t)}[\cdot]$ can be calculated in closed form using (2) and (3), and $E_{p(x_t, h_t|x_{t-1}, h_{t-1})}[\cdot]$ can be estimated using the CD algorithm, as in Section 3. The gradient of $\mathcal{L}$ w.r.t. the upper truncation point is
$$\frac{\partial \mathcal{L}}{\partial \xi_2} = E_{q(H|X)}\left[\sum_{t=1}^{T}\sum_{j=1}^{m} p(h_{jt} = \xi_2|x_{t-1}, h_{t-1}, x_t) - \sum_{t=1}^{T}\sum_{j=1}^{m} p(h_{jt} = \xi_2|x_{t-1}, h_{t-1})\right],$$
with $\frac{\partial \mathcal{L}}{\partial \xi_1}$ taking a similar form, where the expectations are calculated using the same approach as described above for $\frac{\partial \mathcal{L}}{\partial \Theta}$.
5 TGGM with TruG Nonlinearity
We generalize the feedforward TGGM model in [14] by replacing the probabilistic ReLU with the TruG. The resulting TruG-TGGM model is defined by the joint PDF over visible variables $y$ and hidden variables $h$,
$$p(y, h|x) = \mathcal{N}(y \,|\, W_1 h + b_1, \sigma^2 I)\, \mathcal{N}_{[\xi_1, \xi_2]}(h \,|\, W_0 x + b_0, \sigma^2 I), \qquad (13)$$
given the predictor variables $x$. After marginalizing out $h$, we get the expectation of $y$ as
$$\mathbb{E}[y|x] = W_1\, \mathbb{E}(h \,|\, W_0 x + b_0, \xi) + b_1, \qquad (14)$$
where $\mathbb{E}(h \,|\, W_0 x + b_0, \xi)$ is given element-wise in (2). It is then clear that the expectation of $y$ is related to $x$ through the TruG nonlinearity. Thus $\mathbb{E}[y|x]$ yields the same output as a three-layer perceptron that uses (2) to activate its hidden units. Hence, the TruG-TGGM model defined in (13) can be understood as a stochastic perceptron with the TruG nonlinearity. By choosing different values for the truncation points, we are able to realize different kinds of nonlinearities, including ReLU, sigmoid and tanh.
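To make this concrete, the sketch below evaluates the TruG nonlinearity, i.e. the mean of a Gaussian truncated to $[\xi_1, \xi_2]$ (the element-wise expression referred to as (2) in the text). The function names are ours; the formula is the standard truncated-normal mean.

```python
import math

def phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trug(mu, xi1, xi2, sigma=1.0):
    """Mean of N(mu, sigma^2) truncated to [xi1, xi2]: the TruG
    nonlinearity applied to the pre-activation mu."""
    a = (xi1 - mu) / sigma
    b = (xi2 - mu) / sigma
    Z = Phi(b) - Phi(a)  # probability mass kept by the truncation
    return mu + sigma * (phi(a) - phi(b)) / Z
```

With $\xi_1 = 0$ and a very large $\xi_2$ the curve tracks $\max(0, \mu)$ like a smoothed ReLU; with $[0, 1]$ it saturates at both ends like a shifted sigmoid; with $[-1, 1]$ it resembles tanh.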
To train the model by maximum likelihood estimation, we need to know the gradient of $\ln p(y|x) \triangleq \ln \int p(y, h|x; \Theta)\, dh$, where $\Theta \triangleq \{W_1, W_0, b_1, b_0\}$ represents the model parameters. By rewriting the joint PDF as $p(y, h|x) \propto e^{-E(y, h, x)}\, I(\xi_1 \le h \le \xi_2)$, the gradient is found to be given by
$$\frac{\partial \ln p(y|x)}{\partial \Theta} = \mathbb{E}\!\left[ \frac{\partial E(y, h, x)}{\partial \Theta} \,\Big|\, x \right] - \mathbb{E}\!\left[ \frac{\partial E(y, h, x)}{\partial \Theta} \,\Big|\, x, y \right], \quad \text{where } E(y, h, x) \triangleq \frac{\|y - W_1 h - b_1\|^2 + \|h - W_0 x - b_0\|^2}{2\sigma^2};$$
$\mathbb{E}[\cdot|x]$ is the expectation w.r.t. $p(y, h|x)$; and $\mathbb{E}[\cdot|x, y]$ is the expectation w.r.t. $p(h|x, y)$. From (13), we know $p(h|x) = \mathcal{N}_{[\xi_1, \xi_2]}(h \,|\, W_0 x + b_0, \sigma^2 I)$ can be factorized into a product of univariate truncated Gaussian PDFs. Thus the expectation $\mathbb{E}[h|x]$ can be computed using (2). However, the expectations $\mathbb{E}[h|x, y]$ and $\mathbb{E}[h h^T|x, y]$ involve a multivariate truncated Gaussian PDF and are expensive to calculate directly. Hence mean-field variational Bayesian analysis is used to compute the approximate expectations. The details are similar to those in [14] except that (2) and (3) are used to calculate the expectation and variance of $h$.
The gradients of the log-likelihood w.r.t. the truncation points $\xi_1$ and $\xi_2$ are given by
$$\frac{\partial \ln p(y|x)}{\partial \xi_2} = \sum_{j=1}^{K} \big( p(h_j = \xi_2 \,|\, y, x) - p(h_j = \xi_2 \,|\, x) \big) \quad \text{and} \quad \frac{\partial \ln p(y|x)}{\partial \xi_1} = -\sum_{j=1}^{K} \big( p(h_j = \xi_1 \,|\, y, x) - p(h_j = \xi_1 \,|\, x) \big)$$
for a single data point, with the derivation provided in the Supplementary Material. The probability $p(h_j = \xi_1 | x)$ can be computed directly since it is a univariate truncated Gaussian distribution. For $p(h_j = \xi_2 | y, x)$, we approximate it with the mean-field marginal distributions obtained above.
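Reading the boundary quantities $p(h_j = \xi | \cdot)$ as the density of a univariate truncated Gaussian evaluated at a truncation point (our interpretation, consistent with the univariate case being "computed directly"), a small sketch of that evaluation:

```python
import math

def boundary_density(xi, mu, sigma, xi1, xi2):
    """Density of the univariate truncated Gaussian N_[xi1,xi2](mu, sigma^2)
    evaluated at a point xi inside [xi1, xi2], e.g. a truncation point."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    Z = Phi((xi2 - mu) / sigma) - Phi((xi1 - mu) / sigma)  # truncation mass
    return phi((xi - mu) / sigma) / (sigma * Z)
```

The density is simply the untruncated Gaussian density renormalized by the mass kept inside $[\xi_1, \xi_2]$.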
Although TruG-TGGM involves random variables, thanks to the existence of a closed-form expression for the expectation of a univariate truncated normal, testing is still very easy. Given a predictor $\hat{x}$, the output can simply be predicted with the conditional expectation $\mathbb{E}[y|\hat{x}]$ in (14).
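As an illustration of (14), here is a minimal forward pass that predicts $\mathbb{E}[y|x]$ with plain Python lists. Unit variance ($\sigma = 1$) is assumed for simplicity, and all names are ours.

```python
import math

def trug_mean(mu, xi1, xi2):
    """Mean of N(mu, 1) truncated to [xi1, xi2] (unit variance assumed)."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a, b = xi1 - mu, xi2 - mu
    return mu + (phi(a) - phi(b)) / (Phi(b) - Phi(a))

def tggm_predict(x, W0, b0, W1, b1, xi1, xi2):
    """E[y|x] = W1 E[h] + b1 with h ~ N_[xi1,xi2](W0 x + b0, I), as in (14).
    Matrices are lists of rows; vectors are plain lists."""
    pre = [sum(w * xj for w, xj in zip(row, x)) + bj for row, bj in zip(W0, b0)]
    h = [trug_mean(m, xi1, xi2) for m in pre]  # TruG activation of each unit
    return [sum(w * hj for w, hj in zip(row, h)) + bj for row, bj in zip(W1, b1)]
```

This is exactly the "stochastic perceptron" view: a deterministic three-layer network whose hidden activation is the TruG mean.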
6 Experimental Results
We evaluate the performance benefit brought about by the TruG framework when integrated into the RBM, temporal RBM and TGGM. In each of the three cases, the evaluation is based on comparing the original network to the associated new network with the TruG nonlinearity. For the TruG, we either manually set $\{\xi_1, \xi_2\}$ to particular values, or learn them automatically from data. We consider both the case of learning a common $\{\xi_1, \xi_2\}$ shared for all hidden units and the case of learning a separate $\{\xi_1, \xi_2\}$ for each hidden unit.
Results of TruG-RBM  The binarized MNIST and Caltech101 Silhouettes are considered in this experiment. The MNIST contains 60,000 training and 10,000 testing images of hand-written digits, while Caltech101 Silhouettes includes 6364 training and 2307 testing images of objects' silhouettes. For both datasets, each image has 28 × 28 pixels [22]. Throughout this experiment, 500 hidden units are used. RMSprop is used to update the parameters, with the delay and mini-batch size set to 0.95 and 100, respectively.

Table 1: Averaged test log-probability on MNIST and Caltech101 Silhouettes. (*) Results reported in [20]; (†) Results reported in [21] using RMSprop as the optimizer.

Model       Trun. Points   MNIST     Caltech101
TruG-RBM    [0, 1]         -97.3     -127.9
TruG-RBM    [0, +∞)        -83.2     -105.2
TruG-RBM    [-1, 1]        -124.5    -141.5
TruG-RBM    c-Learn        -82.9     -104.6
TruG-RBM    s-Learn        -82.5     -104.3
RBM         -              -86.3*    -109.0†

The weight parameters are initialized with Gaussian noise of zero mean and 0.01 variance, while the lower and upper truncation points at all units are initialized to 0 and 1, respectively. The learning rates for weight parameters are fixed to $10^{-4}$. Since truncation points influence the whole network in a more fundamental way than weight parameters, it is observed
that smaller learning rates are often preferred for them. To balance the convergence speed and performance, we anneal their learning rates from $10^{-4}$ to $10^{-6}$ gradually.

Figure 2: (a) The learned nonlinearities in TruG models with shared upper truncation point $\xi_2$; the distribution of unit-level upper truncation points of TruG-RBM for (b) MNIST; (c) Caltech101 Silhouettes.

The evaluation is based on
the log-probability averaged over test data points, which is estimated using annealed importance sampling (AIS) [23] with $5 \times 10^5$ inverse temperatures equally spaced in [0, 1]; the reported test log-probability is averaged over 100 independent AIS runs.
To investigate the impact of truncation points, we first set the lower and upper truncation points to three fixed pairs: [0, 1], [0, +∞) and [-1, 1], which correspond to probabilistic approximations of the sigmoid, ReLU and tanh nonlinearities, respectively. From Table 1, we see that the ReLU-type TruG-RBM performs much better than the other two types of TruG-RBM. We also learn the truncation points from data automatically. We can see that the model benefits significantly from nonlinearity learning, and the best performance is achieved when the units learn their own nonlinearities. The learned common nonlinearities (c-Learn) for different datasets are plotted in Figure 2(a), which shows that the model always tends to choose a nonlinearity in between the sigmoid and ReLU functions. For the case with separate nonlinearities (s-Learn), the distributions of the upper truncation points in the TruG-RBMs for MNIST and Caltech101 Silhouettes are plotted in Figure 2(b) and (c), respectively. Note that due to the detrimental effect observed for negative truncation points, here the lower truncation points are fixed to zero and only the upper points are learned. To demonstrate the reliability of the AIS estimate, the convergence plots of estimated log-probabilities are provided in the Supplementary Material.
Results of Temporal TruG-RBM  The Bouncing Ball and CMU Motion Capture datasets are considered in the experiment with temporal models. Bouncing Ball consists of synthetic binary videos of 3 bouncing balls in a box, with 4000 videos for training and 200 for testing, and each video has 100 frames of size 30 × 30. CMU Motion Capture is composed of data samples describing the joint angles associated with different motion types. We follow [24] to train a model on 31 sequences and test the model on two testing sequences (one is running and the other is walking). Both the original TRBM and the TruG-TRBM use 400 hidden units for Bouncing Ball and 300 hidden units for CMU Motion Capture. Stochastic gradient descent (SGD) is used to update the parameters, with the momentum set to 0.9. The learning rates are set to $10^{-2}$ and $10^{-4}$ for the two datasets, respectively. The learning rate for truncation points is annealed gradually, as done in Section 6. Since calculating the log-probabilities for these temporal models is computationally prohibitive, prediction error is employed here as the performance evaluation criterion, which is widely used [24, 25] in temporal generative models. The performances averaged over 20 independent runs are reported here. Tables 2 and 3 confirm again that models benefit remarkably from nonlinearity learning, especially in the case of learning a separate nonlinearity for each hidden unit. It is noticed that, although the ReLU-type TruG-TRBM performs better than the tanh-type TruG-TRBM on Bouncing Ball, the former performs much worse than the latter on CMU Motion Capture. This demonstrates that a fixed nonlinearity cannot perform well on every dataset. However, by learning truncation points automatically, the TruG can adapt the nonlinearity to the data and thus performs the best on every dataset (up to the representational limit of the TruG framework). Video samples drawn from the trained models are provided in the Supplementary Material.
Table 2: Test prediction error on Bouncing Ball. (*) Taken from [24], in which 2500 hidden units are used.

Model        Trun. Points   Pred. Err.
TruG-TRBM    [0, 1]         6.38±0.51
TruG-TRBM    [0, +∞)        4.16±0.42
TruG-TRBM    [-1, 1]        6.01±0.52
TruG-TRBM    c-Learn        3.82±0.41
TruG-TRBM    s-Learn        3.66±0.46
TRBM         -              4.90±0.47
RTRBM*       -              4.00±0.35

Table 3: Test prediction error on CMU Motion Capture, in which "w" and "r" mean walking and running, respectively. (*) Taken from [24].

Model        Trun. Points   Err. (w)     Err. (r)
TruG-TRBM    [0, 1]         8.2±0.18     6.1±0.22
TruG-TRBM    [0, +∞)        21.8±0.31    14.9±0.29
TruG-TRBM    [-1, 1]        7.3±0.21     5.9±0.22
TruG-TRBM    c-Learn        6.7±0.29     5.5±0.22
TruG-TRBM    s-Learn        6.8±0.24     5.4±0.14
TRBM         -              9.6±0.15     6.8±0.12
ss-SRTRBM*   -              8.1±0.06     5.9±0.05

Results of TruG-TGGM  Ten datasets from the UCI repository are used in this experiment. Following the procedures in [26], datasets are randomly partitioned into training and testing subsets for 10 trials, except the largest one (Year Prediction MSD), for which only one partition is conducted due to computational complexity. Table 4 summarizes the root mean square error (RMSE) averaged over the different trials. Throughout the experiment, 100 hidden units are used for the two datasets Protein Structure and Year Prediction MSD, while 50 units are used for the remaining. RMSprop is used to optimize the parameters, with the RMSprop delay set to 0.9. The learning rate is chosen from the set {10^-3, 2 × 10^-4, 10^-4}, while the mini-batch size is set to 100 for the two largest datasets and 50 for the others. The number of VB cycles used in the inference is set to 10 for all datasets.

Table 4: Averaged test RMSEs for the multilayer perceptron (MLP) and TruG-TGGMs under different truncation points. (*) Results reported in [26], where BH, CS, EE, K8, NP, CPP, PS, WQR, YH, YPM are the abbreviations of Boston Housing, Concrete Strength, Energy Efficiency, Kin8nm, Naval Propulsion, Cycle Power Plant, Protein Structure, Wine Quality Red, Yacht Hydrodynamic, Year Prediction MSD, respectively.

Dataset   MLP (ReLU)*    [0, 1]         [0, +∞)        [-1, 1]        c-Learn        s-Learn
BH        3.228±0.195    3.564±0.655    3.214±0.555    4.003±0.520    3.401±0.375    3.622±0.538
CS        5.977±0.093    5.210±0.514    5.106±0.573    4.977±0.482    4.910±0.467    4.743±0.571
EE        1.098±0.074    1.168±0.130    1.252±0.123    1.069±0.166    0.881±0.079    0.913±0.120
K8        0.091±0.002    0.094±0.003    0.086±0.003    0.091±0.003    0.073±0.002    0.075±0.002
NP        0.001±0.000    0.002±0.000    0.002±0.000    0.002±0.000    0.001±0.000    0.001±0.000
CPP       4.182±0.040    4.023±0.128    4.067±0.129    3.978±0.132    3.952±0.134    3.951±0.130
PS        4.539±0.029    4.231±0.083    4.387±0.072    4.262±0.079    4.209±0.073    4.206±0.071
WQR       0.645±0.010    0.662±0.052    0.644±0.048    0.659±0.052    0.645±0.050    0.643±0.048
YH        1.182±0.165    0.871±0.367    0.821±0.276    0.846±0.310    0.803±0.292    0.793±0.289
YPM       8.932±N/A      8.961±N/A      8.985±N/A      8.859±N/A      8.893±N/A      8.965±N/A

The RMSEs of TGGMs with fixed and learned truncation points are reported in Table 4, along with the RMSEs of the (deterministic) multilayer perceptron (MLP) using the ReLU nonlinearity for comparison. Similar to what we have observed in generative models, the supervised models also benefit significantly from nonlinearity learning. The TruG-TGGMs with learned truncation points perform the best for most datasets, with the separate learning performing slightly better than the common learning overall. Due to limited space, the learned nonlinearities and their corresponding truncation points are provided in the Supplementary Material.
7 Conclusions
We have presented a probabilistic framework, termed TruG, to unify ReLU, sigmoid and tanh, the
most commonly used nonlinearities in neural networks. The TruG is a family of nonlinearities
constructed with doubly truncated Gaussian distributions. The ReLU, sigmoid and tanh are three
important members of the TruG family, and other members can be obtained easily by adjusting the
lower and upper truncation points. A big advantage offered by the TruG is that the nonlinearity is
learnable from data, alongside the model weights. Due to its stochastic nature, the TruG can be
readily integrated into many stochastic neural networks for which hidden units are random variables.
Extensive experiments have demonstrated significant performance gains that the TruG framework
can bring about when it is integrated with the RBM, temporal RBM, or TGGM.
Acknowledgements
The research reported here was supported by the DOE, NGA, NSF, ONR and by Accenture.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[2] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.
[3] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807-814, 2010.
[4] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[5] Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, and Lawrence Carin. Unsupervised learning with truncated gaussian graphical models. In The Thirty-First National Conference on Artificial Intelligence (AAAI), 2016.
[6] Forest Agostinelli, Matthew D. Hoffman, Peter J. Sadowski, and Pierre Baldi. Learning activation functions to improve deep neural networks. CoRR, 2014.
[7] Carson Eisenach, Han Liu, and Zhaoran Wang. Nonparametrically learning activation functions in deep neural nets. In Under review as a conference paper at ICLR, 2017.
[8] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In International Conference on Machine Learning (ICML), 2013.
[9] Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu, and Yoshua Bengio. Learned-norm pooling for deep feedforward and recurrent neural networks. In Machine Learning and Knowledge Discovery in Databases, pages 530-546, 2014.
[10] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive Science, 9(1):147-169, 1985.
[11] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[12] Radford M Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113, 1992.
[13] Brendan J Frey. Continuous sigmoidal belief networks trained using slice sampling. In Advances in Neural Information Processing Systems, pages 452-458, 1997.
[14] Qinliang Su, Xuejun Liao, Changyou Chen, and Lawrence Carin. Nonlinear statistical learning with truncated gaussian graphical models. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), 2016.
[15] Ilya Sutskever, Geoffrey E Hinton, and Graham W. Taylor. The recurrent temporal restricted boltzmann machine. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1601-1608. Curran Associates, Inc., 2009.
[16] Norman L Johnson, Samuel Kotz, and Narayanaswamy Balakrishnan. Continuous Univariate Distributions, vol. 1-2, 1994.
[17] Nicolas Chopin. Fast simulation of truncated gaussian distributions. Statistics and Computing, 21(2):275-288, 2011.
[18] Christian P Robert. Simulation of truncated normal variables. Statistics and Computing, 5(2):121-125, 1995.
[19] Ilya Sutskever and Geoffrey E Hinton. Learning multilevel distributed representations for high-dimensional sequences. In AISTATS, volume 2, pages 548-555, 2007.
[20] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872-879. ACM, 2008.
[21] David E Carlson, Edo Collins, Ya-Ping Hsieh, Lawrence Carin, and Volkan Cevher. Preconditioned spectral descent for deep learning. In Advances in Neural Information Processing Systems, pages 2971-2979, 2015.
[22] Benjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted boltzmann machine learning. In International Conference on Artificial Intelligence and Statistics, pages 509-516, 2010.
[23] Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.
[24] Roni Mittelman, Benjamin Kuipers, Silvio Savarese, and Honglak Lee. Structured recurrent temporal restricted boltzmann machines. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1647-1655, 2014.
[25] Zhe Gan, Chunyuan Li, Ricardo Henao, David E Carlson, and Lawrence Carin. Deep temporal sigmoid belief networks for sequence modeling. In Advances in Neural Information Processing Systems, pages 2467-2475, 2015.
[26] José Miguel Hernández-Lobato and Ryan P Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[27] Siamak Ravanbakhsh, Barnabás Póczos, Jeff Schneider, Dale Schuurmans, and Russell Greiner. Stochastic neural networks with monotonic activation functions. AISTATS, 1050:14, 2016.
[28] Max Welling, Michal Rosen-Zvi, and Geoffrey E Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, pages 1481-1488, 2004.
[29] Qinliang Su and Yik-Chung Wu. On convergence conditions of gaussian belief propagation. IEEE Transactions on Signal Processing, 63(5):1144-1155, 2015.
[30] Qinliang Su and Yik-Chung Wu. Convergence analysis of the variance in gaussian belief propagation. IEEE Transactions on Signal Processing, 62(19):5119-5131, 2014.
[31] Brendan J Frey and Geoffrey E Hinton. Variational learning in nonlinear gaussian belief networks. Neural Computation, 11(1):193-213, 1999.
[32] Qinliang Su and Yik-Chung Wu. Distributed estimation of variance in gaussian graphical model via belief propagation: Accuracy analysis and improvement. IEEE Transactions on Signal Processing, 63(23):6258-6271, 2015.
[33] Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems 27, pages 963-971. Curran Associates, Inc., 2014.
[34] Soumya Ghosh, Francesco Maria Delle Fave, and Jonathan Yedidia. Assumed density filtering methods for learning bayesian neural networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1589-1595, 2016.
Distral: Robust Multitask Reinforcement Learning
Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan,
James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu
DeepMind
London, UK
Abstract
Most deep reinforcement learning algorithms are data inefficient in complex and
rich environments, limiting their applicability to many scenarios. One direction
for improving data efficiency is multitask learning with shared neural network
parameters, where efficiency may be improved through transfer across related tasks.
In practice, however, this is not usually observed, because gradients from different
tasks can interfere negatively, making learning unstable and sometimes even less
data efficient. Another issue is the different reward schemes between tasks, which
can easily lead to one task dominating the learning of a shared model. We propose
a new approach for joint training of multiple tasks, which we refer to as Distral
(distill & transfer learning). Instead of sharing parameters between the different
workers, we propose to share a ?distilled? policy that captures common behaviour
across tasks. Each worker is trained to solve its own task while constrained to stay
close to the shared policy, while the shared policy is trained by distillation to be the
centroid of all task policies. Both aspects of the learning process are derived by
optimizing a joint objective function. We show that our approach supports efficient
transfer on complex 3D environments, outperforming several related methods.
Moreover, the proposed learning process is more robust to hyperparameter settings
and more stable?attributes that are critical in deep reinforcement learning.
1 Introduction
Deep Reinforcement Learning is an emerging subfield of Reinforcement Learning (RL) that relies
on deep neural networks as function approximators that can scale RL algorithms to complex and
rich environments. One key work in this direction was the introduction of DQN [21] which is able
to play many games in the ATARI suite of games [1] at above human performance. However the
agent requires a fairly large amount of time and data to learn effective policies and the learning
process itself can be quite unstable, even with innovations introduced to improve wall clock time, data
efficiency, and robustness by changing the learning algorithm [27, 33] or by improving the optimizer
[20, 29]. A different approach was introduced by [12, 19, 14], whereby data efficiency is improved
by training additional auxiliary tasks jointly with the RL task.
With the success of deep RL has come interest in increasingly complex tasks and a shift in focus
towards scenarios in which a single agent must solve multiple related problems, either simultaneously
or sequentially. Due to the large computational cost, making progress in this direction requires
robust algorithms which do not rely on task-specific algorithmic design or extensive hyperparameter
tuning. Intuitively, solutions to related tasks should facilitate learning since the tasks share common
structure, and thus one would expect that individual tasks should require less data or achieve a
higher asymptotic performance. Indeed this intuition has long been pursued in the multitask and
transfer-learning literature [2, 31, 34, 5].
Somewhat counter-intuitively, however, the above is often not the result encountered in practice,
particularly in the RL domain [26, 23]. Instead, the multitask and transfer learning scenarios are
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
frequently found to pose additional challenges to existing methods. Instead of making learning
easier it is often observed that training on multiple tasks can negatively affect performances on the
individual tasks, and additional techniques have to be developed to counteract this [26, 23]. It is likely
that gradients from other tasks behave as noise, interfering with learning, or, in another extreme, one
of the tasks might dominate the others.
In this paper we develop an approach for multitask and transfer RL that allows effective sharing
of behavioral structure across tasks, giving rise to several algorithmic instantiations. In addition to
some instructive illustrations on a grid world domain, we provide a detailed analysis of the resulting
algorithms via comparisons to A3C [20] baselines on a variety of tasks in a first-person, visually-rich,
3D environment. We find that the Distral algorithms learn faster and achieve better asymptotic
performance, are significantly more robust to hyperparameter settings, and learn more stably than
multitask A3C baselines.
2 Distral: Distill and Transfer Learning
We propose a framework for simultaneous reinforcement learning of multiple tasks which we call Distral. Figure 1 provides a high level illustration involving four tasks. The method is founded on the notion of a shared policy (shown in the centre) which distills (in the sense of Bucila and Hinton et al. [4, 11]) common behaviours or representations from task-specific policies [26, 23]. Crucially, the distilled policy is then used to guide task-specific policies via regularization using a Kullback-Leibler (KL) divergence. The effect is akin to a shaping reward which can, for instance, overcome random walk exploration bottlenecks. In this way, knowledge gained in one task is distilled into the shared policy, then transferred to other tasks.

Figure 1: Illustration of the Distral framework.
2.1 Mathematical framework
In this section we describe the mathematical framework underlying Distral. A multitask RL setting is considered where there are n tasks, where for simplicity we assume an infinite horizon with discount factor γ.¹ We will assume that the action A and state S spaces are the same across tasks; we use a ∈ A to denote actions, s ∈ S to denote states. The transition dynamics p_i(s′|s, a) and reward functions R_i(a, s) are different for each task i. Let π_i be task-specific stochastic policies. The dynamics and policies give rise to joint distributions over state and action trajectories starting from some initial state, which we will also denote by π_i by an abuse of notation.

Our mechanism for linking the policy learning across tasks is via optimising an objective which consists of expected returns and policy regularizations. We designate π_0 to be the distilled policy which we believe will capture agent behaviour that is common across the tasks. We regularize each task policy π_i towards the distilled policy using γ-discounted KL divergences E_{π_i}[ Σ_{t≥0} γ^t log(π_i(a_t|s_t)/π_0(a_t|s_t)) ].
In addition, we also use a γ-discounted entropy regularization to further encourage exploration. The resulting objective to be maximized is:

J(π_0, {π_i}_{i=1}^n) = Σ_i E_{π_i}[ Σ_{t≥0} γ^t R_i(a_t, s_t) − c_KL γ^t log(π_i(a_t|s_t)/π_0(a_t|s_t)) − c_Ent γ^t log π_i(a_t|s_t) ]
                      = Σ_i E_{π_i}[ Σ_{t≥0} γ^t R_i(a_t, s_t) + (α γ^t / β) log π_0(a_t|s_t) − (γ^t / β) log π_i(a_t|s_t) ]    (1)

where c_KL, c_Ent ≥ 0 are scalar factors which determine the strengths of the KL and entropy regularizations, and α = c_KL/(c_KL + c_Ent) and β = 1/(c_KL + c_Ent). The log π_0(a_t|s_t) term can be thought of as a reward shaping term which encourages actions which have high probability under the distilled policy, while the entropy term −log π_i(a_t|s_t) encourages exploration. In the above we used the same regularization costs c_KL, c_Ent for all tasks. It is easy to generalize to using task-specific costs; this can be important if tasks differ substantially in their reward scales and amounts of exploration needed, although it does introduce additional hyperparameters that are expensive to optimize.

¹The method can be easily generalized to other scenarios like undiscounted finite horizon.
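To make (1) concrete, the bracketed quantity can be estimated per trajectory from logged rewards and action log-probabilities. The following minimal numpy sketch is illustrative only; the function and variable names are ours, not from our implementation:

```python
import numpy as np

def distral_objective(rewards, logp_task, logp_distilled, gamma, c_kl, c_ent):
    """Monte-Carlo estimate of the Eq. (1) objective for one trajectory of one
    task: discounted return, minus a discounted KL penalty towards the
    distilled policy, minus a discounted (negative-)entropy penalty."""
    discount = gamma ** np.arange(len(rewards))
    ret = np.sum(discount * rewards)
    kl_term = c_kl * np.sum(discount * (logp_task - logp_distilled))
    ent_term = c_ent * np.sum(discount * logp_task)
    return ret - kl_term - ent_term

rewards = np.array([1.0, 0.0, 1.0])
logp_task = np.log(np.array([0.5, 0.5, 0.5]))          # pi_i(a_t|s_t) along the trajectory
logp_distilled = np.log(np.array([0.25, 0.25, 0.25]))  # pi_0(a_t|s_t)
# With c_KL = c_Ent = 0 this is just the discounted return 1 + 0.81 = 1.81.
print(distral_objective(rewards, logp_task, logp_distilled, 0.9, 0.0, 0.0))
```

Setting c_kl > 0 lowers the objective whenever the task policy assigns its actions higher probability than the distilled policy does, which is exactly the pull towards π_0.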
2.2 Soft Q Learning and Distillation
A range of optimization techniques in the literature can be applied to maximize the above objective, which we will expand on below. To build up intuition for how the method operates, we will start in the simple case of a tabular representation and an alternating maximization procedure which optimizes over π_i given π_0 and over π_0 given π_i. With π_0 fixed, (1) decomposes into separate maximization problems for each task, and is an entropy regularized expected return with redefined (regularized) reward R′_i(a, s) := R_i(a, s) + (α/β) log π_0(a|s). It can be optimized using soft Q learning [10] aka G learning [7], which are based on deriving the following "softened" Bellman updates for the state and action values (see also [25, 28, 22]):
V_i(s_t) = (1/β) log Σ_{a_t} π_0^α(a_t|s_t) exp[β Q_i(a_t, s_t)]    (2)

Q_i(a_t, s_t) = R_i(a_t, s_t) + γ Σ_{s_{t+1}} p_i(s_{t+1}|s_t, a_t) V_i(s_{t+1})    (3)

The Bellman updates are softened in the sense that the usual max operator over actions for the state values V_i is replaced by a soft-max at inverse temperature β, which hardens into a max operator as β → ∞. The optimal policy π_i is then a Boltzmann policy at inverse temperature β:

π_i(a_t|s_t) = π_0^α(a_t|s_t) e^{β Q_i(a_t,s_t) − β V_i(s_t)} = π_0^α(a_t|s_t) e^{β A_i(a_t,s_t)}    (4)
where A_i(a, s) = Q_i(a, s) − V_i(s) is a softened advantage function. Note that the softened state values V_i(s) act as the log normalizers in the above. The distilled policy π_0 can be interpreted as a policy prior, a perspective well-known in the literature on RL as probabilistic inference [32, 13, 25, 7]. However, unlike in past works, it is raised to a power of α ≤ 1. This softens the effect of the prior π_0 on π_i, and is the result of the additional entropy regularization beyond the KL divergence.
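For a single state, the softened value (2) is a log-sum-exp of the Q-values weighted by π_0^α, and (4) is the corresponding Boltzmann distribution. A small tabular sketch (our own illustration, not the paper's code):

```python
import numpy as np

def soft_value(q, pi0, alpha, beta):
    # Eq. (2): V(s) = (1/beta) * log sum_a pi0(a|s)^alpha * exp(beta * Q(a, s))
    return np.log(np.sum(pi0 ** alpha * np.exp(beta * q))) / beta

def boltzmann_policy(q, pi0, alpha, beta):
    # Eq. (4): pi(a|s) = pi0(a|s)^alpha * exp(beta * (Q(a, s) - V(s)))
    v = soft_value(q, pi0, alpha, beta)
    return pi0 ** alpha * np.exp(beta * (q - v))

q = np.array([1.0, 2.0, 0.5])      # Q-values for three actions in one state
pi0 = np.array([0.2, 0.5, 0.3])    # distilled policy at that state
pi = boltzmann_policy(q, pi0, alpha=1.0, beta=5.0)
print(np.isclose(pi.sum(), 1.0))   # V(s) is exactly the log normalizer
print(pi.argmax())                 # high beta concentrates on the best action
```

As β grows the policy hardens towards the greedy argmax; as β → 0 (with α = 1) it falls back to π_0.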
Also unlike past works, we will learn π_0 instead of hand-picking it (typically as a uniform distribution over actions). In particular, notice that the only terms in (1) depending on π_0 are:

Σ_i E_{π_i}[ Σ_{t≥0} (α γ^t / β) log π_0(a_t|s_t) ]    (5)
which is simply a log likelihood for fitting a model π_0 to a mixture of γ-discounted state-action distributions, one for each task i under policy π_i. A maximum likelihood (ML) estimator can be derived from state-action visitation frequencies under roll-outs in each task, with the optimal ML solution given by the mixture of state-conditional action distributions. Alternatively, in the non-tabular case, stochastic gradient ascent can be employed, which leads precisely to an update which distills the task policies π_i into π_0 [4, 11, 26, 23]. Note however that in our case the distillation step is derived naturally from a KL regularized objective on the policies. Another difference from [26, 23] and from prior works on the use of distillation in deep learning [4, 11] is that the distilled policy is "fed back in" to improve the task policies when they are next optimized, and serves as a conduit in which common and transferable knowledge is shared across the task policies.
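In the tabular case the ML fit of π_0 has a closed form: for each state, the mixture of the task policies' action distributions weighted by each task's (discounted) visitation of that state. An illustrative sketch, with names of our own choosing:

```python
import numpy as np

def distill_ml(task_policies, state_weights):
    """Tabular ML solution for pi_0 under objective (5).
    task_policies: (n_tasks, n_states, n_actions), each row a distribution.
    state_weights: (n_tasks, n_states) discounted state-visitation weights."""
    weighted = state_weights[:, :, None] * task_policies
    pi0 = weighted.sum(axis=0)                    # mix over tasks
    return pi0 / pi0.sum(axis=1, keepdims=True)   # renormalize per state

# Two tasks, one state, two actions; equal visitation weights.
pis = np.array([[[0.9, 0.1]],
                [[0.1, 0.9]]])
w = np.ones((2, 1))
print(distill_ml(pis, w))   # [[0.5 0.5]]: the centroid of the two task policies
```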
It is worthwhile here to take pause and ponder the effect of the extra entropy regularization. First suppose that there is no extra entropy regularisation, i.e. α = 1, and consider the simple scenario of only n = 1 task. Then (5) is maximized when the distilled policy π_0 and the task policy π_1 are equal, and the KL regularization term is 0. Thus the objective reduces to an unregularized expected return, and so the task policy π_1 converges to a greedy policy which locally maximizes expected returns. Another way to view this line of reasoning is that the alternating maximization scheme is equivalent to trust-region methods like natural gradient or TRPO [24, 29] which use a KL ball centred at the previous policy, and which are understood to converge to greedy policies.

If α < 1, there is an additional entropy term in (1). So even with π_0 = π_1 and KL(π_1 ‖ π_0) = 0, the objective (1) will no longer be maximized by greedy policies. Instead (1) reduces to an entropy regularized expected return with entropy regularization factor β′ = β/(1 − α) = 1/c_Ent, so that the optimal policy is of the Boltzmann form with inverse temperature β′ [25, 7, 28, 22]. In conclusion, by including the extra entropy term, we can guarantee that the task policy will not turn greedy, and we can control the amount of exploration by adjusting c_Ent appropriately.
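The change of variables between the regularization costs and (α, β), and the residual entropy factor β′ = β/(1 − α) = 1/c_Ent discussed above, can be checked in a couple of lines (illustrative only):

```python
def to_alpha_beta(c_kl, c_ent):
    # alpha = c_KL / (c_KL + c_Ent), beta = 1 / (c_KL + c_Ent), as defined after Eq. (1)
    total = c_kl + c_ent
    return c_kl / total, 1.0 / total

c_kl, c_ent = 1.0, 1.0
alpha, beta = to_alpha_beta(c_kl, c_ent)
print(alpha, beta)                           # 0.5 0.5
print(beta / (1.0 - alpha) == 1.0 / c_ent)   # residual entropy factor beta' = 1/c_Ent
```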
This additional control over the amount of exploration is essential when there are more than one task.
To see this, imagine a scenario where one of the tasks is easier and is solved first, while other tasks
are harder with much sparser rewards. Without the entropy term, and before rewards in other tasks
are encountered, both the distilled policy and all the task policies can converge to the one that solves
the easy task. Further, because this policy is greedy, it can insufficiently explore the other tasks to
even encounter rewards, leading to sub-optimal behaviour. For single-task RL, the use of entropy
regularization was recently popularized by Mnih et al. [20] to counter premature convergence to
greedy policies, which can be particularly severe when doing policy gradient learning. This carries
over to our multitask scenario as well, and is the reason for the additional entropy regularization.
2.3 Policy Gradient and a Better Parameterization
The above method alternates between maximization of the distilled policy π_0 and the task policies π_i, and is reminiscent of the EM algorithm [6] for learning latent variable models, with π_0 playing the role of parameters, while π_i plays the role of the posterior distributions for the latent variables. Going beyond the tabular case, when both π_0 and π_i are parameterized by, say, deep networks, such an alternating maximization procedure can be slower than simply optimizing (1) with respect to task and distilled policies jointly by stochastic gradient ascent. In this case the gradient update for π_i is simply given by policy gradient with an entropic regularization [20, 28], and can be carried out within a framework like advantage actor-critic [20].
A simple parameterization of policies would be to use a separate network for each task policy π_i, and another one for the distilled policy π_0. An alternative parameterization, which we argue can result in faster transfer, can be obtained by considering the form of the optimal Boltzmann policy (4). Specifically, consider parameterising the distilled policy using a network with parameters θ_0,

π̂_0(a_t|s_t) = exp(h_{θ_0}(a_t|s_t)) / Σ_{a′} exp(h_{θ_0}(a′|s_t))    (6)

and estimating the soft advantages² using another network with parameters θ_i:

Â_i(a_t|s_t) = f_{θ_i}(a_t|s_t) − (1/β) log Σ_a π̂_0^α(a|s_t) exp(β f_{θ_i}(a|s_t))    (7)

We used hat notation to denote parameterized approximators of the corresponding quantities. The policy for task i then becomes parameterized as,

π̂_i(a_t|s_t) = π̂_0^α(a_t|s_t) exp(β Â_i(a_t|s_t)) = exp(α h_{θ_0}(a_t|s_t) + β f_{θ_i}(a_t|s_t)) / Σ_{a′} exp(α h_{θ_0}(a′|s_t) + β f_{θ_i}(a′|s_t))    (8)

This can be seen as a two-column architecture for the policy, with one column being the distilled policy, and the other being the adjustment required to specialize to task i.
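Concretely, (8) is a softmax over the combined logits α h_{θ_0} + β f_{θ_i}; a minimal sketch (names are ours):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def task_policy(h0_logits, fi_logits, alpha, beta):
    # Eq. (8): the task policy combines the distilled column's logits h0
    # with a task-specific adjustment f_i.
    return softmax(alpha * h0_logits + beta * fi_logits)

h0 = np.array([2.0, 0.0, -1.0])   # distilled column
f1 = np.zeros(3)                  # task column before any specialization
# With a zero adjustment and alpha = beta = 1, the task policy coincides
# with the distilled policy's softmax.
print(np.allclose(task_policy(h0, f1, 1.0, 1.0), softmax(h0)))  # True
```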
Given the parameterization above, we can now derive the policy gradients. The gradient wrt the task specific parameters θ_i is given by the standard policy gradient theorem [30],

∇_{θ_i} J = E_{π̂_i}[ ( Σ_{t≥1} ∇_{θ_i} log π̂_i(a_t|s_t) ) ( Σ_{u≥1} γ^u R_i^reg(a_u, s_u) ) ]
          = E_{π̂_i}[ Σ_{t≥1} ∇_{θ_i} log π̂_i(a_t|s_t) ( Σ_{u≥t} γ^u R_i^reg(a_u, s_u) ) ]    (9)

where R_i^reg(a, s) = R_i(a, s) + (α/β) log π̂_0(a|s) − (1/β) log π̂_i(a|s) is the regularized reward. Note that the partial derivative of the entropy in the integrand has expectation E_{π̂_i}[∇_{θ_i} log π̂_i(a_t|s_t)] = 0 because of the log-derivative trick. If a value baseline is estimated, it can be subtracted from the regularized

²In practice, we do not actually use these as advantage estimates. Instead we use (8) to parameterize a policy which is optimized by policy gradients.
Figure 2: Depiction of the different algorithms and baselines. On the left are two of the Distral algorithms and on the right are the three A3C baselines. Entropy is drawn in brackets as it is optional and only used for KL+ent 2col and KL+ent 1col.
returns as a control variate. The gradient wrt θ_0 is more interesting:

∇_{θ_0} J = Σ_i E_{π̂_i}[ Σ_{t≥1} ∇_{θ_0} log π̂_i(a_t|s_t) ( Σ_{u≥t} γ^u R_i^reg(a_u, s_u) ) ]
          + Σ_i E_{π̂_i}[ Σ_{t≥1} (α γ^t / β) Σ_{a′_t} ( π̂_i(a′_t|s_t) − π̂_0(a′_t|s_t) ) ∇_{θ_0} h_{θ_0}(a′_t|s_t) ]    (10)

Note that the first term is the same as for the policy gradient of θ_i. The second term tries to match the probabilities under the task policy π̂_i and under the distilled policy π̂_0. The second term would not be present if we simply parameterized π_i using the same architecture π̂_i, but did not use a KL regularization for the policy. The presence of the KL regularization gets the distilled policy to learn to be the centroid of all task policies, in the sense that the second term would be zero if π̂_0(a′_t|s_t) = (1/n) Σ_i π̂_i(a′_t|s_t), and helps to transfer information quickly across tasks and to new tasks.
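The centroid property of the second term in (10) is easy to verify numerically: when π̂_0 equals the average of the task policies at a state, the probability-matching factor sums to zero and contributes no gradient. A toy check (ours):

```python
import numpy as np

# pi_i(a|s) for two tasks at one state, three actions each
task_probs = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.6, 0.3]])
pi0 = task_probs.mean(axis=0)              # distilled policy at the centroid
mismatch = (task_probs - pi0).sum(axis=0)  # factor multiplying grad h in Eq. (10)
print(np.allclose(mismatch, 0.0))          # True: no distillation gradient left
```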
2.4 Other Related Works
The centroid and star-shaped structure of Distral is reminiscent of ADMM [3], elastic-averaging
SGD [35] and hierarchical Bayes [9]. Though a crucial difference is that while ADMM, EASGD
and hierarchical Bayes operate in the space of parameters, in Distral the distilled policy learns to be
the centroid in the space of policies. We argue that this is semantically more meaningful, and may
contribute to the observed robustness of Distral by stabilizing learning. In our experiments we find
indeed that absence of the KL regularization significantly affects the stability of the algorithm.
Another related line of work is guided policy search [17, 18, 15, 16]. These focus on single tasks,
and uses trajectory optimization (corresponding to task policies here) to guide the learning of a policy
(corresponding to the distilled policy ?0 here). This contrasts with Distral, which is a multitask
setting, where a learnt ?0 is used to facilitate transfer by sharing common task-agnostic behaviours,
and the main outcome of the approach are instead the task policies.
Our approach is also reminiscent of recent work on option learning [8], but with a few important
differences. We focus on using deep neural networks as flexible function approximators, and applied
our method to rich 3D visual environments, while Fox et al. [8] considered only the tabular case.
We argue for the importance of an additional entropy regularization besides the KL regularization.
This leads to an interesting twist in the mathematical framework allowing us to separately control the
amounts of transfer and of exploration. On the other hand Fox et al. [8] focused on the interesting
problem of learning multiple options (distilled policies here). Their approach treats the assignment of
tasks to options as a clustering problem, which is not easily extended beyond the tabular case.
3 Algorithms
The framework we just described allows for a number of possible algorithmic instantiations, arising as combinations of objectives, algorithms and architectures, which we describe below and summarize in Table 1 and Figure 2. KL divergence vs entropy regularization: With α = 0, we get a purely

              h_θ0(a|s)        f_θi(a|s)        α h_θ0(a|s) + f_θi(a|s)
α = 0         A3C multitask    A3C              A3C 2col
α = 1         -                KL 1col          KL 2col
0 < α < 1     -                KL+ent 1col      KL+ent 2col

Table 1: The seven different algorithms evaluated in our experiments. Each column describes a different architecture, with the column headings indicating the logits for the task policies. The rows define the relative amount of KL vs entropy regularization loss, with the first row comprising the A3C baselines (no KL loss).
entropy-regularized objective which does not couple and transfer across tasks [20, 28]. With α = 1, we get a purely KL regularized objective, which does couple and transfer across tasks, but might prematurely stop exploration if the distilled and task policies become similar and greedy. With 0 < α < 1 we get both terms. Alternating vs joint optimization: We have the option of jointly
optimizing both the distilled policy and the task policies, or optimizing one while keeping the other
fixed. Alternating optimization leads to algorithms that resemble policy distillation/actor-mimic
[23, 26], but are iterative in nature with the distilled policy feeding back into task policy optimization.
Also, soft Q learning can be applied to each task, instead of policy gradients. While alternating optimization can be slower, evidence from policy distillation/actor-mimic indicates it might learn more stably, particularly for tasks which differ significantly. Separate vs two-column parameterization:
Finally, the task policy can be parameterized to use the distilled policy (8) or not. If using the distilled
policy, behaviour distilled into the distilled policy is ?immediately available? to the task policies so
transfer can be faster. However if the process of transfer occurs too quickly, it might interfere with
effective exploration of individual tasks.
From this spectrum of possibilities we consider four concrete instances which differ in the underlying
network architecture and distillation loss, identified in Table 1. In addition, we compare against three
A3C baselines. In initial experiments we explored two variants of A3C: the original method [20]
and the variant of Schulman et al. [28] which uses entropy regularized returns. We did not find
significant differences for the two variants in our setting, and chose to report only the original A3C
results for clarity in Section 4. Further algorithmic details are provided in the Appendix.
4 Experiments
We demonstrate the various algorithms derived from our framework, firstly using alternating optimization with soft Q learning and policy distillation on a set of simple grid world tasks. Then all
seven algorithms will be evaluated on three sets of challenging RL tasks in partially observable 3D
environments.
4.1 Two room grid world
To give better intuition for the role of the distilled behaviour policy, we considered a set of tasks
in a grid world domain with two rooms connected by a corridor (see Figure 3) [8]. Each task is
distinguished by a different randomly chosen goal location and each MDP state consists of the map
location, the previous action and the previous reward. A Distral agent is trained using only the KL
regularization and an optimization algorithm which alternates between soft Q learning and policy
distillation. Each soft Q learning iteration learns using a rollout of length 10.
To determine the benefit of the distilled policy, we compared the Distral agent to one which soft Q
learns a separate policy for each task. The learning curves are shown in Figure 3 (left). We see that
the Distral agent is able to learn significantly faster than single-task agents. Figure 3 (right) visualizes
the distilled policy (probability of next action given position and previous action), demonstrating
that the agent has learnt a policy which guides the agent to move consistently in the same direction
through the corridor in order to reach the other room. This allows the agent to reach the other room
faster and helps exploration, if the agent is shown new test tasks. In Fox et al. [8] two separate options
are learnt, while here we learn a single distilled policy which conditions on more past information
(previous action and reward).
Figure 3: Left: Learning curves on two room grid world. The Distral agent (blue) learns faster,
converges towards better policies, and demonstrates more stable learning overall. Center: Example
of tasks. Green is goal position which is uniformly sampled for each task. Starting position is
uniformly sampled at the beginning of each episode. Right: depiction of learned distilled policy π_0
only in the corridor, conditioned on previous action being left/right and no previous reward. Sizes of
arrows depict probabilities of actions. Note that up/down actions have negligible probabilities. The
model learns to preserve direction of travel in the corridor.
4.2 Complex Tasks
To assess Distral under more challenging conditions, we use a complex first-person partially observed
3D environment with a variety of visually-rich RL tasks. All agents were implemented with a distributed Python/TensorFlow code base, using 32 workers for each task and learnt using asynchronous
RMSProp. The network columns contain convolutional layers and an LSTM and are uniform across
experiments and algorithms. We tried three values for the entropy cost and three learning rates. Four runs for each hyperparameter setting were used. All other hyperparameters were fixed to the single-task A3C defaults and, for the KL+ent 1col and KL+ent 2col algorithms, α was fixed at 0.5.
Mazes In the first experiment, each of n = 8 tasks is a different maze containing randomly placed
rewards and a goal object. Figure 4.A1 shows the learning curves for all seven algorithms. Each
curve is produced by averaging over all 4 runs and 8 tasks, and selecting the best hyperparameter settings
(as measured by the area under the learning curves). The Distral algorithms learn faster and achieve
better final performance than all three A3C baselines. The two-column algorithms learn faster than
the corresponding single column ones. The Distral algorithms without entropy learn faster but achieve
lower final scores than those with entropy, which we believe is due to insufficient exploration towards
the end of learning.
We found that both multitask A3C and two-column A3C can learn well on some runs, but are generally
unstable?some runs did not learn well, while others may learn initially then suffer degradation
later. We believe this is due to negative interference across tasks, which does not happen for Distral
algorithms. The stability of Distral algorithms also increases their robustness to hyperparameter
selection. Figure 4.A2 shows the final achieved average returns for all 36 runs for each algorithm,
sorted in decreasing order. We see that Distral algorithms have a significantly higher proportion of
runs achieving good returns, with KL+ent_2col being the most robust.
Distral algorithms, along with multitask A3C, use a distilled or common policy which can be applied
on all tasks. Panels B1 and B2 in Figure 4 summarize the performances of the distilled policies.
Algorithms that use two columns (KL_2col and KL+ent_2col) obtain the best performance, because
policy gradients are also directly propagated through the distilled policy in those cases. Moreover,
panel B2 reveals that Distral algorithms exhibit greater stability as compared to traditional multitask
A3C. We also observe that KL algorithms have better-performing distilled policies than KL+ent ones.
We believe this is because the additional entropy regularisation allows task policies to diverge more
substantially from the distilled policy. This suggests that annealing the entropy term or increasing the
KL term throughout training could improve the distilled policy performance, if that is of interest.
Navigation We experimented with n = 4 navigation and memory tasks. In contrast to the previous
experiment, these tasks use random maps which are procedurally generated on every episode. The
first task features reward objects which are randomly placed in a maze, and the second task requires to
return these objects to the agent?s start position. The third task has a single goal object which must be
repeatedly found from different start positions, and on the fourth task doors are randomly opened and
Figure 4: Panels A1, C1, D1 show task specific policy performance (averaged across all the tasks)
for the maze, navigation and laser-tag tasks, respectively. The x-axes are total numbers of training
environment steps per task. Panel B1 shows the mean scores obtained with the distilled policies (A3C
has no distilled policy, so it is represented by the performance of an untrained network.). For each
algorithm, results for the best set of hyperparameters (based on the area under curve) are reported.
The bold line is the average over 4 runs, and the colored area the average standard deviation over the
tasks. Panels A2, B2, C2, D2 show the corresponding final performances for the 36 runs of each
algorithm ordered by best to worst (9 hyperparameter settings and 4 runs).
closed to force novel path-finding. Hence, these tasks are more involved than the previous navigation
tasks. The panels C1 and C2 of Figure 4 summarize the results. We observe again that Distral
algorithms yield better final results while having greater stability (Figure 4.C2). The top-performing
algorithms are, again, the 2 column Distral algorithms (KL_2col and KL+ent_2col).
Laser-tag In the final set of experiments, we use n = 8 laser-tag levels. These tasks require the agent
to learn to tag bots controlled by a built-in AI, and differ substantially: fixed versus procedurally
generated maps, fixed versus procedural bots, and complexity of agent behaviour (e.g. learning to
jump in some tasks). Corresponding to this greater diversity, we observe (see panels D1 and D2
of Figure 4) that the best baseline is the A3C algorithm that is trained independently on each task.
Among the Distral algorithms, the single column variants perform better, especially initially, as they
are able to learn task-specific features separately. We observe again the early plateauing phenomenon
for algorithms that do not possess an additional entropy term. While not significantly better than the
A3C baseline on these tasks, the Distral algorithms clearly outperform the multitask A3C.
Discussion Considering the 3 different sets of complex 3D experiments, we argue that the Distral
algorithms are promising solutions to the multitask deep RL problem. Distral can perform significantly
better than A3C baselines when tasks have sufficient commonalities for transfer (maze and navigation),
while still being competitive with A3C when there is less transfer possible. In terms of specific
algorithmic proposals, the additional entropy regularization is important in encouraging continued
exploration, while two column architectures generally allow faster transfer (but can affect performance
when there is little transfer due to task interference). The computational costs of Distral algorithms
are at most twice that of the corresponding A3C algorithms, as each agent needs to process two
network columns instead of one. However in practice the runtimes are just slightly more than for
A3C, because the cost of simulating environments is significant and the same whether single or
multitask.
5 Conclusion
We have proposed Distral, a general framework for distilling and transferring common behaviours
in multitask reinforcement learning. In experiments we showed that the resulting algorithms learn
quicker, produce better final performances, and are more stable and robust to hyperparameter settings.
We have found that Distral significantly outperforms the standard way of using shared neural network
parameters for multitask or transfer reinforcement learning.
Two ideas in Distral might be worth reemphasizing here. We observe that distillation arises naturally
as one half of an optimization procedure when using KL divergences to regularize the output of
task models towards a distilled model. The other half corresponds to using the distilled model as a
regularizer for training the task models. Another observation is that parameters in deep networks
do not typically by themselves have any semantic meaning, so instead of regularizing networks
in parameter space, it is worthwhile considering regularizing networks in a more semantically
meaningful space, e.g. of policies.
We would like to end with a discussion of the various difficulties faced by multitask RL methods.
The first is that of positive transfer: when there are commonalities across tasks, how does the method
achieve this transfer and lead to better learning speed and better performance on new tasks in the
same family? The core aim of Distral is this, where the commonalities are exhibited in terms of
shared common behaviours. The second is that of task interference, where the differences among
tasks adversely affect agent performance by interfering with exploration and the optimization of
network parameters. This is the core aim of the policy distillation and mimic works [26, 23]. As
in these works, Distral also learns a distilled policy. But this is further used to regularise the task
policies to facilitate transfer. This means that Distral algorithms can be affected by task interference.
It would be interesting to explore ways to allow Distral (or other methods) to automatically balance
between increasing task transfer and reducing task interference.
Other possible directions of future research include: combining Distral with techniques which use
auxiliary losses [12, 19, 14], exploring use of multiple distilled policies or latent variables in the
distilled policy to allow for more diversity of behaviours, exploring settings for continual learning
where tasks are encountered sequentially, and exploring ways to adaptively adjust the KL and entropy
costs to better control the amounts of transfer and exploration. Finally, theoretical analyses of Distral
and other KL regularization frameworks for deep RL would help better our understanding of these
recent methods.
References
[1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013.
[2] Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. In JMLR:
Workshop on Unsupervised and Transfer Learning, 2012.
[3] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and
statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1),
January 2011.
[4] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proc. of the Int'l
Conference on Knowledge Discovery and Data Mining (KDD), 2006.
[5] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, July 1997.
[6] Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the
EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1–38, 1977.
[7] R. Fox, A. Pakman, and N. Tishby. Taming the noise in reinforcement learning via soft updates. In
Uncertainty in Artificial Intelligence (UAI), 2016.
[8] Roy Fox, Michal Moshkovitz, and Naftali Tishby. Principled option learning in markov decision processes.
In European Workshop on Reinforcement Learning (EWRL), 2016.
[9] Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. Bayesian data analysis, volume 2.
Chapman & Hall/CRC Boca Raton, FL, USA, 2014.
[10] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep
energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
[11] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. NIPS
Deep Learning Workshop, 2014.
[12] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver,
and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. Int'l Conference on
Learning Representations (ICLR), 2016.
[13] Hilbert J Kappen, Vicenç Gómez, and Manfred Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159–182, 2012.
[14] Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning.
Association for the Advancement of Artificial Intelligence (AAAI), 2017.
[15] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under
unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
[16] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor
policies. Journal of Machine Learning Research, 17(39):1?40, 2016.
[17] Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In Advances in
Neural Information Processing Systems, pages 207?215, 2013.
[18] Sergey Levine and Vladlen Koltun. Learning complex neural network policies with trajectory optimization.
In International Conference on Machine Learning, pages 829?837, 2014.
[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha
Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning
to navigate in complex environments. Int?l Conference on Learning Representations (ICLR), 2016.
[20] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim
Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In
Int?l Conference on Machine Learning (ICML), 2016.
[21] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare,
Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie,
Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis
Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529?533, 02
2015.
[22] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value
and policy based reinforcement learning. arXiv:1702.08892, 2017.
[23] Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer
reinforcement learning. In Int?l Conference on Learning Representations (ICLR), 2016.
10
[24] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. Int?l Conference on
Learning Representations (ICLR), 2014.
[25] Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement
learning by approximate inference. In Robotics: Science and Systems (RSS), 2012.
[26] Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick,
Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In Int?l
Conference on Learning Representations (ICLR), 2016.
[27] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. CoRR,
abs/1511.05952, 2015.
[28] J. Schulman, P. Abbeel, and X. Chen. Equivalence between policy gradients and soft Q-Learning.
arXiv:1704.06440, 2017.
[29] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy
optimization. In Int?l Conference on Machine Learning (ICML), 2015.
[30] Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods
for reinforcement learning with function approximation. In Adv. in Neural Information Processing Systems
(NIPS), volume 99, pages 1057?1063, 1999.
[31] Matthew E. Taylor and Peter Stone. An introduction to inter-task transfer for reinforcement learning. AI
Magazine, 32(1):15?34, 2011.
[32] Marc Toussaint, Stefan Harmeling, and Amos Storkey. Probabilistic inference for solving (PO)MDPs.
Technical Report EDI-INF-RR-0934, University of Edinburgh, School of Informatics, 2006.
[33] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning.
Association for the Advancement of Artificial Intelligence (AAAI), 2016.
[34] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural
networks? In Adv. in Neural Information Processing Systems (NIPS), 2014.
[35] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Adv. in
Neural Information Processing Systems (NIPS), 2015.
11
| 7036 |@word multitask:22 compression:1 proportion:1 pieter:4 d2:2 r:1 crucially:1 tried:1 sgd:2 harder:1 carry:1 kappen:1 initial:2 series:1 score:2 selecting:1 past:3 existing:1 outperforms:1 hasselt:1 michal:1 chu:1 must:2 reminiscent:3 john:4 guez:1 happen:1 kdd:1 update:5 depict:1 v:4 pursued:1 greedy:7 half:2 intelligence:4 parameterization:5 advancement:2 amir:1 beginning:1 core:2 manfred:1 colored:1 provides:1 pascanu:4 contribute:1 location:2 philipp:1 firstly:1 zhang:1 mathematical:3 rollout:1 along:1 c2:3 become:1 corridor:6 koltun:2 fps:1 wierstra:1 consists:2 specialize:1 softens:1 fitting:1 behavioral:1 introduce:1 inter:1 expected:5 indeed:2 andrea:1 themselves:1 frequently:1 bellman:2 salakhutdinov:1 discounted:3 decreasing:1 automatically:1 encouraging:1 little:1 considering:3 increasing:2 becomes:1 provided:1 estimating:1 moreover:2 underlying:2 notation:2 maximizes:1 agnostic:1 panel:7 atari:1 interpreted:1 substantially:3 deepmind:1 emerging:1 developed:1 dharshan:2 finding:1 suite:1 guarantee:1 marian:2 every:1 act:1 continual:1 demonstrates:1 uk:1 control:8 kelvin:1 before:1 negligible:1 understood:1 positive:1 treat:1 soyer:1 mach:1 sutton:1 laurent:1 path:1 abuse:1 might:5 chose:1 twice:1 au:2 equivalence:1 suggests:1 challenging:2 range:1 averaged:1 harmeling:1 lecun:1 practice:4 razvan:4 procedure:3 demis:1 area:3 riedmiller:1 significantly:8 thought:1 boyd:1 donald:2 arcade:1 petersen:1 get:4 close:1 selection:1 operator:2 gelman:1 yee:1 bellemare:2 optimize:1 equivalent:1 map:3 dean:1 center:1 helen:1 starting:2 independently:1 jimmy:1 focused:1 hadsell:3 stabilizing:1 simplicity:1 immediately:1 estimator:1 continued:1 d1:2 dominate:1 regularize:2 deriving:1 stability:4 notion:1 banino:1 limiting:1 imagine:1 play:2 suppose:1 yishay:1 magazine:1 us:2 trick:1 trend:1 roy:1 expensive:1 particularly:3 storkey:1 observed:4 role:3 levine:6 quicker:1 preprint:1 solved:1 capture:2 parameterize:1 worst:1 boca:1 region:2 revisiting:1 connected:1 
adv:3 episode:2 counter:2 principled:1 intuition:3 environment:11 dempster:1 rmsprop:1 complexity:1 reward:15 instructive:1 dynamic:3 trained:4 singh:2 solving:1 purely:2 negatively:2 efficiency:4 eric:1 czarnecki:2 easily:3 joint:4 po:1 various:2 represented:1 regularizer:1 laser:3 effective:3 london:1 describe:2 artificial:4 visuomotor:1 outcome:1 quite:1 dominating:1 solve:2 forcement:1 say:1 jointly:3 itself:1 laird:1 final:7 cristian:1 advantage:3 rr:1 propose:3 ckl:6 combining:1 achieve:5 schaul:2 ent:9 convergence:1 double:1 undiscounted:1 darrell:1 produce:1 silver:5 converges:2 object:4 help:3 depending:1 develop:1 derive:1 pose:1 andrew:2 measured:1 erent:1 school:1 tim:1 progress:1 solves:1 auxiliary:3 implemented:1 resemble:1 come:1 indicate:1 distilling:2 differ:4 direction:7 guided:2 goroshin:1 attribute:1 alexandru:1 stochastic:4 opened:1 exploration:14 human:2 mcallester:1 crc:1 require:2 behaviour:10 feeding:1 abbeel:5 wall:1 designate:1 exploring:3 considered:3 hall:1 visually:2 exp:7 algorithmic:5 rawlik:1 matthew:1 desjardins:1 optimizer:1 entropic:1 a2:2 early:1 commonality:3 ruslan:1 proc:1 travel:1 ross:1 amos:1 stefan:1 bapst:1 clearly:1 ewrl:1 aim:2 denil:1 rusu:2 clune:1 derived:4 focus:3 ax:1 june:1 legg:1 consistently:1 methodological:1 likelihood:3 aka:1 contrast:2 normalizer:1 centroid:4 baseline:12 sense:3 inference:4 niculescu:1 typically:2 transferring:1 a0:2 initially:2 expand:1 going:1 choromanska:1 comprising:1 issue:1 overall:1 flexible:1 among:2 constrained:1 raised:1 fairly:1 platform:1 equal:1 distilled:45 shaped:1 beach:1 having:1 runtimes:1 optimising:1 veness:2 chapman:1 koray:5 unsupervised:3 piotr:1 icml:2 tabular:5 mimic:4 others:2 report:2 future:1 yoshua:3 few:1 mirza:1 richard:1 randomly:4 simultaneously:1 divergence:4 preserve:1 individual:3 replaced:1 jeffrey:1 n1:1 harley:1 ab:1 interest:2 ostrovski:1 possibility:1 mnih:5 mining:1 evaluation:1 severe:1 adjust:1 joel:2 kirkpatrick:2 mixture:2 extreme:1 bracket:1 
navigation:5 misha:1 parameterising:1 hubert:1 encourage:1 worker:3 partial:1 arthur:2 experience:1 fox:5 incomplete:1 puigdomenech:1 taylor:1 walk:1 a3c:27 theoretical:1 instance:2 column:14 soft:11 caruana:2 assignment:1 maximization:5 applicability:1 cost:7 distill:5 deviation:1 uniform:2 too:1 tishby:2 reported:1 learnt:4 adaptively:1 st:48 person:2 lstm:1 international:1 stay:1 vijayakumar:1 probabilistic:2 informatics:1 picking:1 diverge:1 michael:1 quickly:2 concrete:1 again:3 aaai:2 containing:1 adversely:1 inefficient:1 leading:1 wojciech:2 return:15 derivative:2 volodymyr:4 diversity:2 haoran:1 centred:1 star:1 b2:3 bold:1 ioannis:2 int:8 vi:6 later:1 view:1 try:1 closed:1 jason:1 doing:1 start:3 bayes:2 option:6 competitive:1 lipson:1 ass:1 ni:1 roll:1 convolutional:1 maximized:3 yield:1 generalize:1 bayesian:1 norouzi:1 kavukcuoglu:5 produced:1 trajectory:4 worth:1 visualizes:1 simultaneous:1 reach:2 sharing:3 trevor:1 against:1 energy:1 frequency:1 involved:1 james:2 naturally:2 di:2 couple:2 stop:1 sampled:2 propagated:1 adjusting:1 knowledge:4 hilbert:1 shaping:2 actually:1 back:2 lample:1 higher:2 tom:2 improved:2 evaluated:2 though:1 mez:1 just:2 clock:1 hand:2 trust:2 su:2 mehdi:1 interfere:2 stably:2 lei:1 believe:4 mdp:1 dqn:1 facilitate:3 usa:2 lillicrap:1 effect:3 contain:1 logits:1 multiplier:1 regularization:22 hence:1 alternating:8 moritz:1 leibler:1 semantic:1 neal:1 konrad:1 game:3 bowling:1 encourages:2 naftali:1 whereby:1 transferable:2 generalized:1 whye:1 stone:1 demonstrate:1 mohammad:1 temperature:3 reasoning:1 meaning:1 variational:1 novel:1 recently:1 parikh:1 charles:1 common:9 rl:14 twist:1 volume:2 linking:1 association:2 yosinski:1 refer:1 distillation:12 significant:2 ai:4 tuning:1 grid:5 hp:3 centre:1 stable:3 actor:4 longer:1 depiction:2 badia:1 mirowski:1 base:1 sergio:1 posterior:1 own:1 recent:2 showed:1 perspective:1 optimizing:4 optimizes:1 chelsea:1 inf:1 scenario:7 sixin:1 outperforming:1 success:1 approximators:3 
victor:1 seen:1 additional:12 somewhat:1 greater:3 employed:1 determine:2 maximize:1 converge:2 july:1 stephen:1 multiple:6 emilio:1 reduces:2 borja:1 technical:1 faster:10 match:1 pakman:1 long:2 raia:3 a1:2 qi:4 controlled:1 involving:1 variant:4 expectation:1 arxiv:4 iteration:1 sometimes:1 sergey:6 hado:1 achieved:1 robotics:1 c1:2 proposal:1 addition:3 separately:2 annealing:1 crucial:1 appropriately:1 extra:3 operate:1 unlike:2 posse:1 exhibited:1 ascent:2 shane:1 quan:2 jordan:1 call:1 presence:1 door:1 bengio:3 easy:2 variety:2 affect:4 variate:1 plateauing:1 carlin:1 architecture:6 identified:1 andreas:1 idea:1 easgd:1 shift:1 bottleneck:1 whether:1 a0t:6 bridging:1 akin:1 suffer:1 peter:1 action:19 repeatedly:1 deep:23 heess:1 generally:2 detailed:1 conduit:1 amount:7 discount:1 locally:1 outperform:1 notice:1 bot:2 estimated:1 arising:1 per:1 blue:1 naddaf:1 hyperparameter:7 affected:1 georg:1 visitation:1 key:1 four:4 trpo:1 demonstrating:1 procedural:1 achieving:1 distills:2 drawn:1 changing:1 clarity:1 leibo:1 counteract:1 inverse:3 parameterized:5 run:9 fourth:1 procedurally:2 uncertainty:1 throughout:1 family:1 yann:1 decision:1 appendix:1 layer:1 fl:1 nan:1 gomez:1 encountered:3 strength:1 insufficiently:1 precisely:1 alex:2 ri:7 tag:4 aspect:1 integrand:1 speed:1 performing:2 martin:1 transferred:1 ri0:1 softened:4 popularized:1 alternate:2 ball:1 combination:1 vladlen:2 across:14 describes:1 increasingly:1 em:2 slightly:1 making:3 intuitively:2 interference:5 unregularized:1 ponder:1 turn:1 mechanism:1 needed:1 wrt:2 fed:1 finn:1 serf:1 end:4 antonoglou:2 gulcehre:1 available:1 observe:5 worthwhile:2 hierarchical:2 stig:1 simulating:1 distinguished:1 subtracted:1 alternative:1 robustness:3 encounter:1 slower:2 hat:1 hassabis:1 cent:7 original:2 top:1 clustering:1 include:1 graphical:1 giving:1 build:1 especially:1 society:1 objective:10 move:1 quantity:1 occurs:1 hal:1 usual:1 traditional:1 exhibit:1 gradient:20 iclr:5 fabio:1 separate:5 
fidjeland:1 nachum:1 seven:3 sethu:1 argue:4 unstable:3 reason:1 besides:1 length:1 code:1 tuomas:1 illustration:3 insufficient:1 balance:1 innovation:1 negative:1 rise:2 ba:1 haarnoja:1 design:1 stern:1 policy:127 redefined:1 boltzmann:3 teh:1 allowing:1 perform:2 observation:1 unknown:1 markov:1 kumaran:2 daan:1 finite:1 caglar:1 behave:1 optional:1 january:1 viola:1 hinton:2 extended:1 ofir:1 prematurely:1 gridworld:1 mansour:1 peleato:1 raton:1 introduced:2 david:6 eckstein:1 required:1 kl:40 extensive:1 optimized:3 edi:1 learned:1 tensorflow:1 nip:5 able:3 beyond:3 usually:1 below:2 challenge:1 summarize:3 built:1 max:4 including:1 green:1 memory:1 royal:1 power:1 critical:1 natural:2 rely:1 regularized:9 force:1 difficulty:1 pause:1 mizil:1 scheme:2 improve:3 mdps:1 carried:1 regularise:5 taming:1 faced:1 literature:3 prior:3 schulman:3 python:1 understanding:1 discovery:1 asymptotic:2 regularisation:2 relative:1 subfield:1 expect:1 loss:4 graf:2 parisotto:1 interesting:4 versus:2 geoffrey:1 toussaint:2 agent:19 sufficient:1 s0:1 rubin:2 playing:2 share:2 interfering:2 pi:2 critic:1 row:2 placed:2 keeping:1 asynchronous:2 heading:1 guide:3 allow:3 edinburgh:1 benefit:1 distributed:2 overcome:1 curve:6 default:1 world:5 transition:1 rich:7 maze:5 opper:1 dale:1 reinforcement:20 jump:1 sifre:1 premature:1 founded:1 approximate:1 observable:1 jaderberg:1 kullback:1 satinder:1 ml:2 sequentially:2 instantiation:2 reveals:1 uai:1 b1:2 thep:1 alternatively:1 spectrum:1 search:3 latent:3 iterative:1 vergence:1 decomposes:1 table:3 promising:1 ballard:1 nature:2 learn:18 transfer:29 robust:6 nicolas:1 ca:1 elastic:2 improving:2 schuurmans:1 untrained:1 complex:9 european:1 domain:3 marc:3 did:2 anna:1 main:1 arrow:1 noise:2 hyperparameters:3 xu:1 andrei:2 sub:1 position:5 col:14 replay:1 jmlr:1 third:1 learns:6 tang:1 theorem:1 down:1 specific:9 navigate:1 explored:1 experimented:1 evidence:1 essential:1 bucila:2 workshop:3 corr:1 gained:1 importance:1 conditioned:1 
hod:1 horizon:2 sparser:1 easier:2 gap:1 chen:1 entropy:34 timothy:1 simply:4 likely:1 explore:2 visual:1 vinyals:1 adjustment:1 ordered:1 partially:2 scalar:1 van:1 sadik:1 corresponds:1 relies:1 conditional:1 goal:4 sorted:1 adria:1 king:1 towards:5 prioritized:1 room:5 shared:9 admm:2 absence:1 jeff:1 infinite:1 specifically:1 operates:1 semantically:2 averaging:3 uniformly:2 reducing:1 degradation:1 beattie:1 total:1 meaningful:2 indicating:1 colmenarejo:1 guillaume:2 support:1 arises:1 jonathan:1 oriol:1 reg:2 regularizing:2 phenomenon:1 |
Online Learning of Optimal Bidding Strategy
in Repeated Multi-Commodity Auctions
Sevi Baltaoglu
Cornell University
Ithaca, NY 14850
[email protected]
Lang Tong
Cornell University
Ithaca, NY 14850
[email protected]
Qing Zhao
Cornell University
Ithaca, NY 14850
[email protected]
Abstract
We study the online learning problem of a bidder who participates in repeated
auctions. With the goal of maximizing his T-period payoff, the bidder determines
the optimal allocation of his budget among his bids for K goods at each period.
As a bidding strategy, we propose a polynomial-time algorithm, inspired by the
dynamic programming approach to the knapsack problem. The proposed algorithm,
referred to as dynamic programming on discrete set (DPDS), achieves a regret
order of O(√(T log T)). By showing that the regret is lower bounded by Ω(√T) for
any strategy, we conclude that DPDS is order optimal up to a √(log T) term. We
evaluate the performance of DPDS empirically in the context of virtual trading in
wholesale electricity markets by using historical data from the New York market.
Empirical results show that DPDS consistently outperforms benchmark heuristic
methods that are derived from machine learning and online learning approaches.
1 Introduction
We consider the problem of optimal bidding in a multi-commodity uniform-price auction (UPA) [1],
which promotes the law of one price for identical goods. UPA is widely used in practice. Examples
include spectrum auction, the auction of treasury notes, the auction of emission permits (UK), and
virtual trading in the wholesale electricity market, which we discuss in detail in Sec. 1.1.
A mathematical abstraction of multi-commodity UPA is as follows. A bidder has K goods to bid on
at an auction. With the objective to maximize his T-period expected profit, at each period, the bidder
determines how much to bid for each good subject to a budget constraint.
In the bidding period t, if a bid x_{t,k} for good k is greater than or equal to its auction clearing price
λ_{t,k}, then the bid is cleared, and the bidder pays λ_{t,k}. His revenue resulting from the cleared bid
will be the good's spot price (utility) γ_{t,k}. In particular, the payoff obtained from good k at period
t is (γ_{t,k} − λ_{t,k}) 1{x_{t,k} ≥ λ_{t,k}}, where 1{x_{t,k} ≥ λ_{t,k}} indicates whether the bid is cleared. Let
λ_t = [λ_{t,1}, ..., λ_{t,K}]^⊤ and γ_t = [γ_{t,1}, ..., γ_{t,K}]^⊤ be the vectors of auction clearing and spot market
prices at period t, respectively. Similarly, let x_t = [x_{t,1}, ..., x_{t,K}]^⊤ be the vector of bids for period
t. We assume that (λ_t, γ_t) are drawn from an unknown joint distribution and, in our analysis, are
independent and identically distributed (i.i.d.) over time.1
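The per-period payoff model above is simple enough to state directly in code. The following is a minimal sketch (the function name and argument order are ours): a bid for good k clears exactly when it is at least the clearing price, in which case the bidder pays the clearing price and collects the spot price.

```python
def period_payoff(bids, clearing, spot):
    """Realized payoff of a single auction period.

    A bid for good k clears when bids[k] >= clearing[k]; each cleared
    bid contributes spot[k] - clearing[k] to the period payoff.
    """
    return sum(g - c for x, c, g in zip(bids, clearing, spot) if x >= c)
```

For instance, with bids (2, 0), clearing prices (1.5, 1.0), and spot prices (3.0, 5.0), only the first bid clears, giving a payoff of 1.5; the zero bid never clears when prices have positive support.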
At the end of each period, the bidder observes the auction clearing and spot prices of all goods.
Therefore, before choosing the bid of period t, all the information the bidder has is a vector I_{t−1}
containing his observation and decision history {x_i, λ_i, γ_i}_{i=1}^{t−1}. Consequently, a bidding policy π of
a bidder is defined as a sequence of decision rules, i.e., π = (π_0, π_1, ..., π_{T−1}), such that, at time t ≥ 1,
1 This implies that the auction clearing price is independent of bid x_t, which is a reasonable assumption for
any market where an individual's bid has negligible impact on the market price.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
π_{t−1} maps the information history I_{t−1} to the bid x_t of period t. The performance of any bidding
policy π is measured by its regret, which is defined by the difference between the total expected
payoff of policy π and that of the optimal bidding strategy under known distribution of (λ_t, γ_t).
1.1 Motivating applications
The mathematical abstraction introduced above applies to virtual trading in the U.S. wholesale
electricity markets that are operated under a two-settlement framework. In the day-ahead (DA)
market, the independent system operator (ISO) receives offers to sell and bids to buy from generators
and retailers for each hour of the next day. To determine the optimal DA dispatch of the next day and
DA electricity prices at each location, ISO solves an economic dispatch problem with the objective of
maximizing social surplus while taking transmission and operational constraints into account. Due
to system congestion and losses, wholesale electricity prices vary from location to location.2 In the
real-time (RT) market, ISO adjusts the DA dispatch according to the RT operating conditions, and the
RT wholesale price compensates deviations in the actual consumption from the DA schedule.
The differences between DA and RT prices occur frequently both as a result of generators and
retailers exercising locational market power [2] and as a result of price spikes in the RT due to
unplanned outages and unpredictable weather conditions [3]. To promote price convergence between
DA and RT markets, in the early 2000s, virtual trading was introduced [4]. Virtual trading is a
financial mechanism that allows market participants and external financial entities to arbitrage on the
differences between DA and RT prices. Empirical and analytical studies have shown that increased
competition in the market due to virtual trading results in price convergence and increased market
efficiency [2, 3, 5].
Virtual transactions make up a significant portion of the wholesale electricity markets. For example,
the total volume of cleared virtual transactions in five big ISO markets was 13% of the total load in
2013 [4]. In the same year, total payoff resulting from all virtual transactions was around 250 million
dollars in the PJM market [2] and 45 million dollars in NYISO market [6].
A bid in virtual trading is a bid to buy (sell) energy in the DA market at a specific location with an
obligation to sell (buy) back exactly the same amount in the RT market at the same location if the bid
is cleared (accepted). Specifically, a bid to buy in the DA market is cleared if the offered bid price is
higher than the DA market price. Similarly, a bid to sell in the DA market is cleared if it is below the
DA market price. In this context, different locations and/or different hours of the day are the set of
goods to bid on. The DA prices are the auction clearing prices, and the RT prices are the spot prices.
The problem studied here may also find applications in other types of repeated auctions where the
auction may be of the double, uniform-price, or second-price types. For example, in the case of
online advertising auctions [7], different goods can correspond to different types of advertising space
an advertiser may consider to bid on.
1.2 Main results and related work
We propose an online learning approach to the algorithmic bidding under budget constraints in
repeated multi-commodity auctions. The proposed approach falls in the category of empirical risk
minimization (ERM) also referred to as the follow the leader approach. The main challenge here is
that optimizing the payoff (risk) amounts to solving a multiple choice knapsack problem (MCKP)
that is known to be NP hard [8]. The proposed approach, referred to as dynamic programming on
discrete set (DPDS), is inspired by a pseudo-polynomial dynamic programming approach to 0-1
Knapsack problems. DPDS allocates the limited budget of the bidder among K goods in polynomial
time both in terms of the number of goods K and in terms of the time horizon T. We show that the
expected payoff of DPDS converges to that of the optimal strategy under known distribution at a rate
no slower than √(log t / t), which results in a regret upper bound of O(√(T log T)). By showing that, for
any bidding strategy, the regret is lower bounded by Ω(√T), we prove that DPDS is order optimal up
to a √(log T) term. We also evaluate the performance of DPDS empirically in the context of virtual
trading by using historical data from the New York energy market. Our empirical results show that
2 For example, transmission congestion may prevent scheduling the least expensive resources at some
locations.
DPDS consistently outperforms benchmark heuristic methods that are derived from standard machine
learning methods.
The problem formulated here can be viewed in multiple machine learning perspectives. We highlight
below several relevant existing approaches. Since the bidder can calculate the reward that could have
been obtained by selecting any given bid value regardless of its own decision, our problem falls into
the category of full-feedback version of multi-armed bandit (MAB) problem, referred to as experts
problem, where the reward of all arms (actions) are observable at the end of each period regardless of
the chosen arm. For the case of finite number of arms, Kleinberg et al. [9] showed that, for stochastic
setting, constant regret is achievable by choosing the arm with the highest average reward at each
period. A special case of the adversarial setting was studied
by Cesa-Bianchi et al. [10] who provided
?
matching upper and lower bounds in the order of ?( T ). Later, Freund and Schapire [11] and Auer
et al. [12] showed that the Hedge algorithm, a variation of weighted majority algorithm [13], achieves
the matching bound for the general setting. These results, however, do not apply to experts problems
with continuous action spaces.
The stochastic experts problem where the set of arms is an uncountable compact metric space (X , d)
rather than finite was studied by Kleinberg and Slivkins [14] (see [15] for an extended version). Since
there are uncountable number of arms, it is assumed that, in each period, a payoff function drawn from
an i.i.d. distribution is observed rather than the individual payoff of each arm. Under the assumption
of Lipschitz expected payoff function, they showed that the instance-specific regret of any algorithm is
lower bounded by Ω(√T). They also showed that their algorithm, NaiveExperts, achieves a regret
upper bound of O(T^β) for any β > (b + 1)/(b + 2), where b is the isometry invariant of the metric
space. However, NaiveExperts is computationally intractable in practice because the computational
complexity of its direct implementation grows exponentially with the dimension (number of goods in
our case). Furthermore, the lower bound in [14] does not imply a lower bound for our problem with
a specific payoff. Krichene et al. [16] studied the adversarial setting and proposed an extension of
the Hedge algorithm, which achieves O(√(T log T)) regret under the assumption of Lipschitz payoff
functions. For our problem, it is reasonable to assume that the expected payoff function is Lipschitz;
yet it is clear that, at each period, the payoff realization is a step function which is not Lipschitz.
Hence, the Lipschitz assumption of [16] doesn't hold in our setting.
Stochastic gradient descent methods, which have low computational complexity, have been extensively
studied in the literature of continuum-armed bandit [17, 18, 19]. However, either the concavity or
the unimodality of the expected payoff function is required for regret guarantees of these methods to
hold. This may not be the case in our problem depending on the underlying distribution of prices.
A relevant work that takes an online learning perspective for the problem of a bidder engaging in
repeated auctions is Weed et al. [7]. They are motivated by online advertising auctions and studied
the partial information setting of the same problem as ours but without a budget constraint. Under the
margin condition, i.e., the probability of auction price occurring in close proximity of mean utility is
bounded, they showed that their algorithm, inspired by the UCB1 algorithm [20], achieves regret that
ranges from O(log T) to O(√(T log T)) depending on how tight the margin condition is. They also
provided matching lower bounds up to a logarithmic factor. However, their lower bound does not
imply a bound for the full information setting we study here. Also, the learning algorithm in [7] does
not apply here because the goods are coupled through the budget constraint in our case. Furthermore,
we do not have margin condition, and we allow the utility of the good to depend on the auction price.
Some other examples of literature on online learning in repeated auctions studied the problem of an
advertiser who wants to maximize the number of clicks with a budget constraint [21, 22], or that of
a seller who tries to learn the valuation of its buyer in a posted price auction [23, 24]. The settings
considered in those problems are considerably different from that studied here in the implementation
of budget constraints [21, 22], and in the strategic behavior of the bidder [23, 24].
2 Problem formulation
The total expected payoff at period t given bid x_t can be expressed as

r(x_t) = E((γ_t − λ_t)^⊤ 1{x_t ≥ λ_t} | x_t),

where the expectation is taken using the joint distribution of (λ_t, γ_t), and 1{x_t ≥ λ_t} is the vector of
indicator functions with the k-th entry corresponding to 1{x_{t,k} ≥ λ_{t,k}}. We assume that the payoff
(γ_t − λ_t)^⊤ 1{x_t ≥ λ_t} obtained at each period is a bounded random variable with support in [l, u],3
and the auction prices are drawn from a distribution with positive support. Hence, a zero bid for any
good is equivalent to not bidding because it will not get cleared.
The objective is to determine a bidding policy π that maximizes the expected T-period payoff subject
to a budget constraint for each individual period:

maximize_π   E( Σ_{t=1}^T r(x_t^π) )
subject to   ‖x_t^π‖_1 ≤ B,   for all t = 1, ..., T,            (1)
             x_t^π ≥ 0,       for all t = 1, ..., T,

where B is the auction budget of the bidder, x_t^π denotes the bid determined by policy π, and x_t^π ≥ 0
is equivalent to x_{t,k}^π ≥ 0 for all k ∈ {1, 2, ..., K}.
2.1 Optimal solution under known distribution
If the joint distribution f(·, ·) of λ_t and γ_t is known, the optimization problem (1) decouples to
solving for each time instant separately. Since (λ_t, γ_t) is i.i.d. over t, an optimal solution under
known model does not depend on t and is given by

x* = arg max_{x_t ∈ F} r(x_t),                                   (2)

where F = {x ∈ ℝ^K : x ≥ 0, ‖x‖_1 ≤ B} is the feasible set of bids. The optimal solution x* may not
be unique, and it may not have a closed form. The following example illustrates a case where there
isn't a closed-form solution and shows that, even in the case of known distribution, the problem is a
combinatorial stochastic optimization, and it is not easy to calculate an optimal solution.
Example. Let λ_t and γ_t be independent, λ_{t,k} be exponentially distributed with mean λ̄_k > 0, and
the mean of γ_{t,k} be γ̄_k > 0 for all k ∈ {1, ..., K}. Since not bidding for good k is optimal if γ̄_k ≤ 0,
we exclude the case γ̄_k ≤ 0 without loss of generality. For this example, we can use the concavity of
r(x) in the interval [0, γ̄], where γ̄ = [γ̄_1, ..., γ̄_K]^⊤, to obtain the unique optimal solution x*, which
is characterized by

x*_k = γ̄_k,                                                if Σ_{k=1}^K γ̄_k ≤ B,
x*_k = 0,                                                   if Σ_{k=1}^K γ̄_k > B and γ̄_k/λ̄_k < ν*,
x*_k = the x satisfying (γ̄_k − x) e^{−x/λ̄_k}/λ̄_k = ν*,     if Σ_{k=1}^K γ̄_k > B and γ̄_k/λ̄_k ≥ ν*,

where the Lagrange multiplier ν* > 0 is chosen such that ‖x*‖_1 = B is satisfied. This solution takes
the form of a "water-filling" strategy. More specifically, if the budget constraint is not binding, then
the optimal solution is to bid γ̄_k for every good k. However, in the case of a binding budget constraint,
the optimal solution is determined by the bid value at which the marginal expected payoff associated
with each good k is equal to min(ν*, γ̄_k/λ̄_k), and this bid value cannot be expressed in closed form.
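For this exponential example, the water-filling bids can be computed numerically: the marginal expected payoff of good k at bid x is (γ̄_k − x)e^{−x/λ̄_k}/λ̄_k, which is decreasing on [0, γ̄_k], so a nested bisection, one over the multiplier ν* and one over each good's bid, recovers x*. The sketch below is ours (function name and tolerance are illustrative) and applies only under the assumptions of the example.

```python
import math

def water_filling_bids(gamma_bar, lam_bar, B, tol=1e-9):
    """Optimal bids when clearing prices are exponential with means
    lam_bar[k] and spot prices have means gamma_bar[k] > 0."""
    K = len(gamma_bar)
    # Non-binding budget: bid the mean spot price of every good.
    if sum(gamma_bar) <= B:
        return list(gamma_bar)

    def marginal(k, x):
        # Marginal expected payoff of good k at bid x.
        return (gamma_bar[k] - x) * math.exp(-x / lam_bar[k]) / lam_bar[k]

    def bid_at(k, nu):
        # Bid solving marginal(k, x) = nu on [0, gamma_bar[k]]; the
        # marginal is decreasing there, so bisection applies.
        if marginal(k, 0.0) < nu:
            return 0.0
        lo, hi = 0.0, gamma_bar[k]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if marginal(k, mid) > nu else (lo, mid)
        return 0.5 * (lo + hi)

    # Total spend decreases in nu; bisect nu until the budget binds.
    lo, hi = 0.0, max(marginal(k, 0.0) for k in range(K))
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if sum(bid_at(k, nu) for k in range(K)) > B:
            lo = nu
        else:
            hi = nu
    nu = 0.5 * (lo + hi)
    return [bid_at(k, nu) for k in range(K)]
```

With two goods, mean spot prices (2, 1), mean clearing prices (1, 1), and B = 1, the budget binds and the returned bids sum to B, with more budget allocated to the good with the larger mean spot price.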
We measure the performance of a bidding policy π by its regret4, the difference between the expected
T-period payoff of π and that of x*, i.e.,

R_T^π(f) = Σ_{t=1}^T E(r(x*) − r(x_t^π)),                        (3)

where the expectation is taken with respect to the randomness induced by π.
is monotonically increasing. Hence, we are interested in policies with sub-linear regret growth.
3 This is reasonable in the case of virtual trading because DA and RT prices are bounded due to offer/bid caps.
4 The regret definition used here is the same as in [14]. This definition is also known as pseudo-regret in the
literature [25].

3 Online learning approach to optimal bidding
The idea behind our approach is to maximize the sample mean of the expected payoff function, which
is an ERM approach [26]. However, we show that a direct implementation of ERM is NP-hard. Hence,
we propose a polynomial-time algorithm that is based on dynamic programming on a discretized
feasible set. We show that our approach achieves the order optimal regret.
3.1 Approximate expected payoff function and its optimization
Regardless of the bidding policy, one can observe the auction and spot prices of past periods.
Therefore, the average payoff that could have been obtained by bidding x up to the current period can
be calculated for any fixed value of x ∈ F. Specifically, the average payoff r̂_{t,k}(x_k) for a good k as
a function of the bid value x_k can be calculated at period t + 1 by using observations up to t, i.e.,

r̂_{t,k}(x_k) = (1/t) Σ_{i=1}^t (γ_{i,k} − λ_{i,k}) 1{x_k ≥ λ_{i,k}}.
For example, at the end of the first period, r̂_{1,k}(x_k) = (γ_{1,k} − λ_{1,k}) 1{x_k ≥ λ_{1,k}}, as illustrated in Fig. 1a.
For t ≥ 2, this can be expressed recursively:

r̂_{t,k}(x_k) = ((t−1)/t) r̂_{t−1,k}(x_k),                              if x_k < λ_{t,k},
r̂_{t,k}(x_k) = ((t−1)/t) r̂_{t−1,k}(x_k) + (1/t)(γ_{t,k} − λ_{t,k}),   if x_k ≥ λ_{t,k}.      (4)
Since each observation introduces a new breakpoint, and the value of the average payoff function is
constant between two consecutive breakpoints, we observe that r̂_{t,k}(x_k) is a piece-wise constant
function with at most t breakpoints. Let the vector of order statistics of the observed auction clearing
prices {λ_{i,k}}_{i=1}^t and zero be λ^{(k)} = [0, λ_{(1),k}, ..., λ_{(t),k}]^⊤, and let the vector of associated
average payoffs be r^{(k)}, i.e., r_i^{(k)} = r̂_{t,k}(λ_i^{(k)}). Then, r̂_{t,k}(x_k) can be expressed by the pair
(λ^{(k)}, r^{(k)}), e.g., see Fig. 1b.
Figure 1: Piece-wise constant average payoff function of good k ((a) t = 1; (b) t = 4).
For a vector $y$, let $y_{m:n} = (y_m, y_{m+1}, \ldots, y_n)$ denote the sequence of entries from $m$ to $n$. Initialize $\left(\lambda^{(k)}, r^{(k)}\right) = (0, 0)$ at the beginning of the first period. Then, at each period $t \ge 1$, the pair $\left(\lambda^{(k)}, r^{(k)}\right)$ can be updated recursively as follows:
$$\left(\lambda^{(k)}, r^{(k)}\right) = \left( \left(\lambda^{(k)}_{1:i_k},\, \lambda_{t,k},\, \lambda^{(k)}_{i_k+1:t}\right)^{\top},\; \left(\tfrac{t-1}{t}\, r^{(k)}_{1:i_k},\; \tfrac{t-1}{t}\, r^{(k)}_{i_k:t} + \tfrac{1}{t}\,(p_{t,k} - \lambda_{t,k})\right)^{\top} \right), \quad (5)$$
where $i_k = \max_{i:\, \lambda^{(k)}_i < \lambda_{t,k}} i$ at period $t$.
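The update in (5) can be sketched as a sorted-breakpoint data structure. The code below is an illustrative Python version (names such as `update_breakpoints` are my own): it inserts the new clearing price, rescales the old averages by (t-1)/t, and adds the new weighted spread to the entries at or above the new breakpoint.

```python
import bisect

def update_breakpoints(lams, rs, lam_t, p_t, t):
    """Period-t update of the pair (lambda^(k), r^(k)) in the spirit of Eq. (5).

    lams: sorted breakpoints beginning with 0; rs: average payoffs at them.
    """
    i = bisect.bisect_left(lams, lam_t)   # first index with lams[i] >= lam_t
    scale = (t - 1) / t
    inc = (p_t - lam_t) / t               # new spread observation, weighted 1/t
    new_lams = lams[:i] + [lam_t] + lams[i:]
    new_rs = ([scale * r for r in rs[:i]]           # strictly below the new breakpoint
              + [scale * rs[i - 1] + inc]           # value at the new breakpoint
              + [scale * r + inc for r in rs[i:]])  # at or above it
    return new_lams, new_rs

lams, rs = [0.0], [0.0]
for t, (lam, p) in enumerate([(0.50, 0.70), (0.55, 0.40), (0.45, 0.80)], start=1):
    lams, rs = update_breakpoints(lams, rs, lam, p, t)
print(lams)  # -> [0.0, 0.45, 0.5, 0.55]
```

After the three toy observations, the stored values match direct averaging, e.g. the payoff at breakpoint 0.5 is (0.20 + 0.35)/3.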
Consequently, the overall average payoff function $\hat{r}_t(x)$ can be expressed as a sum of the average payoff functions of the individual goods. Instead of the unknown expected payoff $r(x)$, consider the maximization of the average payoff function, which corresponds to the ERM approach, i.e.,
$$\max_{x \in F} \hat{r}_t(x) = \max_{x \in F} \sum_{k=1}^{K} \hat{r}_{t,k}(x_k). \quad (6)$$
Due to the piece-wise constant structure, choosing $x_k = \lambda^{(k)}_i$ for some $i \in \{1, \ldots, t+1\}$ contributes the same amount to the overall payoff as choosing any $x_k \in \left[\lambda^{(k)}_i, \lambda^{(k)}_{i+1}\right)$ if $i < t+1$, and any $x_k \ge \lambda^{(k)}_i$ if $i = t+1$. However, choosing $x_k = \lambda^{(k)}_i$ utilizes a smaller portion of the budget. Hence, an optimal solution to (6) can be obtained by solving the following integer linear program:
$$\begin{aligned} \underset{\{z_k\}_{k=1}^{K}}{\text{maximize}} \quad & \sum_{k=1}^{K} {r^{(k)}}^{\top} z_k \\ \text{subject to} \quad & \sum_{k=1}^{K} {\lambda^{(k)}}^{\top} z_k \le B, \\ & \mathbf{1}^{\top} z_k \le 1, \quad \forall k = 1, \ldots, K, \\ & z_{k,i} \in \{0, 1\}, \quad \forall i = 1, \ldots, t+1;\ \forall k = 1, \ldots, K, \end{aligned} \quad (7)$$
where the bid value $x_k = {\lambda^{(k)}}^{\top} z_k$ for good $k$.
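For intuition about the program in (7), here is a tiny brute-force solver over the binary choice vectors $z_k$. It is exponential time and usable only on toy instances, consistent with (7) being NP-hard in general; the instance data and function names are made up.

```python
from itertools import product

def solve_mckp_bruteforce(lams, rs, budget):
    """Exhaustively solve the MCKP of Eq. (7) on a toy instance.

    lams[k][i], rs[k][i]: i-th breakpoint price and average payoff of good k;
    choosing index i means bidding lams[k][i] (index 0 bids nothing).
    """
    best_val, best_choice = 0.0, tuple(0 for _ in lams)
    for choice in product(*(range(len(l)) for l in lams)):
        cost = sum(l[i] for l, i in zip(lams, choice))
        if cost <= budget:
            val = sum(r[i] for r, i in zip(rs, choice))
            if val > best_val:
                best_val, best_choice = val, choice
    return best_val, best_choice

lams = [[0.0, 0.4, 0.6], [0.0, 0.5]]
rs = [[0.0, 0.25, 0.5], [0.0, 0.5]]
print(solve_mckp_bruteforce(lams, rs, budget=1.0))  # -> (0.75, (1, 1))
```

Here bidding 0.6 on good 1 alone yields 0.5, but the budget also admits the pair (0.4, 0.5) with total payoff 0.75.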
Observe that (7) is a multiple choice knapsack problem (MCKP) [8], a generalization of 0-1 knapsack. Unfortunately, (7) is NP-hard [8]. If we had a polynomial-time algorithm that finds an optimal solution $x \in F$ to (6), then we could obtain the solution of (7) in polynomial time too by setting $z_{k,i} = 1$ where $i = \max_{i:\, \lambda^{(k)}_i \le x_k} i$ for each $k$. Therefore, (6) is also NP-hard, and, to the best of our knowledge, there isn't any method in the ERM literature [27], which mostly focuses on classification problems, suitable to implement for the specific problem at hand.
3.2 Dynamic programming on discrete set (DPDS) policy
Next, we present an approach that discretizes the feasible set using intervals of equal length and optimizes the average payoff on this new discrete set via a dynamic program. Although this approach doesn't solve (6) exactly, the solution can be arbitrarily close to the optimum depending on the choice of the interval length, under the assumption of a Lipschitz continuous expected payoff function. To exploit the smoothness implied by Lipschitz continuity, discretization of the continuous feasible set has been used previously in the continuous MAB literature [17, 14]. Unlike that literature, however, in this paper the discretization approach is also utilized to reduce the computational complexity of an NP-hard problem.
Let $\beta_t$ be an integer sequence increasing with $t$ and $D_t = \{0, B/\beta_t, 2B/\beta_t, \ldots, B\}$ as illustrated in Fig. 2. Then, the new discrete set is given as $F_t = \{x \in F : x_k \in D_t,\ \forall k \in \{1, \ldots, K\}\}$. Our goal is to optimize $\hat{r}_t(\cdot)$ on the new set $F_t$ rather than $F$, i.e.,
$$\max_{x_{t+1} \in F_t} \hat{r}_t(x_{t+1}). \quad (8)$$
Figure 2: Example of the discretization of the decision space for good k when t = 4.
Now, we use the dynamic programming approach that has been used to solve 0-1 knapsack problems, including the MCKP given in (7) [28]. However, direct implementation of this approach results in pseudo-polynomial computational complexity in the case of 0-1 knapsack problems. The discretization of the feasible set with equal interval length reduces the computational complexity to polynomial time.
We define the maximum payoff one can collect with budget $b$ among goods $\{1, \ldots, n\}$, when the bid value $x_k$ is restricted to the set $D_t$ for each good $k$, as
$$V_n(b) = \max_{\{x_k\}_{k=1}^{n}:\ \sum_{k=1}^{n} x_k \le b,\ x_k \in D_t\ \forall k}\ \sum_{k=1}^{n} \hat{r}_{t,k}(x_k).$$
Then, the following recursion can be used to solve for $V_K(B)$, which gives the optimal solution to (8):
$$V_n(jB/\beta_t) = \begin{cases} 0 & \text{if } n = 0,\ j \in \{0, 1, \ldots, \beta_t\},\\ \max_{0 \le i \le j} \left( \hat{r}_{t,n}(iB/\beta_t) + V_{n-1}\left((j-i)B/\beta_t\right) \right) & \text{if } 1 \le n \le K,\ j \in \{0, 1, \ldots, \beta_t\}. \end{cases} \quad (9)$$
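The Bellman recursion in (9) can be implemented directly. The sketch below (illustrative names, toy step-shaped payoff functions) computes the optimal value on the grid $D_t$ and backtracks one optimal bid vector; its per-period complexity is $O(K\beta_t^2)$, matching the count of comparisons in the recursion.

```python
def dpds_step(avg_payoffs, beta, budget):
    """Solve (8) via the Bellman recursion (9) on the grid D_t = {0, B/beta, ..., B}.

    avg_payoffs: list of K functions, avg_payoffs[n](x) standing in for r_hat_{t,n}(x).
    Returns (V_K(B), one optimal bid vector).
    """
    K = len(avg_payoffs)
    grid = lambda j: j * budget / beta
    V = [0.0] * (beta + 1)                        # V_0(.) = 0
    arg = [[0] * (beta + 1) for _ in range(K)]
    for n in range(K):
        V_new = [0.0] * (beta + 1)
        for j in range(beta + 1):
            best_v, best_i = -float("inf"), 0
            for i in range(j + 1):
                v = avg_payoffs[n](grid(i)) + V[j - i]
                if v > best_v + 1e-12:            # prefer the smallest bid on ties
                    best_v, best_i = v, i
            V_new[j], arg[n][j] = best_v, best_i
        V = V_new
    bids, j = [], beta                            # backtrack one optimal allocation
    for n in reversed(range(K)):
        i = arg[n][j]
        bids.append(grid(i))
        j -= i
    return V[beta], list(reversed(bids))

# toy instance: two goods with step-shaped average payoff functions
f1 = lambda x: 0.3 if x >= 0.5 else 0.0
f2 = lambda x: 0.2 if x >= 0.25 else 0.0
print(dpds_step([f1, f2], beta=4, budget=1.0))  # -> (0.5, [0.5, 0.25])
```

The tie-breaking toward smaller bids mirrors the observation above that, among bids with equal payoff, the one at the breakpoint uses the smallest portion of the budget.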
This is the Bellman equation where $V_n(b)$ is the maximum total payoff one can collect using remaining budget $b$ and the remaining $n$ goods. Its optimality can be shown via a simple induction argument. Recall that $\hat{r}_{t,n}(0) = 0$ for all $(t, n)$ pairs due to the assumption of positive day-ahead prices.
Recursion (9) can be solved starting from $n = 1$ and proceeding to $n = K$, where, for each $n$, $V_n(b)$ is calculated for all $b \in D_t$. Since the computation of $V_n(b)$ requires at most $\beta_t + 1$ comparisons for any fixed value of $n \in \{1, \ldots, K\}$ and $b \in D_t$, it has a computational complexity on the order of $K\beta_t^2$ once the average payoff values $\hat{r}_{t,n}(x_n)$ for all $x_n \in D_t$ and $n \in \{1, \ldots, K\}$ are given. For each $n \in \{1, \ldots, K\}$, computation of $\hat{r}_{t,n}(x_n)$ for all $x_n \in D_t$ introduces an additional computational complexity of at most on the order of $t$, which can be observed from the update step of $\left(\lambda^{(k)}, r^{(k)}\right)$ given in (5). Hence, the total computational complexity of DPDS is $O(K \max(t, \beta_t^2))$ at each period $t$.
3.3 Convergence and regret of DPDS policy
Under the assumption of Lipschitz continuity, Theorem 1 shows that the value of DPDS converges to the value of the optimal policy under the known model at a rate no slower than $\sqrt{\log t / t}$ if the DPDS algorithm parameter $\beta_t = \lceil t^{\gamma} \rceil$ with $\gamma \ge 1/2$. Consequently, the regret growth rate of DPDS is upper bounded by $O(\sqrt{T \log T})$. If $\gamma = 1/2$, then the computational complexity of the algorithm is bounded by $O(Kt)$ at each period $t$, and the total complexity over the entire horizon is $O(KT^2)$.
Theorem 1 Let $x^{\mathrm{DPDS}}_{t+1}$ denote the bid of the DPDS policy for period $t+1$. If $r(\cdot)$ is Lipschitz continuous on $F$ with $p$-norm and Lipschitz constant $L$, then, for any $\gamma > 0$ and for DPDS parameter choice $\beta_t \ge 2$,
$$\mathbb{E}\left(r(x^*) - r(x^{\mathrm{DPDS}}_{t+1})\right) \le \frac{LK^{1/p}B}{\beta_t} + \sqrt{2(\gamma+1)K+1}\,(u-l)\sqrt{\frac{\log t}{t}} + \frac{4\min\left(u-l,\ LK^{1/p}B\right)\beta_t K}{t^{(\gamma+1)K+1/2}}, \quad (10)$$
and for $\beta_t = \max(\lceil t^{\gamma} \rceil, 2)$ with $\gamma \ge 1/2$,
$$R^{\mathrm{DPDS}}_T(f) \le 2\left(LK^{1/p}B + 4\min\left(u-l,\ LK^{1/p}B\right)\right)\sqrt{T} + 2\sqrt{2(\gamma+1)K+1}\,(u-l)\sqrt{T \log T}. \quad (11)$$
Actually, we can relax the uniform Lipschitz continuity condition. Under the weaker condition of $|r(x^*) - r(x)| \le L\|x^* - x\|_p^q$ for all $x \in F$ and for some constant $L > 0$, the incremental regret bound given in (10) becomes
$$\mathbb{E}\left(r(x^*) - r(x^{\mathrm{DPDS}}_{t+1})\right) \le LK^{q/p}(B/\beta_t)^q + (u-l)\left(\sqrt{2(\gamma+1)K+1}\,\sqrt{\log t / t} + 4\beta_t K\, t^{-(\gamma+1)K-1/2}\right).$$
The proof of Theorem 1 is derived by showing that the value of $x^*_{t+1} = \arg\max_{x \in F_t} r(x)$ converges to the value of $x^*$ due to Lipschitz continuity, and that the value of $x^{\mathrm{DPDS}}_{t+1}$ converges to the value of $x^*_{t+1}$ via the use of a concentration inequality inspired by [20, 17].
Even though the upper bound of the regret in Theorem 1 depends on the budget $B$ linearly, this dependence can be avoided at the expense of an increase in computational complexity. For example, in the literature, the reward is generally assumed to be in the unit interval, i.e., $l = 0$ and $u = 1$, and the expected reward is assumed to be Lipschitz continuous with Euclidean norm and constant $L = 1$. In this case, by following the proof of Theorem 1, we observe that assigning $\gamma = 1/2$ and $\beta_t = \max(\lceil \eta t^{\gamma} \rceil, 2)$ for some $\eta > 0$ gives a regret upper bound of $2B\sqrt{KT}/\eta + 12\sqrt{KT \log T} + \eta$ for $T > \gamma + 1$. Consequently, if $B = O(K)$, then $O(K^{3/4}\sqrt{T} + \sqrt{KT \log T})$ regret is achievable by setting $\eta = K^{3/4}$.
3.4 Lower bound of regret for any bidding policy
We now show that DPDS in fact achieves the slowest possible regret growth. Specifically, Theorem 2 states that, for any bidding policy $\pi$ and horizon $T$, there exists a distribution $f$ for which the regret grows at least proportionally to the square root of the horizon $T$.
Theorem 2 Consider the case where $K = 1$, $B = 1$, and $\lambda_t$ and $p_t$ are independent random variables with distributions
$$f_{\lambda}(\lambda_t) = \frac{1}{\epsilon}\,\mathbf{1}\left\{(1-\epsilon)/2 \le \lambda_t \le (1+\epsilon)/2\right\}$$
and $f_p(p_t) = \mathrm{Bernoulli}(\bar{p})$, respectively. Let $f(\lambda_t, p_t) = f_{\lambda}(\lambda_t) f_p(p_t)$ and $\epsilon = T^{-1/2}/(2\sqrt{5})$. Then, for any bidding policy $\pi$,
$$R^{\pi}_T(f) \ge \left(1/16\sqrt{5}\right)\sqrt{T},$$
either for $\bar{p} = 1/2 + \epsilon$ or for $\bar{p} = 1/2 - \epsilon$.
As seen in Theorem 2, we choose a specific distribution for the auction clearing and spot prices. Observe that, for this distribution, the payoff function is Lipschitz continuous with Lipschitz constant $L = 3/2$, because the magnitude of the derivative of the payoff function satisfies $|r'(x)| \le |\bar{p} - x|/\epsilon \le 3/2$ for $(1-\epsilon)/2 \le x \le (1+\epsilon)/2$ and $r'(x) = 0$ otherwise. So, it satisfies the condition given in Theorem 1.
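For the distribution in Theorem 2, the expected payoff has a simple closed form, $r(x) = \frac{1}{\epsilon}\int_{(1-\epsilon)/2}^{\min(x,(1+\epsilon)/2)} (\bar{p} - \lambda)\,d\lambda$, and a quick numeric check confirms the Lipschitz bound. The sketch below uses an arbitrary $\epsilon = 0.2$ (rather than the $T$-dependent value in the theorem) and $\bar{p} = 1/2 + \epsilon$; all names are mine.

```python
eps = 0.2
p_bar = 0.5 + eps  # one of the two means considered in Theorem 2

def r(x):
    """Closed-form expected payoff E[(p - lam) 1{x >= lam}] when
    lam ~ Uniform[(1-eps)/2, (1+eps)/2] and p ~ Bernoulli(p_bar)."""
    lo, hi = (1 - eps) / 2, (1 + eps) / 2
    a = min(max(x, lo), hi)                  # the bid clears on [lo, a]
    return (p_bar * (a - lo) - (a * a - lo * lo) / 2) / eps

# finite-difference slopes stay within the Lipschitz constant L = 3/2
h = 1e-6
slopes = [abs(r(x + h) - r(x)) / h for x in [0.41, 0.45, 0.5, 0.55, 0.59]]
print(max(slopes) <= 1.5 + 1e-3)  # -> True
```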
The proof of Theorem 2 is obtained by showing that, every time the bid is cleared, an incremental regret greater than $\epsilon/2$ is incurred under the distribution with $\bar{p} = 1/2 - \epsilon$; otherwise, an incremental regret greater than $\epsilon/2$ is incurred under the distribution with $\bar{p} = 1/2 + \epsilon$. However, to distinguish between these two distributions, one needs $\Omega(T)$ samples, which results in a regret lower bound of $\Omega(\sqrt{T})$. The bound is obtained by adapting a similar argument used by [29] in the context of the non-stochastic MAB problem.
4 Empirical study
New York ISO (NYISO), which consists of 11 zones, allows virtual transactions at zonal nodes only. So, we use historical DA and RT prices of these zones from 2011 to 2016 [30]. Since the price for each hour is different at each zone, there are 11 × 24 different locations, i.e., zone-hour pairs, to bid on every day. The prices are per unit (MWh) prices. We also consider buy and sell bids simultaneously for all locations. As explained in Sec. 1.1, a sell bid is a bid to sell in the DA market with an obligation to buy back in the RT market. Hence, the profit of a sell bid at period $t$ is $(\lambda_t - p_t)^{\top}\mathbf{1}\{x_t \le \lambda_t\}$. Generally, an upper bound $\bar{P}$ for the DA prices is known, e.g., $\bar{P} = \$1000$ for NYISO. We convert a sell bid to a buy bid by using $x^{\mathrm{sell}}_t = \bar{P} - x_t$, $\lambda^{\mathrm{sell}}_t = \bar{P} - \lambda_t$, and $p^{\mathrm{sell}}_t = \bar{P} - p_t$ instead of $x_t$, $\lambda_t$, and $p_t$. The NYISO DA market for day $t$ closes at 5:00 am on day $t-1$. Hence, the RT prices of all hours of day $t-1$ cannot be observed before the bid submission for day $t$. Therefore, the most recent information used before the submission for day $t$ consists of the observations from day $t-2$.
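The sell-to-buy reflection can be verified mechanically: reflecting the bid and both prices through the price cap turns a sell bid's profit into the identical buy-bid profit. A minimal sketch with made-up prices (helper names are mine):

```python
P_BAR = 1000.0  # DA price cap, e.g., NYISO

def sell_to_buy(x, lam, p, p_cap=P_BAR):
    """Reflect a sell bid and its prices through the price cap."""
    return p_cap - x, p_cap - lam, p_cap - p

def buy_profit(x, lam, p):
    """Buy bid clears when x >= clearing price; profit is spot minus clearing."""
    return (p - lam) if x >= lam else 0.0

def sell_profit(x, lam, p):
    """Sell bid clears when x <= clearing price; profit is clearing minus spot."""
    return (lam - p) if x <= lam else 0.0

x, lam, p = 40.0, 55.0, 48.0        # made-up prices in $/MWh
xb, lb, pb = sell_to_buy(x, lam, p)
print(sell_profit(x, lam, p) == buy_profit(xb, lb, pb))  # -> True
```

This is why the algorithms below only need to reason about buy bids over 2 × 11 × 24 synthetic locations.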
Figure 3: Cumulative profit trajectory of year y for B = $100,000, for y = 2012, 2013, 2014, 2015, and 2016.
We compare DPDS with three algorithms. One of them is UCBID-GR, inspired by UCBID [7]. At each day, UCBID-GR sorts all locations according to their profitabilities, i.e., their price spread (the difference between DA and RT price) sample means. Then, starting from the most profitable location, UCBID-GR sets the bid of a location equal to its RT price sample mean until there isn't any sufficient budget left.
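A minimal sketch of the UCBID-GR allocation just described (function and variable names are mine, and skipping, rather than stopping at, a location that no longer fits the budget is an assumption):

```python
def ucbid_gr(spread_means, rt_means, budget):
    """Greedy bid allocation in the spirit of UCBID-GR (simplified sketch).

    Bids the RT-price sample mean at the most profitable locations
    (largest price-spread sample means) until the budget runs out.
    """
    bids = [0.0] * len(spread_means)
    order = sorted(range(len(spread_means)),
                   key=lambda k: spread_means[k], reverse=True)
    for k in order:
        if spread_means[k] <= 0 or rt_means[k] > budget:
            continue  # unprofitable, or this bid no longer fits the budget
        bids[k] = rt_means[k]
        budget -= rt_means[k]
    return bids

print(ucbid_gr([3.0, -1.0, 5.0], [40.0, 30.0, 50.0], budget=70.0))
# location 2 is funded first (spread 5); location 0 then exceeds the budget
```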
The second algorithm, referred to as SA, is a variant of the Kiefer-Wolfowitz stochastic approximation method. SA approximates the gradient of the payoff function by using the current observation and updates the bid of each $k$ as follows:
$$x_{t,k} = x_{t-1,k} + a_t\left((p_{t-2,k} - \lambda_{t-2,k})\left(\mathbf{1}\{x_{t-1,k} + c_t \ge \lambda_{t-2,k}\} - \mathbf{1}\{x_{t-1,k} \ge \lambda_{t-2,k}\}\right)\right)/c_t.$$
Then, $x_t$ is projected onto the feasible set $F$.
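The SA update above can be sketched as follows for a single location (a simplified version: clipping to a box stands in for the projection onto F, and the names are mine):

```python
def sa_update(x_prev, lam_obs, p_obs, a_t, c_t, lo=0.0, hi=1000.0):
    """Kiefer-Wolfowitz-style bid update used by the SA baseline.

    One-sided finite-difference gradient estimate from the latest observed
    prices, followed by clipping (a stand-in for the projection onto F).
    """
    grad_est = (p_obs - lam_obs) * (
        (1.0 if x_prev + c_t >= lam_obs else 0.0)
        - (1.0 if x_prev >= lam_obs else 0.0)) / c_t
    return min(max(x_prev + a_t * grad_est, lo), hi)

# if the perturbed bid would have cleared but the actual one did not,
# the estimated gradient pushes the bid upward (when the spread is positive)
print(sa_update(x_prev=48.0, lam_obs=50.0, p_obs=56.0, a_t=10.0, c_t=4.0))  # -> 63.0
```

Note that the gradient estimate is zero whenever the perturbation does not flip the clearing outcome, which is one reason SA adapts slowly in this problem.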
The last algorithm is SVM-GR, which is inspired by the use of support vector machines (SVM) by Tang et al. [31] to determine if a buy or a sell bid is profitable at a location, i.e., if the price spread is positive or negative. Due to the possible correlation of the price spread at a location on day $t$ with the price spreads observed recently at that and also at other locations, the input of the SVM for each location is set as the price spreads of all locations from day $t-7$ to day $t-2$. To test the SVM-GR algorithm for a particular year, for each location, the data from the previous year is used to train the SVM and to determine the average profit, i.e., average price spread, and the bid level that will be accepted with 95% confidence in the event that a buy or a sell bid is profitable. For the test year, at each period, SVM-GR first determines if a buy or a sell bid is profitable for each location. Then, SVM-GR sorts all locations according to their average profits, and, starting from the most profitable location, it sets the bid of a location equal to the bid level with 95% confidence of acceptance until there isn't any sufficient budget left.
To evaluate the performance for a year, the DPDS, UCBID-GR, and SA algorithms were also trained starting from the beginning of the previous year. The algorithm parameter of DPDS was set as $\beta_t = t$, and the step sizes $a_t$ and $c_t$ of SA were set as $20000/t$ and $2000/t^{1/4}$, respectively.
For B = $100,000, the cumulative profit trajectories of five consecutive years are given in Fig. 3. We observe that DPDS obtains a significant profit in all cases, and it outperforms the other algorithms consistently except in 2015, when SVM-GR makes approximately 25% more profit. However, in three out of five years, SVM-GR suffers a considerable loss. In general, UCBID-GR performs quite well except in 2016, and the SA algorithm incurs a loss almost every year.
5 Conclusion
By applying general techniques such as ERM, a discretization approach, and dynamic programming, we derive a practical and efficient algorithm for the algorithmic bidding problem under a budget constraint in repeated multi-commodity auctions. We show that the expected payoff of the proposed algorithm, DPDS, converges to that of the optimal strategy at a rate no slower than $\sqrt{\log t / t}$, which results in an $O(\sqrt{T \log T})$ regret. By showing that the regret is lower bounded by $\Omega(\sqrt{T})$ for any bidding strategy, we prove that DPDS is order optimal up to a $\sqrt{\log T}$ term.
For the motivating application of virtual bidding in electricity markets (see Sec. 1.1), the stochastic
setting, studied in this paper, is natural due to the electricity markets being competitive, which
implies that the existence of an adversary is very unlikely. However, it is also of interest to study the
adversarial setting to extend the results to other applications. For example, the adversarial setting of
our problem is a special case of no-regret learning problem of Simultaneous Second Price Auctions
(SiSPA), studied by Daskalakis and Syrgkanis [32] and Dudik et al. [33].
In particular, to deal with the adversarial setting, it is possible to use our dynamic programming approach as the offline oracle for the Oracle-Based Generalized FTPL algorithm proposed by Dudik et al. [33] if we fix the discretized action set over the whole time horizon. More specifically, let the interval length of the discretization be $B/m$, i.e., $\beta_t = m$. Then, it is possible to show that a 1-admissible translation matrix with $K\lceil \log m \rceil$ columns is implementable with complexity $m$. Consequently, the no-regret result of Dudik et al. [33] holds with a regret bound of $O(K\sqrt{T \log m})$ if we measure the performance of the algorithm against the best action in hindsight in the discretized finite action set rather than in the original continuous action set considered here. Unfortunately, as shown by Weed et al. [7], it is not possible to achieve sublinear regret with a fixed discretization for the specific problem considered in this paper. Hence, further work is required to see if this method can be extended to obtain no-regret learning for the adversarial setting under the original continuous action set.
Acknowledgments
We would like to thank Professor Robert Kleinberg for the insightful discussion.
This work was supported in part by the National Science Foundation under Award 1549989 and by the Army Research Laboratory Network Science CTA under Cooperative Agreement W911NF-09-2-0053.
References
[1] Paul Milgrom. Putting Auction Theory to Work. Cambridge University Press, 2004.
[2] PJM. Virtual transactions in the PJM energy markets. Technical report, Oct 2015. http://www.pjm.com/~/media/committees-groups/committees/mc/20151019-webinar/20151019-item-02-virtual-transactions-in-the-pjm-energy-markets-whitepaper.ashx.
[3] Ruoyang Li, Alva J. Svoboda, and Shmuel S. Oren. Efficiency impact of convergence bidding in the California electricity market. Journal of Regulatory Economics, 48(3):245-284, 2015.
[4] John E. Parsons, Cathleen Colbert, Jeremy Larrieu, Taylor Martin, and Erin Mastrangelo. Financial arbitrage and efficient dispatch in wholesale electricity markets, February 2015. https://ssrn.com/abstract=2574397.
[5] Wenyuan Tang, Ram Rajagopal, Kameshwar Poolla, and Pravin Varaiya. Model and data analysis of two-settlement electricity market with virtual bidding. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 6645-6650, 2016.
[6] David B. Patton, Pallas LeeVanSchaick, and Jie Chen. 2014 state of the market report for the New York ISO markets. Technical report, May 2015. http://www.nyiso.com/public/webdocs/markets_operations/documents/Studies_and_Reports/Reports/Market_Monitoring_Unit_Reports/2014/NYISO2014SOMReport__5-13-2015_Final.pdf.
[7] Jonathan Weed, Vianney Perchet, and Philippe Rigollet. Online learning in repeated auctions. In 29th Annual Conference on Learning Theory, pages 1562-1583, 2016.
[8] Hans Kellerer, Ulrich Pferschy, and David Pisinger. The Multiple-Choice Knapsack Problem, pages 317-347. Springer Berlin Heidelberg, 2004.
[9] Robert Kleinberg, Alexandru Niculescu-Mizil, and Yogeshwer Sharma. Regret bounds for sleeping experts and bandits. In 21st Conference on Learning Theory, pages 425-436, 2008.
[10] Nicolò Cesa-Bianchi, Yoav Freund, David P. Helmbold, David Haussler, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 382-391. ACM, 1993.
[11] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proceedings of the Second European Conference on Computational Learning Theory, pages 23-37. Springer-Verlag, 1995.
[12] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pages 322-331, 1995.
[13] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[14] Robert Kleinberg and Aleksandrs Slivkins. Sharp dichotomies for regret minimization in metric spaces. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 827-846. Society for Industrial and Applied Mathematics, 2010.
[15] Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Bandits and experts in metric spaces. arXiv preprint arXiv:1312.1277v2, 2015.
[16] Walid Krichene, Maximilian Balandat, Claire Tomlin, and Alexandre Bayen. The hedge algorithm on a continuum. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37, pages 824-832. JMLR.org, 2015.
[17] Robert D. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 697-704. MIT Press, 2005.
[18] Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 385-394. Society for Industrial and Applied Mathematics, 2005.
[19] Eric W. Cope. Regret and convergence bounds for a class of continuum-armed bandit problems. IEEE Transactions on Automatic Control, 54(6):1243-1253, 2009.
[20] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
[21] Kareem Amin, Michael Kearns, Peter Key, and Anton Schwaighofer. Budget optimization for sponsored search: Censored learning in MDPs. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 54-63. AUAI Press, 2012.
[22] Long Tran-Thanh, Lampros Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings, and Peter Key. Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, pages 809-818. AUAI Press, 2014.
[23] Kareem Amin, Afshin Rostamizadeh, and Umar Syed. Learning prices for repeated auctions with strategic buyers. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1169-1177. Curran Associates, Inc., 2013.
[24] Mehryar Mohri and Andres Munoz. Optimal regret minimization in posted-price auctions with strategic buyers. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1871-1879. Curran Associates, Inc., 2014.
[25] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[26] Vladimir Vapnik. Principles of risk minimization for learning theory. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 831-838. Morgan-Kaufmann, 1992.
[27] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[28] Krzysztof Dudziński and Stanisław Walukiewicz. Exact methods for the knapsack problem and its generalizations. European Journal of Operational Research, 28(1):3-21, 1987.
[29] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
[30] NYISO Website, 2017. http://www.nyiso.com/public/markets_operations/market_data/pricing_data/index.jsp.
[31] Wenyuan Tang, Ram Rajagopal, Kameshwar Poolla, and Pravin Varaiya. Private communications, 2017.
[32] Constantinos Daskalakis and Vasilis Syrgkanis. Learning in auctions: Regret is hard, envy is easy. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 219-228, 2016.
[33] Miroslav Dudik, Nika Haghtalab, Haipeng Luo, Robert E. Shapire, Vasilis Syrgkanis, and Jennifer Wortman Vaughan. Oracle-efficient online learning and auction design. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 528-539, 2017.
Trimmed Density Ratio Estimation
Song Liu
University of Bristol
[email protected]

Taiji Suzuki
University of Tokyo, Sakigake (PRESTO), JST, AIP, RIKEN
[email protected]

Akiko Takeda
The Institute of Statistical Mathematics, AIP, RIKEN
[email protected]

Kenji Fukumizu
The Institute of Statistical Mathematics
[email protected]
Abstract
Density ratio estimation is a vital tool in both the machine learning and the statistical community. However, due to the unbounded nature of the density ratio, the estimation procedure can be vulnerable to corrupted data points, which often push the estimated ratio toward infinity. In this paper, we present a robust estimator which
automatically identifies and trims outliers. The proposed estimator has a convex
formulation, and the global optimum can be obtained via subgradient descent. We
analyze the parameter estimation error of this estimator under high-dimensional
settings. Experiments are conducted to verify the effectiveness of the estimator.
1 Introduction
Density ratio estimation (DRE) [18, 11, 27] is an important tool in various branches of machine
learning and statistics. Due to its ability to directly model the differences between two probability
density functions, DRE finds applications in change detection [13, 6], two-sample testing [32] and
outlier detection [1, 26]. In recent years, a sampling framework called Generative Adversarial
Network (GAN) (see e.g., [9, 19]) uses the density ratio function to compare artificial samples from a
generative distribution and real samples from an unknown distribution. DRE has also been widely
discussed in the statistics literature for adjusting non-parametric density estimation [5], stabilizing the
estimation of heavy-tailed distributions [7] and fitting multiple distributions at once [8].
However, as a density ratio function can grow unbounded, DRE can suffer from robustness and
stability issues: a few corrupted points may completely mislead the estimator (see Figure 2 in Section
6 for example). Considering a density ratio p(x)/q(x), a point x that is extremely far away from the
high density region of q may have an almost infinite ratio value and DRE results can be dominated
by such points. This makes DRE performance very sensitive to rare pathological data or small
modifications of the dataset. Here we give two examples:
Cyber-attack In change detection applications, a density ratio p(x)/q(x) is used to determine how
the data generating model differs between p and q. Consider a "hacker" who can spy on our data
and may inject a few data points into p which are extremely far away from the high-density region of q.
This would result in excessively large p(x)/q(x), tricking us into believing there is a significant change from
q(x) to p(x), even if there is no change at all. If the generated outliers are also far away from the
* This work was done when Song Liu was at The Institute of Statistical Mathematics, Japan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
high density region of p(x), we end up with a very different density ratio function and the original
parametric pattern in the ratio is ruined. We give such an example in Section 6.
Volatile Samples Changes in the external environment may provoke unpredictable responses. It
is possible that a small portion of samples react more "aggressively" to the change than the others.
These samples may be skewed and show very high density ratios, even if the change of distribution is
relatively mild when these volatile samples are excluded. For example, when testing a new fertilizer,
a small number of plants may fail to adapt, even if the vast majority of crops are healthy.
Overly large density ratio values can cause further troubles when the ratio is used to weight samples.
For example, in the domain adaptation setting, we may reweight samples from one task and reuse
them in another task. Density ratio is a natural choice of such an "importance weighting" scheme
[28, 25]. However, if one or a few samples have extremely high ratio, after renormalizing, other
samples will have almost zero weights and have little impact to the learning task.
Several methods have been proposed to solve this problem. The relative density ratio estimation [33]
estimates a "biased" version of the density ratio controlled by a mixture parameter $\alpha$. The relative density
ratio is always upper-bounded by $\frac{1}{\alpha}$, which can give a more robust estimator. However, it is not clear
how to de-bias such an estimator to recover the true density ratio function. [26] took a more direct
approach. It estimates a thresholded density ratio by setting up a tolerance t to the density ratio value.
All likelihood ratio values bigger than t will be clipped to t. The estimator was derived from Fenchel
duality for f -divergence [18]. However, the optimization for the estimator is not convex if one uses
log-linear models. The formulation also relies on the non-parametric approximation of the density
ratio function (or the log ratio function) making the learned model hard to interpret. Moreover, there
is no intuitive way to directly control the proportion of ratios that are thresholded. Nonetheless, the
concept studied in our paper is inspired by this pioneering work.
In this paper, we propose a novel method based on a "trimmed Maximum Likelihood Estimator"
[17, 10]. This idea relies on a specific type of density ratio estimator (called log-linear KLIEP) [30]
which can be written as a maximum likelihood formulation. We simply "ignore" samples that make
the empirical likelihood take exceedingly large values. The trimmed density ratio estimator can
be formulated as a convex optimization and translated into a weighted M-estimator. This helps us
develop a simple subgradient-based algorithm that is guaranteed to reach the global optimum.
Moreover, we shall prove that in addition to recovering the correct density ratio under the outlier
setting, the estimator can also obtain a "corrected" density ratio function under a truncation setting. It
ignores "pathological" samples and recovers the density ratio only using "healthy" samples.
Although trimming will usually result a more robust estimate of the density ratio function, we also
point out that it should not be abused. For example, in the tasks of two-sample test, a diverging
density ratio might indicate interesting structural differences between two distributions.
In Section 2, we explain some preliminaries on trimmed maximum likelihood estimator. In Section 3,
we introduce a trimmed DRE. We solve it using a convex formulation whose optimization procedure
is explained in Section 4. In Section 5, we prove the estimation error upper-bound with respect to a
sparsity inducing regularizer. Finally, experimental results are shown in Section 6 and we conclude
our work in Section 7.
2 Preliminary: Trimmed Maximum Likelihood Estimation
Although our main purpose is to estimate the density ratio, we first introduce the basic concept of
trimmed estimators using density functions as examples. Given $n$ samples drawn from a distribution
$P$, i.e., $X := \{x^{(i)}\}_{i=1}^{n} \overset{\mathrm{i.i.d.}}{\sim} P$, $x \in \mathbb{R}^d$, we want to estimate the density function $p(x)$. Suppose the
true density function is a member of the exponential family [20],

$p(x;\theta) = \exp\left[\langle\theta, f(x)\rangle - \log Z(\theta)\right], \quad Z(\theta) = \int q(x)\exp\langle\theta, f(x)\rangle \, dx,$  (1)

where $f(x)$ is the sufficient statistic, $Z(\theta)$ is the normalization function and $q(x)$ is the base
measure.
Maximum Likelihood Estimator (MLE) maximizes the empirical likelihood over the entire dataset.
In contrast, a trimmed MLE only maximizes the likelihood over a subset of samples according to
their likelihood values (see e.g., [10, 31]). This paradigm can be used to derive a popular outlier
detection method, one-class Support Vector Machine (one-SVM) [24]. The derivation is crucial to
the development of our trimmed density ratio estimator in later sections.
Without loss of generality, we can set the log likelihood function as $\log p(x^{(i)};\theta) - \rho_0$, where
$\rho_0$ is a constant. As samples corresponding to high likelihood values are likely to be inliers,
we can trim all samples whose likelihood is bigger than $\rho_0$ using a clipping function $[\cdot]_-$, i.e.,
$\hat\theta = \arg\max_\theta \sum_{i=1}^{n} [\log p(x^{(i)};\theta) - \rho_0]_-$, where $[\ell]_-$ returns $\ell$ if $\ell \le 0$ and $0$ otherwise. This
optimization has a convex formulation:

$\min_{\theta,\rho_0} \langle\epsilon, 1\rangle, \quad \text{s.t.}\ \forall i,\ \log p(x^{(i)};\theta) \ge \rho_0 - \epsilon_i,$  (2)

where $\epsilon_i$ is the slack variable measuring the difference between $\log p(x^{(i)};\theta)$ and $\rho_0$. However,
formulation (2) is not practical since computing the normalization term $Z(\theta)$ in (1) is intractable for
a general $f$ and it is unclear how to set the trimming level $\rho_0$. Therefore we ignore the normalization
term and introduce other control terms:

$\min_{\theta,\rho,\epsilon \ge 0}\ \frac{1}{2}\|\theta\|^2 - \nu\rho + \frac{1}{n}\langle\epsilon, 1\rangle \quad \text{s.t.}\ \forall i,\ \langle\theta, f(x^{(i)})\rangle \ge \rho - \epsilon_i.$  (3)

The $\ell_2$ regularization term is introduced to avoid $\theta$ reaching unbounded values. A new hyper-parameter
$\nu \in (0, 1]$ replaces $\rho_0$ to control the number of trimmed samples. It can be proven using
KKT conditions that at most a $1-\nu$ fraction of samples are discarded (see e.g., [24], Proposition 1 for
details). Now we have reached the standard formulation of one-SVM.
This trimmed estimator ignores the large likelihood values and creates a focus only on the low density
region. Such a trimming strategy allows us to discover ?novel? points or outliers which are usually
far away from the high density area.
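For concreteness, the clipped objective above can be sketched in a few lines of NumPy (an illustrative fragment of our own, not from the cited works; the precomputed feature matrix F and the helper name are our own conventions, and the model is the unnormalized log-linear one, $\log p(x;\theta) = \langle\theta, f(x)\rangle$):

```python
import numpy as np

def trimmed_loglik(theta, F, rho0):
    """Sum of clipped log-likelihoods: sum_i [<theta, f(x_i)> - rho0]_-.

    [l]_- returns l if l <= 0 and 0 otherwise, so any sample whose
    (unnormalized) log-likelihood exceeds rho0 contributes nothing."""
    logp = F @ theta
    return np.sum(np.minimum(logp - rho0, 0.0))

# Samples with log-likelihood above rho0 are trimmed (contribute 0):
F = np.array([[0.0], [2.0], [3.0]])
theta = np.array([1.0])
obj = trimmed_loglik(theta, F, rho0=1.0)  # contributions: -1, 0, 0
```

Only the first sample falls below the trimming level here, so the objective equals its clipped value alone.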
3 Trimmed Density Ratio Estimation
In this paper, our main focus is to derive a robust density ratio estimator following a similar trimming
strategy. First, we briefly review a density ratio estimator [27] from the perspective of Kullback-Leibler divergence minimization.
3.1 Density Ratio Estimation (DRE)
For two sets of data $X_p := \{x_p^{(1)}, \dots, x_p^{(n_p)}\} \overset{\mathrm{i.i.d.}}{\sim} P$ and $X_q := \{x_q^{(1)}, \dots, x_q^{(n_q)}\} \overset{\mathrm{i.i.d.}}{\sim} Q$,
assume both the densities $p(x)$ and $q(x)$ are in the exponential family (1). We know $\frac{p(x;\theta_p)}{q(x;\theta_q)} \propto
\exp[\langle\theta_p - \theta_q, f(x)\rangle]$. Observing that the data $x$ only interacts with the parameter $\theta_p - \theta_q$ through
$f$, we can keep using $f(x)$ as our sufficient statistic for the density ratio model, and merge the two
parameters $\theta_p$ and $\theta_q$ into one single parameter $\theta = \theta_p - \theta_q$. Now we can model our density ratio as

$r(x;\theta) := \exp[\langle\theta, f(x)\rangle - \log N(\theta)], \quad N(\theta) := \int q(x)\exp\langle\theta, f(x)\rangle\,dx,$  (4)

where $N(\theta)$ is the normalization term that guarantees $\int q(x)r(x;\theta)\,dx = 1$ so that $q(x)r(x;\theta)$ is a
valid density function and is normalized over its domain.

Interestingly, despite the reparameterization, (4) is exactly the same as (1),
where $q(x)$ appeared as a base measure. The difference is, here, $q(x)$ is a density function from
which $X_q$ are drawn, so that $N(\theta)$ can be approximated accurately from samples of $Q$. Let us define

$\hat r(x;\theta) := \exp\left[\langle\theta, f(x)\rangle - \log\hat N(\theta)\right], \quad \hat N(\theta) := \frac{1}{n_q}\sum_{j=1}^{n_q}\exp\langle\theta, f(x_q^{(j)})\rangle.$  (5)
Note this model can be computed for any f even if the integral in N (?) does not have a closed form .
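Equation (5) is cheap to evaluate; a minimal sketch (our own illustrative code, with Fx and Fq holding precomputed feature vectors $f(x)$ row-wise, and a log-sum-exp trick added for numerical stability, which is our choice rather than anything specified here):

```python
import numpy as np

def log_ratio_hat(theta, Fx, Fq):
    """log r_hat(x; theta) = <theta, f(x)> - log N_hat(theta), where
    N_hat(theta) is the empirical mean of exp<theta, f(x_q)> over X_q."""
    s = Fq @ theta
    m = s.max()
    log_n_hat = m + np.log(np.mean(np.exp(s - m)))  # stable log-mean-exp
    return Fx @ theta - log_n_hat

# By construction, q's own samples self-normalize the ratio model:
rng = np.random.default_rng(0)
Fq = rng.normal(size=(1000, 3))
theta = np.array([0.2, -0.1, 0.3])
ratios = np.exp(log_ratio_hat(theta, Fq, Fq))  # mean(ratios) == 1 up to fp error
```

The self-normalization property (the sample mean of $\hat r$ over $X_q$ is exactly 1) is a direct consequence of dividing by the empirical $\hat N(\theta)$.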
In order to estimate $\theta$, we minimize the Kullback-Leibler divergence between $p$ and $q \cdot r_\theta$:

$\min_\theta \mathrm{KL}[p \mid q\cdot r_\theta] = \min_\theta \int p(x)\log\frac{p(x)}{q(x)r(x;\theta)}\,dx = c - \max_\theta \int p(x)\log r(x;\theta)\,dx \approx c - \max_\theta \frac{1}{n_p}\sum_{i=1}^{n_p}\log\hat r(x_p^{(i)};\theta),$  (6)

where $c$ is a constant irrelevant to $\theta$. It can be seen that the minimization of the KL divergence boils
down to maximizing the log likelihood ratio over the dataset $X_p$.

Now we have reached the log-linear Kullback-Leibler Importance Estimation Procedure (log-linear
KLIEP) estimator [30, 14].
3.2 Trimmed Maximum Likelihood Ratio
As stated in Section 1, to rule out the influence of large density ratios, we trim samples with large
likelihood ratio values from (6). Similarly to one-SVM in (2), we can consider a trimmed MLE
$\hat\theta = \arg\max_\theta \sum_{i=1}^{n_p}[\log\hat r(x_p^{(i)};\theta) - t_0]_-$, where $t_0$ is a threshold above which the likelihood ratios
are ignored. It has a convex formulation:

$\min_{\theta,\epsilon} \langle\epsilon, 1\rangle, \quad \text{s.t.}\ \forall x_p^{(i)} \in X_p,\ \log\hat r(x_p^{(i)};\theta) \ge t_0 - \epsilon_i.$  (7)

(7) is similar to (2) since we have only replaced $p(x;\theta)$ with $\hat r(x;\theta)$. However, the ratio model
$\hat r(x;\theta)$ in (7) comes with a tractable normalization term $\hat N$, while the normalization term $Z$ in $p(x;\theta)$
is in general intractable.

Similar to (3), we can directly control the trimming quantile via a hyper-parameter $\nu$:

$\min_{\theta,\epsilon\ge 0, t\ge 0}\ \frac{1}{n_p}\langle\epsilon, 1\rangle - \nu\cdot t + \lambda R(\theta), \quad \text{s.t.}\ \forall x_p^{(i)} \in X_p,\ \log\hat r(x_p^{(i)};\theta) \ge t - \epsilon_i,$  (8)

where $R(\theta)$ is a convex regularizer. (8) is also convex, but it has $n_p$ non-linear constraints
and the search for the global optimal solution can be time-consuming. To avoid such a problem,
one could derive and solve the dual problem of (8). In some applications, we rely on the primal
parameter structure (such as sparsity) for model interpretation and feature engineering. In Section
4, we translate (8) into an equivalent form so that its solution is obtained via a subgradient ascent
method which is guaranteed to converge to the global optimum.
One common way to construct a convex robust estimator is using a Huber loss [12]. Although the
proposed trimming technique arises from a different setting, it shares the same guiding principle as
Huber loss: avoid assigning dominating values to outlier likelihoods in the objective function.
In Section 8.1 in the supplementary material, we show the relationship between trimmed DRE and
binary Support Vector Machines [23, 4].
4 Optimization
The key to solving (8) efficiently is reformulating it into an equivalent max-min problem.

Proposition 1. Assuming $\nu$ is chosen such that $\hat t > 0$ for all optimal solutions in (8), then $\hat\theta$ is an
optimal solution of (8) if and only if it is also the optimal solution of the following max-min problem:

$\max_\theta \min_{w \in [0, \frac{1}{n_p}]^{n_p},\ \langle 1, w\rangle = \nu} L(\theta, w) - \lambda R(\theta), \quad L(\theta, w) := \sum_{i=1}^{n_p} w_i \cdot \log\hat r(x_p^{(i)};\theta).$  (9)

The proof is in Section 8.2 in the supplementary material. We define $(\hat\theta, \hat w)$ as a saddle point of (9):

$\partial_\theta L(\hat\theta, \hat w) - \lambda\,\partial R(\hat\theta) \ni 0, \quad \hat w \in \arg\min_{w \in [0,\frac{1}{n_p}]^{n_p},\ \langle w, 1\rangle = \nu} L(\hat\theta, w),$  (10)

where the second $\partial$ means the subgradient if $R$ is sub-differentiable.
Algorithm 1 Gradient Ascent and Trimming
Input: $X_p$, $X_q$, $\nu$ and step sizes $\{\eta_{it}\}_{it=1}^{it_{\max}}$; initialize $\theta_0$, $w_0$; iteration counter $it = 0$; maximum
number of iterations $it_{\max}$; best objective/parameter pair $(O_{best} = -\infty, \theta_{best}, w_{best})$.
while not converged and $it \le it_{\max}$ do
  Sort the samples so that $\log\hat r(x_p^{(1)};\theta_{it}) \le \log\hat r(x_p^{(2)};\theta_{it}) \le \dots \le \log\hat r(x_p^{(n_p)};\theta_{it})$.
  Set $w_{it+1,i} = \frac{1}{n_p}$ for all $i \le \nu n_p$; $w_{it+1,i} = 0$ otherwise.
  Gradient ascent with respect to $\theta$: $\theta_{it+1} = \theta_{it} + \eta_{it}\,\partial_\theta[L(\theta_{it}, w_{it+1}) - \lambda R(\theta_{it})]$.
  $O_{best} = \max(O_{best}, L(\theta_{it+1}, w_{it+1}))$ and update $(\theta_{best}, w_{best})$ accordingly; $it = it + 1$.
end while
Output: $(\theta_{best}, w_{best})$
Now the "trimming" process of our estimator can be clearly seen from (9): the max procedure
estimates a density ratio given the currently assigned weights $w$, and the min procedure trims the
large log likelihood ratio values by assigning the corresponding $w_i$ to 0 (or values smaller than $\frac{1}{n_p}$). For
simplicity, we only consider the cases where $\nu$ is a multiple of $\frac{1}{n_p}$. Intuitively, $1-\nu$ is the proportion
of likelihood ratios that are trimmed, thus $\nu$ should not be greater than 1. Note if we set $\nu = 1$, (9) is
equivalent to the standard density ratio estimator (6). Downweighting outliers while estimating the
model parameter $\theta$ is commonly used by robust estimators (see e.g., [3, 29]).

The search for $(\hat\theta, \hat w)$ is straightforward. It is easy to solve with respect to $w$ or $\theta$ while the other
is fixed: given a parameter $\theta$, the optimization with respect to $w$ is a linear program, and one
of the extreme optimal solutions is attained by assigning weight $\frac{1}{n_p}$ to the elements that correspond
to the $\nu n_p$ smallest log-likelihood ratios $\log\hat r(x^{(i)};\theta)$. This observation leads to a simple "gradient
ascent and trimming" algorithm (see Algorithm 1). In Algorithm 1,
$\partial_\theta L(\theta, w) = \sum_{i=1}^{n_p} w_i \cdot f(x_p^{(i)}) - \nu\sum_{j=1}^{n_q}\frac{e^{(j)}}{\sum_{k=1}^{n_q} e^{(k)}} f(x_q^{(j)}), \quad e^{(j)} := \exp(\langle\theta, f(x_q^{(j)})\rangle).$
In fact, Algorithm 1 is a subgradient method [2, 16], since the optimal value function of the inner
problem of (9) is not differentiable at some $\theta$ where the inner problem has multiple optimal solutions.
The subdifferential of the optimal value of the inner problem with respect to $\theta$ can be a set, but
Algorithm 1 only computes the subgradient obtained using the extreme-point solution $w_{it+1}$ of the
inner linear program. Under mild conditions, this subgradient ascent approach converges to
optimal results with a diminishing step size rule as $it \to \infty$. See [2] for details.
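Algorithm 1 can be sketched in NumPy as follows (an illustrative implementation of our own, assuming the $\ell_1$ regularizer $R(\theta) = \|\theta\|_1$ and precomputed feature matrices Fp, Fq; the step-size schedule and iteration budget are our own arbitrary choices, not prescribed by the algorithm):

```python
import numpy as np

def trimmed_dre(Fp, Fq, nu=0.9, lam=0.0, step=0.1, iters=300):
    """Gradient ascent and trimming (cf. Algorithm 1), a sketch.

    Fp, Fq: (n_p, d) and (n_q, d) arrays of features f(x) for X_p, X_q.
    nu:     fraction of P-samples kept (a 1 - nu fraction is trimmed).
    lam:    weight of the l1 regularizer R(theta) = ||theta||_1.
    """
    n_p, d = Fp.shape
    n_keep = int(round(nu * n_p))
    theta = np.zeros(d)
    best_obj, best_theta = -np.inf, theta.copy()
    for it in range(iters):
        # log r_hat(x; theta) with the empirical normalizer over X_q
        s = Fq @ theta
        m = s.max()
        log_n_hat = m + np.log(np.mean(np.exp(s - m)))
        log_r = Fp @ theta - log_n_hat
        # trimming: weight 1/n_p on the nu*n_p smallest log-ratios
        w = np.zeros(n_p)
        w[np.argsort(log_r)[:n_keep]] = 1.0 / n_p
        # subgradient of L(theta, w) - lam * ||theta||_1
        softmax_q = np.exp(s - m)
        softmax_q /= softmax_q.sum()
        grad = Fp.T @ w - nu * (Fq.T @ softmax_q) - lam * np.sign(theta)
        obj = w @ log_r - lam * np.abs(theta).sum()
        if obj > best_obj:
            best_obj, best_theta = obj, theta.copy()
        theta = theta + step / np.sqrt(it + 1.0) * grad  # diminishing steps
    return best_theta
```

With $\nu = 1$ and $\lambda = 0$ this reduces to plain log-linear KLIEP; for example, with $f(x) = x$, samples from $N(0.5, 1)$ as $X_p$ and $N(0, 1)$ as $X_q$, the fitted $\theta$ approaches the true tilt $0.5$.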
Algorithm 1 is a simple gradient ascent procedure and can be implemented with deep learning
software such as TensorFlow², which benefits from GPU acceleration. In contrast, the original problem (8),
due to its heavily constrained nature, cannot be easily programmed using such a framework.
5 Estimation Consistency in High-dimensional Settings
In this section, we show how the estimated parameter $\hat\theta$ in (10) converges to the "optimal parameter"
$\theta^*$ as both sample size and dimensionality go to infinity, under the "outlier" and "truncation" settings
respectively.

In the outlier setting (Figure 1a), we assume $X_p$ is contaminated by outliers and all "inlier" samples
in $X_p$ are i.i.d.. The outliers are injected into our dataset $X_p$ after looking at our inliers. For example,
hackers can spy on our data and inject fake samples so that our estimator exaggerates the degree of
change.

In the truncation setting, there are no outliers. $X_p$ and $X_q$ are i.i.d. samples from $P$ and $Q$
respectively. However, we have a subset of "volatile" samples in $X_p$ (the rightmost mode of the
histogram in Figure 1b) that are pathological and exhibit large density ratio values.
² https://www.tensorflow.org/
(a) Outlier Setting. Blue and red points are i.i.d.
(b) Truncation Setting. There are no outliers.
Figure 1: Two settings of theoretical analysis.
In the theoretical results in this section, we focus on analyzing the performance of our estimator
for high-dimensional data, assuming the number of non-zero elements in the optimal $\theta^*$ is $k$, and
we use the $\ell_1$ regularizer, i.e., $R(\theta) = \|\theta\|_1$, which induces sparsity on $\hat\theta$. The proofs rely on a recent
development [35, 34] where a "weighted" high-dimensional estimator was studied. We also assume
the optimization of $\theta$ in (9) is conducted within an $\ell_1$ ball of radius $\bar\rho$, i.e., $\mathrm{Ball}(\bar\rho)$, and $\bar\rho$ is wisely
chosen so that the optimal parameter $\theta^* \in \mathrm{Ball}(\bar\rho)$. The same technique was used in previous works
[15, 35].

Notations: We denote $w^* \in \mathbb{R}^{n_p}$ as the "optimal" weights depending on $\theta^*$ and our data. To lighten
the notation, we shorten the log density ratio models as $z_\theta(x) := \log r(x;\theta)$ and $\hat z_\theta(x) := \log\hat r(x;\theta)$.
The proofs of Theorem 1, 2 and 3 can be found in Sections 8.4, 8.5 and 8.6 in the supplementary materials.
5.1 A Base Theorem
Now we provide a base theorem giving an upper bound of $\|\hat\theta - \theta^*\|$. We state this theorem only with
respect to an arbitrary pair $(\theta^*, w^*)$; the pair is set properly later in Sections 5.2 and 5.3.

We make a few regularity conditions on samples from $Q$: samples of $X_q$ should be well behaved in
terms of log-likelihood ratio values.

Assumption 1. There exist $0 < c_1 < 1$ and $1 < c_2 < \infty$ such that $\forall x_q \in X_q, u \in \mathrm{Ball}(\bar\rho)$, $c_1 \le \exp\langle\theta^* + u, f(x_q)\rangle \le c_2$,
and collectively $c_2/c_1 = C_r$.
We also assume the Restricted Strong Convexity (RSC) condition on the covariance of $X_q$, i.e.,
$\mathrm{cov}(X_q) = \frac{1}{n_q}(X_q - \frac{1}{n_q}X_q\mathbf{1})(X_q - \frac{1}{n_q}X_q\mathbf{1})^\top$. Note this property has been verified for various
different design matrices $X_q$, such as Gaussian or sub-Gaussian (see, e.g., [21, 22]).

Assumption 2. The RSC condition of $\mathrm{cov}(X_q)$ holds for all $u$, i.e., there exist $\kappa_1 > 0$ and $\bar c > 0$ such
that $u^\top\mathrm{cov}(X_q)u \ge \kappa_1\|u\|^2 - \frac{\bar c}{n_q}\|u\|_1^2$ with high probability.
Theorem 1. In addition to Assumption 1 and 2, assume there exists coherence between the parameters $w$ and $\theta$
at a saddle point $(\hat\theta, \hat w)$:

$\langle\partial_\theta L(\hat\theta, \hat w) - \partial_\theta L(\hat\theta, w^*), \hat u\rangle \ge -\gamma_2\|\hat u\|^2 - \tau_2(n,d)\|\hat u\|_1,$  (11)

where $\hat u := \hat\theta - \theta^*$, $\gamma_2 > 0$ is a constant and $\tau_2(n,d) > 0$. It can be shown that if

$\lambda_n \ge 2\max\left[\|\partial_\theta L(\theta^*, w^*)\|_\infty,\ \frac{2C_r^2\bar c\bar\rho}{n_q},\ \tau_2(n,d)\right]$

and $\frac{\kappa_1}{C_r^2} > 2\gamma_2$, where $\bar c > 0$ is a constant determined by the RSC condition, we are guaranteed that
$\|\hat\theta - \theta^*\| \le \frac{3C_r^2\sqrt{k}\,\lambda_n}{\kappa_1 - 2C_r^2\gamma_2}$ with probability converging to one.
The condition (11) states that if we swap $\hat w$ for $w^*$, the change of the gradient $\partial_\theta L$ is limited.
Intuitively, it shows that our estimator (9) is not "picky" on $w$: even if we cannot have the optimal
weight assignment $w^*$, we can still use "the next best thing", $\hat w$, to compute the gradient, which is
close enough. We later show how (11) is satisfied. Note if $\|\partial_\theta L(\theta^*, w^*)\|_\infty$ and $\tau_2(n,d)$ converge to
zero as $n_p, n_q, d \to \infty$, by taking $\lambda_n$ as such, Theorem 1 guarantees the consistency of $\hat\theta$. In Sections
5.2 and 5.3, we explore two different settings of $(\theta^*, w^*)$ that make $\|\hat\theta - \theta^*\|$ converge to zero.
5.2 Consistency under Outlier Setting
Setting: Suppose dataset $X_p$ is the union of two disjoint sets $G$ (Good points) and $B$ (Bad points)
such that $G \overset{\mathrm{i.i.d.}}{\sim} p(x)$ and $\min_{j\in B}\hat z_{\theta^*}(x_p^{(j)}) > \max_{i\in G}\hat z_{\theta^*}(x_p^{(i)})$ (see Figure 1a). Dataset $X_q \overset{\mathrm{i.i.d.}}{\sim} q(x)$
does not contain any outliers. We set $\nu = \frac{|G|}{n_p}$. The optimal parameter $\theta^*$ is set such that
$p(x) = q(x)r(x;\theta^*)$. We set $w_i^* = \frac{1}{n_p}$ for all $x_p^{(i)} \in G$ and $0$ otherwise.
Remark: Knowing the inlier proportion |G|/np is a strong assumption. However it is only imposed
for theoretical analysis. As we show in Section 6, our method works well even if $\nu$ is only a rough
guess (like 90%). Loosening this assumption will be an important future work.
Assumption 3. $\forall u \in \mathrm{Ball}(\bar\rho)$, $\sup_x |\hat z_{\theta^*+u}(x) - \hat z_{\theta^*}(x)| \le C_{lip}\|u\|_1$.
This assumption says that the log density ratio model is Lipschitz continuous around its optimal
parameter $\theta^*$, and hence there is a limit on how much a log ratio model can deviate from the optimal
model under a small perturbation $u$. As our estimated weights $\hat w_i$ depend on the relative ranking of
$\hat z_{\theta^*}(x_p^{(i)})$, this assumption implies that the relative ranking between two points will remain unchanged
under a small perturbation $u$ if they are far apart. The following theorem shows that if we have
enough clearance between "good" and "bad" samples, $\hat\theta$ converges to the optimal parameter $\theta^*$.
Theorem 2. In addition to Assumption 1, 2 and a few mild technical conditions (see Section 8.5 in the
supplementary material), suppose Assumption 3 holds,
$\min_{j\in B}\hat z_{\theta^*}(x_p^{(j)}) - \max_{i\in G}\hat z_{\theta^*}(x_p^{(i)}) \ge 3C_{lip}\bar\rho$, $\nu = \frac{|G|}{n_p}$, and $n_q = \Omega(|G|^2)$. If
$\lambda_n \ge 2\max\left[\sqrt{\frac{K_1\log d}{|G|}},\ \frac{2C_r^2\bar c\bar\rho}{n_q}\right]$, where $K_1 > 0$, $\bar c > 0$ are
constants, we are guaranteed that $\|\hat\theta - \theta^*\| \le \frac{C_r^2}{\kappa_1}\cdot 3\sqrt{k}\,\lambda_n$ with probability converging to 1.

It can be seen that $\|\hat\theta - \theta^*\| = O\!\left(\sqrt{\log d/\min(|G|, n_q)}\right)$ if $d$ is reasonably large.
5.3 Consistency under Truncation Setting
In this setting, we do not assume there are outliers in the observed data. Instead, we examine the
ability of our estimator to recover the density ratio up to a certain quantile of our data. This ability
is especially useful when the behavior of the tail quantile is volatile and makes the standard
estimator (6) output unpredictable results.

Notations: Given $\nu \in (0,1]$, we call $t_\theta(\nu)$ the $\nu$-th quantile of $z_\theta$ if $P[z_\theta < t_\theta(\nu)] \le \nu$ and
$P[z_\theta \le t_\theta(\nu)] \ge \nu$. In this setting, we consider $\nu$ fixed by the user, thus we drop the subscript $\nu$
from all subsequent discussions. Let us define a truncated domain $X(\theta) = \{x \in \mathbb{R}^d \mid z_\theta(x) < t(\theta)\}$,
$X^p(\theta) = X_p \cap X(\theta)$ and $X^q(\theta) = X_q \cap X(\theta)$. See Figure 1b for a visualization of $t(\theta)$ and $X(\theta)$
(the dark shaded region).
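Empirically, the threshold $t$ is just a sample quantile of the fitted log-ratio values; a small sketch (illustrative helpers of our own, using NumPy's default interpolated quantile as an arbitrary convention):

```python
import numpy as np

def empirical_threshold(log_r, nu):
    """Empirical nu-th quantile t of the log-ratio values z_theta."""
    return np.quantile(log_r, nu)

def truncated_index(log_r, nu):
    """Indices of samples in the empirical truncated domain {x : z_theta(x) < t}."""
    return np.flatnonzero(log_r < empirical_threshold(log_r, nu))

log_r = np.arange(10.0)            # toy log-ratio values 0..9
idx = truncated_index(log_r, 0.5)  # keeps the smallest half of the values
```

With $\nu = 0.5$ on the toy values above, the threshold sits between the two middle order statistics and exactly half of the samples fall in the truncated set.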
Setting: Suppose dataset $X_p \overset{\mathrm{i.i.d.}}{\sim} P$ and $X_q \overset{\mathrm{i.i.d.}}{\sim} Q$. The truncated densities $p^\theta$ and $q^\theta$ are the
unbounded densities $p$ and $q$ restricted to the truncated domain $X(\theta)$. Note that the truncated
densities depend on the parameter $\theta$ and $\nu$. We show that under some assumptions, the
parameter $\hat\theta$ obtained from (9) using a fixed hyperparameter $\nu$ will converge to the $\theta^*$ such that
$q^{\theta^*}(x)\,r(x;\theta^*) = p^{\theta^*}(x)$. We also define the "optimal" weight assignment $w_i^* = \frac{1}{n_p}$ for all $i$ with $x_p^{(i)} \in
X(\theta^*)$ and $0$ otherwise. Interestingly, the constraint in (9), $\langle w^*, 1\rangle = \nu$, may not hold, but our
analysis in this section suggests we can always find a pair $(\hat\theta, \hat w)$ in the feasible region so that
$\|\hat\theta - \theta^*\|$ converges to 0 under mild conditions.
We first assume the log density ratio model and its CDF are Lipschitz continuous.

Assumption 4. $\forall u \in \mathrm{Ball}(\bar\rho)$,

$\sup_x |\hat z_{\theta^*+u}(x) - \hat z_{\theta^*}(x)| \le C_{lip}\|u\|.$  (12)

Define $T(u, \epsilon) := \{x \in \mathbb{R}^d \mid |\hat z_{\theta^*}(x) - t(\theta^*)| \le 2C_{lip}\|u\| + \epsilon\}$ where $0 < \epsilon \ll 1$. We assume
$\forall u \in \mathrm{Ball}(\bar\rho)$, $0 < \epsilon \ll 1$,

$P[x_p \in T(u, \epsilon)] \le C_{CDF}\cdot(\|u\| + \epsilon).$

In this assumption, we define a "zone" $T(u,\epsilon)$ near the $\nu$-th quantile $t(\theta^*)$ and assume the CDF of
our ratio model is upper-bounded over this region. Different from Assumption 3, the RHS of (12) is
with respect to the $\ell_2$ norm of $u$. In the following assumption, we assume regularity of $P$ and $Q$.

Assumption 5. $\forall x_q \in \mathbb{R}^d$, $\|f(x_q)\|_\infty \le C_q$, and $\forall u \in \mathrm{Ball}(\bar\rho), \forall x_p \in T(u, 1)$, $\|f(x_p)\|_\infty \le C_p$.
Theorem 3. In addition to Assumption 1 and 2 and other mild assumptions (see Section 8.6 in the
supplementary material), suppose Assumption 4 and 5 hold. If $1 - \nu \ge \frac{8C_{CDF}\sqrt{k}\,C_p C_r^2}{\kappa_1}$, $n_q = \Omega(|X^p(\theta^*)|^2)$,
and

$\lambda_n \ge 2\max\left[\sqrt{\frac{K_1'\log d}{|X^p(\theta^*)|}},\ \frac{2C_r^2 C_q |X_q\setminus X^q(\theta^*)|}{n_q} + \frac{2\bar L\bar\rho C_p}{n_p},\ \frac{2C_r^2\bar c\bar\rho}{n_q}\right],$

where $K_1' > 0$, $\bar c > 0$ are constants, we are guaranteed that $\|\hat\theta - \theta^*\| \le \frac{4C_r^2}{\kappa_1}\cdot 3\sqrt{k}\,\lambda_n$ with high
probability.

It can be seen that $\|\hat\theta - \theta^*\| = O\!\left(\sqrt{\log d/\min(|X^p(\theta^*)|, n_q)}\right)$ if $d$ is reasonably large and
$|X_q\setminus X^q(\theta^*)|/n_q$ decays fast.
6 Experiments
6.1 Detecting Sparse Structural Changes between Two Markov Networks (MNs) [14]
In the first experiment³, we learn changes between two Gaussian MNs under the outlier setting. The
ratio between two Gaussian MNs can be parametrized as $p(x)/q(x) \propto \exp\left(\sum_{i\le j\le d}\Delta_{i,j}x_i x_j\right)$,
where $\Delta_{i,j} := \theta^p_{i,j} - \theta^q_{i,j}$ is the difference between precision matrices. We generate 500 samples
as $X_p$ and $X_q$ using two randomly structured Gaussian MNs. One point $[10, \dots, 10]$ is added as an
outlier to $X_p$. To induce sparsity, we set $R(\Delta) = \sum_{i,j=1, i\le j}^{d}|\Delta_{i,j}|$ and fix $\lambda = 0.0938$. We then run
DRE and TRimmed-DRE to learn the sparse differential precision matrix $\Delta$; results are plotted
in Figures 2a and 2b⁴, where the ground truth (the positions $i,j$ with $\Delta^*_{i,j} \ne 0$) is marked by red boxes.
It can be seen that the outlier completely misleads DRE while TR-DRE performs reasonably well.
We also run experiments with two different settings (d = 25, d = 36) and plot True Negative Rate
(TNR) - True Positive Rate (TPR) curves. We fix ? in TR-DRE to 90% and compare the performance
of DRE and TR-DRE using DRE without any outliers as gold standard (see Figure 2c). It can be
seen that the added outlier makes the DRE fail completely while TR-DRE can almost reach the gold
standard. It also shows the price we pay: TR-DRE does lose some power for discarding samples.
However, the loss of performance is still acceptable.
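The pairwise model used in this experiment corresponds to the feature map $f(x) = (x_i x_j)_{i\le j}$; a sketch of constructing it (an illustrative helper of our own; the resulting matrix can be fed to any log-linear DRE, and the learned coefficients map back to the upper triangle of $\Delta$):

```python
import numpy as np

def pairwise_features(X):
    """Map each row x in R^d to (x_i * x_j) for i <= j: the sufficient
    statistics of the ratio model p(x)/q(x) prop. to
    exp(sum_{i<=j} Delta_ij * x_i * x_j)."""
    n, d = X.shape
    iu = np.triu_indices(d)  # upper triangle, diagonal included
    return np.stack([np.outer(x, x)[iu] for x in X])

X = np.array([[1.0, 2.0]])
F = pairwise_features(X)  # features (x1*x1, x1*x2, x2*x2) = (1, 2, 4)
```

For $d$ variables this produces $d(d+1)/2$ features per sample, matching the number of free entries in the symmetric difference matrix $\Delta$.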
6.2 Relative Novelty Detection from Images
In the second experiment, we collect four images (see Figure 3a) containing three objects with a
textured background: a pencil, an earphone and an earphone case. We create data points from these
four images using sliding windows of 48 ? 48 pixels (the green box on the lower right picture on
Figure 3a). We extract 899 features using MATLAB HOG method on each window and construct
an 899-dimensional sample. Although our theorems in Section 5 are proved for linear models, here
f (x) is an RBF kernel using all samples in Xp as kernel basis. We pick the top left image as Xp and
using all three other images as Xq , then run TR-DRE, THresholded-DRE [26], and one-SVM.
In this task, we select high density ratio super pixels on image Xp . It can be expected that the
super pixels containing the pencil will exhibit high density ratio values as they did not appear in
the reference dataset $X_q$, while super pixels containing the earphone case, the earphones and the
background, which repeat similar patches in $X_q$, will have lower density ratio values. This is different from
³ Code can be found at http://allmodelsarewrong.org/software.html
⁴ Figures are best viewed in color.
Figure 2: Using DRE to learn changes between two MNs. We set $R(\Delta) = \|\Delta\|_1$ and $f(x_i, x_j) = x_i x_j$. (a) $\hat\Delta$ obtained by DRE, $d = 20$, with one outlier. (b) $\hat\Delta$ obtained by TR-DRE, $\nu = 90\%$, with one outlier. (c) TNR-TPR plot, $\nu = 90\%$.
Figure 3: Relative object detection using super pixels. We set $R(\theta) = \|\theta\|^2$; $f(x)$ is an RBF kernel. (a) Dataset. (b) $\nu = 97\%$. (c) $\nu = 90\%$. (d) $\nu = 85\%$. (e) TH-DRE. (f) one-SVM.
a conventional novelty detection task, as the density ratio function helps us capture only the relative novelty.
For TR-DRE, we use the trimming threshold $\hat t$ as the threshold for selecting high density ratio points.
It can be seen on Figure 3b, 3c and 3d, as we tune ? to allow more and more high density ratio
windows to be selected, more relative novelties are detected: First the pen, then the case, and finally
the earphones, as the lack of appearance in the reference dataset Xq elevates the density ratio value
by different degrees. In comparison, we run TH-DRE with top 3% highest density ratio values
thresholded, which corresponds to ? = 97% in our method. The pattern of the thresholded windows
(shaded in red) in Figure 3e is similar to Figure 3b though some parts of the case are mistakenly
shaded. Finally, one-SVM with 3% support vectors (see Figure 3f) does not utilize the knowledge of
a reference dataset $X_q$ and labels all salient objects in $X_p$, as they correspond to the "outliers" in $X_p$.
7 Conclusion
We present a robust density ratio estimator based on the idea of trimmed MLE. It has a convex
formulation, and the optimization can be easily conducted using a subgradient ascent method. We also
investigate its theoretical properties through an equivalent weighted M-estimator, whose $\ell_2$ estimation
error bound was proven under two high-dimensional, robust settings. Experiments confirm the
effectiveness and robustness of our trimmed estimator.
Acknowledgments
We thank three anonymous reviewers for their detailed and helpful comments. Akiko Takeda thanks
Grant-in-Aid for Scientific Research (C), 15K00031. Taiji Suzuki was partially supported by MEXT
KAKENHI (25730013, 25120012, 26280009 and 15H05707), JST-PRESTO and JST-CREST. Song
Liu and Kenji Fukumizu have been supported in part by MEXT Grant-in-Aid for Scientific Research
on Innovative Areas (25120012).
9
References
[1] F. Azmandian, J. G. Dy, J. A. Aslam, and D. R. Kaeli. Local kernel density ratio-based feature selection for outlier detection. In Proceedings of the 8th Asian Conference on Machine Learning (ACML2012), JMLR Workshop and Conference Proceedings, pages 49-64, 2012.
[2] S. Boyd. Subgradient methods. Technical report, Stanford University, 2014. Notes for EE364b, Stanford University, Spring 2013-14.
[3] W. S. Cleveland. Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74(368):829-836, 1979.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, 2000.
[5] B. Efron and R. Tibshirani. Using specially designed exponential families for density estimation. The Annals of Statistics, 24(6):2431-2461, 1996.
[6] F. Fazayeli and A. Banerjee. Generalized direct change estimation in Ising model structure. In Proceedings of the 33rd International Conference on Machine Learning (ICML2016), pages 2281-2290, 2016.
[7] W. Fithian and S. Wager. Semiparametric exponential families for heavy-tailed data. Biometrika, 102(2):486-493, 2015.
[8] K. Fokianos. Merging information for semiparametric density estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(4):941-958, 2004.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[10] A. S. Hadi and A. Luceno. Maximum trimmed likelihood estimators: a unified approach, examples, and algorithms. Computational Statistics & Data Analysis, 25(3):251-272, 1997.
[11] J. Huang, A. Gretton, K. M. Borgwardt, B. Schölkopf, and A. J. Smola. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pages 601-608, 2007.
[12] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73-101, 1964.
[13] Y. Kawahara and M. Sugiyama. Sequential change-point detection based on direct density-ratio estimation. Statistical Analysis and Data Mining, 5(2):114-127, 2012.
[14] S. Liu, T. Suzuki, R. Relator, J. Sese, M. Sugiyama, and K. Fukumizu. Support consistency of direct sparse-change learning in Markov networks. Annals of Statistics, 45(3):959-990, 2017.
[15] P.-L. Loh and M. J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 16:559-616, 2015.
[16] A. Nedić and A. Ozdaglar. Subgradient methods for saddle-point problems. Journal of Optimization Theory and Applications, 142(1):205-228, 2009.
[17] N. Neykov and P. N. Neytchev. Robust alternative of the maximum likelihood estimators. COMPSTAT'90, Short Communications, pages 99-100, 1990.
[18] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, 2010.
[19] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271-279, 2016.
[20] E. J. G. Pitman. Sufficient statistics and intrinsic accuracy. Mathematical Proceedings of the Cambridge Philosophical Society, 32(4):567-579, 1936.
[21] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, 2010.
[22] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434-3447, 2013.
[23] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[24] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt. Support vector
method for novelty detection. In Advances in Neural Information Processing Systems 12, pages
582?588. MIT Press, 2000.
[25] A. Shimodaira. Improving predictive inference under covariate shift by weighting the loglikelihood function. Journal of Statistical Planning and Inference, 90(2):227 ? 244, 2000.
[26] A. Smola, L. Song, and C. H. Teo. Relative novelty detection. In Proceedings of the Twelth
International Conference on Artificial Intelligence and Statistics (AISTATS), volume 5, pages
536?543, 2009.
[27] M. Sugiyama, T. Suzuki, and T. Kanamori. Density Ratio Estimation in Machine Learning.
Cambridge University Press, 2012.
[28] M. Sugiyama, T. Suzuki, S. Nakajima, H. Kashima, P. von B?nau, and M. Kawanabe. Direct
importance estimation for covariate shift adaptation. Annals of the Institute of Statistical
Mathematics, 60(4):699?746, 2008.
[29] J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle. Weighted least squares support
vector machines: robustness and sparse approximation. Neurocomputing, 48(1):85?105, 2002.
[30] Y. Tsuboi, H. Kashima, S. Hido, S. Bickel, and M. Sugiyama. Direct density ratio estimation
for large-scale covariate shift adaptation. Journal of Information Processing, 17:138?155, 2009.
[31] D. L. Vandev and N. M. Neykov. About regression estimators with high breakdown point.
Statistics: A Journal of Theoretical and Applied Statistics, 32(2):111?129, 1998.
[32] M. Wornowizki and R. Fried. Two-sample homogeneity tests based on divergence measures.
Computational Statistics, 31(1):291?313, 2016.
[33] M. Yamada, T. Suzuki, T. Kanamori, H. Hachiya, and M. Sugiyama. Relative density-ratio
estimation for robust distribution comparison. Neural Computation, 25(5):1324?1370, 2013.
[34] E. Yang, A. Lozano, and A. Aravkin. High-dimensional trimmed estimators: A general
framework for robust structured estimation. arXiv preprint arXiv:1605.08299, 2016.
[35] E. Yang and A. C. Lozano. Robust gaussian graphical modeling with the trimmed graphical
lasso. In Advances in Neural Information Processing Systems, pages 2602?2610, 2015.
11
Training recurrent networks to generate hypotheses
about how the brain solves hard navigation problems
Ingmar Kanitscheider & Ila Fiete
Department of Neuroscience
The University of Texas
Austin, TX 78712
{ikanitscheider, ilafiete}@mail.clm.utexas.edu
Abstract
Self-localization during navigation with noisy sensors in an ambiguous world is
computationally challenging, yet animals and humans excel at it. In robotics, Simultaneous Localization and Mapping (SLAM) algorithms solve this problem through
joint sequential probabilistic inference of their own coordinates and those of external spatial landmarks. We generate the first neural solution to the SLAM problem
by training recurrent LSTM networks to perform a set of hard 2D navigation tasks
that require generalization to completely novel trajectories and environments. Our
goal is to make sense of how the diverse phenomenology in the brain's spatial
navigation circuits is related to their function. We show that the hidden unit representations exhibit several key properties of hippocampal place cells, including
stable tuning curves that remap between environments. Our result is also a proof
of concept for end-to-end learning of a SLAM algorithm using recurrent networks,
and a demonstration of why this approach may have some advantages for robotic
SLAM.
1 Introduction
Sensory noise and ambiguous spatial cues make self-localization during navigation computationally
challenging. Errors in self-motion estimation cause rapid deterioration in localization performance, if
localization is based simply on path integration (PI), the integration of self-motion signals. Spatial
features in the world are often spatially extended (e.g. walls) or similar landmarks are found at
multiple locations, and thus provide only partial position information. Worse, localizing in novel
environments requires solving a chicken-or-egg problem: Since landmarks are not yet associated
with coordinates, agents must learn landmark positions from PI (known as mapping), but PI location
estimates drift rapidly and require correction from landmark coordinates.
Despite the computational difficulties, animals exhibit stable neural tuning in familiar and novel
environments over several 10s of minutes [1, 2], even though the PI estimate in the same animals is
estimated to deteriorate within a few minutes [3]. These experimental and computational findings
suggest that the brain is solving some version of the simultaneous localization and mapping (SLAM)
problem.
In robotics, the SLAM problem is solved by algorithms that approximate Bayes-optimal sequential
probabilistic inference: at each step, a probability distribution over possible current locations and
over the locations of all the landmarks is updated based on noisy motion and noisy, ambiguous
landmark inputs [4]. These algorithms simultaneously update location and map estimates, effectively
bootstrapping their way to better estimates of both. Quantitative studies of neural responses in rodents
suggest that their brains might also perform high-quality sequential probabilistic fusion of motion
and landmark cues during navigation [3]. The required probabilistic computations are difficult to
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
translate by hand into forms amenable to neural circuit dynamics, and it is entirely unknown how the
brain might perform them.
We ask here how the brain might solve the SLAM problem. Instead of imposing heavy prior
assumptions on the form a neural solution might take, we espouse a relatively model-free approach
[5, 6, 7]: supervised training of recurrent neural networks to solve spatial localization in familiar
and novel environments. A recurrent architecture is necessary because self-localization from motion
inputs and different landmark encounters involves integration over time, which requires memory. We
expect that the network will form representations of the latent variables essential to solving the task.
Unlike robotic SLAM algorithms that simultaneously acquire a representation of the agent's location
and a detailed metric map of a novel environment, we primarily train the network to perform accurate
localization; the map representation is only explicitly probed by asking the network to extract features
to correctly classify the environment it is currently in. However, even if the goal is to merely localize
in one of several environments, the network must have created and used a map of the environment
to enable accurate localization with noisy PI. In turn, an algorithm that successfully solves the
problem of accurate localization in novel environments can automatically solve the SLAM problem,
as mapping a space then simply involves assigning correct coordinates to landmarks, walls, and
other features in the space [4]. Our network solution exploits the fact that the SLAM problem can
be considered as one of mapping sequences of ambiguous motion and landmark observations to
locations, in a way that generalizes across trajectories and environments.
Our goal is to better understand how the brain solves such problems, by relating emergent responses
in the trained network to those observed in the brain, and through this process to synthesize, from
a function-driven perspective, the large body of phenomenology on the brain's spatial navigation
circuits. Because we have access to all hidden units and control over test environments and trajectories,
this approach allows us to predict the effective dimensionality of the dynamics required to solve the 2D
SLAM task and make novel predictions about the representations the brain might construct to solve
hard inference problems. Even from the perspective of well-studied robotic SLAM, this approach
could allow for the learning and use of rich environment structure priors from past experience, which
can enable faster map building in novel environments.
2 Methods
2.1 Environments and trajectories
We study the task of a simulated rat that must estimate its position (i.e., localize itself) while moving
along a random trajectory in a two-dimensional enclosure, similar to a typical task in which rats chase
randomly scattered food pellets [8]. The enclosure is polygon-shaped and the rat does not have
access to any local or distal spatial cues other than touch-based information upon contact with the
boundaries of the environment (Figure 1A-B; for details see SI Text, section 1-4). We assume that
the rat has access to noisy estimates of self-motion speed and direction, as might be derived from
proprioceptive and vestibular cues (Figure 1A), and to boundary-contact information derived from
its rare encounters with a boundary whose only feature is its geometry. On boundary contact, the
rat receives information only about its distance and angle relative to the boundary (Figure 1B). This
information is degenerate: it depends simply on the pose of the rat with respect to the boundary,
and the same signal could arise at various locations along the boundary. Self-motion and boundary
contact estimates are realistically noisy, with magnitudes based on work in [3].
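As a rough, hypothetical sketch of why uncorrected path integration drifts under such noise (the function name and noise magnitudes below are illustrative placeholders, not the paper's simulation parameters or the values estimated in [3]):

```python
import numpy as np

def path_integrate(speeds, headings, speed_sd=0.1, heading_sd=0.05, rng=None):
    """Integrate noisy self-motion cues into a 2D position estimate.
    Without boundary corrections the estimate accumulates error; the
    noise scales here are arbitrary illustrative values."""
    if rng is None:
        rng = np.random.default_rng(0)
    v_hat = speeds + rng.normal(0.0, speed_sd, len(speeds))      # noisy speed
    th_hat = headings + rng.normal(0.0, heading_sd, len(headings))  # noisy heading
    steps = np.stack([v_hat * np.cos(th_hat), v_hat * np.sin(th_hat)], axis=1)
    return np.cumsum(steps, axis=0)  # estimated trajectory

rng = np.random.default_rng(1)
T = 1000
speeds = np.full(T, 0.01)
headings = np.cumsum(rng.normal(0.0, 0.2, T))  # smooth random turning
true_traj = np.cumsum(np.stack([speeds * np.cos(headings),
                                speeds * np.sin(headings)], axis=1), axis=0)
est_traj = path_integrate(speeds, headings, rng=rng)
err = np.linalg.norm(est_traj - true_traj, axis=1)
print(err[-1])
```

Because the per-step noise is independent, the positional error behaves like a random walk and typically grows with trajectory length, which is the drift that boundary contacts must correct.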
2.2 Navigation tasks
We study the following navigation tasks:
• Localization only: Localization in a single familiar environment. The rat is familiar with
the geometry of the environment but starts each trial at a random unknown location. To
successfully solve the task, the rat must infer its location relative to a fixed point in the
interior on the basis of successive boundary contacts and its knowledge of the environment's
geometry, and be able to generalize this computation across novel random trajectories.
• Generalized SLAM: Localization in novel environments. Each trial takes place in a novel
environment, sampled from a distribution of random polygons (Figure 1C; SI Text, section
1); the rat must accurately infer its location relative to the starting point by exploiting
boundary inputs despite not knowing the geometry of its enclosure. To solve the task, the rat
must be able to generalize its localization computations to trials with both novel trajectories
and novel environments.

Figure 1: Task setup. Self-localization in 2D enclosures. A Noisy heading direction and speed inputs
allow the simulated rat to update its location in the interior. B Occasional boundary contacts provide
noisy estimates of its relative angle (θ) and distance (d) from the wall. C Samples from the
distribution of random environments. D Architecture of the recurrent neural network.
• Specialized task: Localization in and classification of any of 100 familiar environments.
Each trial takes place in one of 100 known environments, sampled from a distribution of
random polygons (Figure 1C; SI Text, section 1), but the rat does not know which one.
The trial starts at a fixed point inside the polygon (known to the rat through training), and the
ongoing trajectory is random. In addition to the challenges of the localization tasks above,
the rat must correctly classify the environment.
The environments are random polygons with 10 vertices. The center-to-vertex lengths are drawn
randomly from a distribution with mean 1m in the localization-only task or 0.33m in the specialized
and generalized SLAM tasks.
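A hypothetical sampler in this spirit (the paper's exact sampling distribution is specified in its SI Text, not reproduced here; the equal angular spacing and the radius spread below are assumptions for illustration):

```python
import numpy as np

def random_polygon(n_vertices=10, mean_radius=1.0, radius_sd=0.15, rng=None):
    """Sample a star-shaped random polygon: vertices at equally spaced
    angles, with center-to-vertex lengths drawn around mean_radius.
    radius_sd is an illustrative placeholder, not the paper's value."""
    if rng is None:
        rng = np.random.default_rng(0)
    angles = np.linspace(0.0, 2.0 * np.pi, n_vertices, endpoint=False)
    radii = np.clip(rng.normal(mean_radius, radius_sd, n_vertices), 0.1, None)
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

poly = random_polygon()       # mean_radius=1.0 as in the localization-only task
print(poly.shape)             # one (x, y) row per vertex
```

Setting `mean_radius=0.33` would correspond to the smaller arenas of the specialized and generalized tasks.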
2.3 Recurrent network architecture and training
The network has three layers: input, recurrent hidden and output layer (Figure 1D). The input layer
encodes noisy self-motion cues like velocity and head direction change, as well as noisy boundary-contact information like relative angle and distance to boundary (SI Text, section 9). The recurrent
layer contains 256 Long Short-Term Memory (LSTM) units with peepholes and forget gates [9], an
architecture demonstrated to be able to learn dependencies across many timesteps [10]. We adapt
the nonlinearity of the LSTM units to produce non-negative hidden activations in order to facilitate
the comparison with neural firing rates.¹ Two self-localization units in the output layer perform a linear
readout; their activations correspond to the estimated location coordinates. The cost function for
localization is mean squared error. The classification output is implemented by a softmax layer with
100 neurons (1 per environment); the cost function is cross-entropy. When the network is trained
to both localize and classify, the relative weight is tuned such that the classification cost is half of
the localization cost. Independent trials used for training: 5000 trials in the localization-only task,
250,000 trials in the specialized task, and 300,000 trials in the generalized task. The network is
trained using the Adam algorithm [11], a form of stochastic gradient descent. Gradients are clipped
to 1. During training performance is monitored on a validation set of 1000 independent trials, and
¹ The LSTM is implemented by the equations:

i_t = σ(W_{xi} x_t + W_{hi} h_{t−1} + w_{ci} ⊙ c_{t−1} + b_i)
f_t = σ(W_{xf} x_t + W_{hf} h_{t−1} + w_{cf} ⊙ c_{t−1} + b_f)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_{xc} x_t + W_{hc} h_{t−1} + b_c)
o_t = σ(W_{xo} x_t + W_{ho} h_{t−1} + w_{co} ⊙ c_t + b_o)
h_t = o_t ⊙ tanh([c_t]_+)

where σ is the logistic sigmoid function, h is the hidden activation vector, i, f, o and c are respectively the input gate, forget gate, output gate and cell activation vectors, a ⊙ b denotes point-wise multiplication and [x]_+ denotes rectification.
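The update in footnote 1 can be transcribed directly into NumPy. This is a toy illustration with arbitrary small dimensions and random weights, not the paper's trained 256-unit network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One step of the peephole LSTM from footnote 1, with the cell output
    rectified ([c]_+) so hidden activations are non-negative."""
    i = sigmoid(p["Wxi"] @ x + p["Whi"] @ h_prev + p["wci"] * c_prev + p["bi"])
    f = sigmoid(p["Wxf"] @ x + p["Whf"] @ h_prev + p["wcf"] * c_prev + p["bf"])
    c = f * c_prev + i * np.tanh(p["Wxc"] @ x + p["Whc"] @ h_prev + p["bc"])
    o = sigmoid(p["Wxo"] @ x + p["Who"] @ h_prev + p["wco"] * c + p["bo"])
    h = o * np.tanh(np.maximum(c, 0.0))   # [c]_+ keeps h non-negative
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                        # toy sizes; the paper uses 256 units
p = {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in ["Wxi", "Wxf", "Wxc", "Wxo"]}
p.update({k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in ["Whi", "Whf", "Whc", "Who"]})
p.update({k: rng.normal(0, 0.1, n_hid)
          for k in ["wci", "wcf", "wco", "bi", "bf", "bc", "bo"]})
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), p)
print(h.min())
```

Because tanh of a rectified cell state is non-negative and the output gate is a sigmoid, the hidden activation h is guaranteed non-negative, which is what makes the comparison with firing rates possible.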
network parameters with the smallest validation error are selected. All results are cross-validated
on a separate set of 1000 test trials to ensure the network indeed generalizes across new random
trajectories and/or environments.
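The combined objective described above can be sketched as a weighted sum of the two costs. The fixed 0.5 factor below is a simplification standing in for the tuned weight in the text (the paper tunes the weight so that the realized classification cost is half the localization cost):

```python
import numpy as np

def combined_loss(pred_xy, true_xy, class_logits, class_label, class_weight=0.5):
    """Localization: mean squared error. Classification: softmax
    cross-entropy, down-weighted relative to the localization term."""
    mse = np.mean((pred_xy - true_xy) ** 2)
    logits = class_logits - class_logits.max()            # numerical stability
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    xent = -log_probs[class_label]
    return mse + class_weight * xent

loss = combined_loss(np.array([0.1, -0.2]), np.array([0.0, 0.0]),
                     np.zeros(100), class_label=7)        # 100 environments
print(loss)
```

With uniform (all-zero) logits over 100 classes the cross-entropy term is log 100, so this example evaluates to 0.025 + 0.5·log 100.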
3 Results
3.1 Network performance on spatial tasks rivals optimal performance
3.1.1 Localization in a familiar environment
The trained network, starting a trial from an unknown random initial position and running along a new
random trajectory, quickly localizes itself within the space (Figure 2, red curve). The mean location
error (averaged over new test trials) drops as a function of time in each trial, as the rat encounters
more boundaries in the environment. After about 5 boundary contacts, the initial error has sharply
declined.
Figure 2: Localization in a single familiar environment. Mean absolute error on the localization-only
task (left), radial error measured from origin (middle) and angular error (right). One time step
corresponds to 0.77 seconds. Network performance (red, NN) is compared to that of the particle filter
(black, PF). Also shown: single hypothesis filter (light red, SH) and simple path integration (gray, PI)
estimates as controls.
The drop in error over time and the final error of the network match that of the optimal Bayesian estimator with access to the same noisy sensory data but perfect knowledge of the boundary coordinates
(Figure 2, black). The optimal Bayesian estimator is implemented as a particle filter (PF) with 1000
particles and performs fully probabilistic sequential inference about position, using the environment
coordinates and the noisy sensory data. The posterior location distributions are frequently elongated
in an angular arc and multimodal (thus far from Gaussian).
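For concreteness, one sequential-importance-resampling step of a generic bootstrap particle filter (predict under motion noise, reweight by an observation likelihood, resample) might look as follows. The wall-distance likelihood in the usage example is a stand-in, not the paper's boundary-observation model:

```python
import numpy as np

def pf_step(particles, weights, motion, obs_loglik, motion_noise=0.02, rng=None):
    """One SIR step over 2D position particles. obs_loglik maps an (N, 2)
    array of positions to per-particle log-likelihoods of the current
    (noisy, possibly ambiguous) observation."""
    if rng is None:
        rng = np.random.default_rng(0)
    # predict: apply the noisy motion estimate to every particle
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)
    # update: reweight by the observation likelihood
    logw = np.log(weights + 1e-300) + obs_loglik(particles)
    logw -= logw.max()
    weights = np.exp(logw)
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

rng = np.random.default_rng(1)
particles = rng.uniform(-1, 1, (1000, 2))
weights = np.full(1000, 1e-3)
# stand-in observation: "distance to the wall at x = 1 is about 0.2"
loglik = lambda p: -((1.0 - p[:, 0]) - 0.2) ** 2 / (2 * 0.05 ** 2)
particles, weights = pf_step(particles, weights,
                             motion=np.array([0.01, 0.0]), obs_loglik=loglik)
print(particles[:, 0].mean())  # concentrates near x ≈ 0.8
```

Note that a single wall-distance observation constrains only one coordinate, which is exactly the kind of degenerate cue that produces the elongated, multimodal posteriors described above.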
Both network and PF vastly outperform pure PI. First, since the PI estimate does not have access to
boundary information, it cannot overcome initial localization uncertainty due to the unknown starting
point. Second, the error in the PI estimate of location grows unbounded with time, as expected due to
the accumulating effects of noise in the motion estimates (Figure 2, gray). In contrast, the errors in
the network and PF, which make use of the same motion estimates, remain bounded.
Finally we contrast the performance of the network and PF with the single hypothesis (SH) algorithm,
which updates a single location estimate (rather than a probability distribution) by taking into account
motion, contact, and arena shape. The SH algorithm can be thought of as an abstraction of neural
bump attractor models [12, 13], in which an activity bump is updated using PI and corrected when a
landmark or boundary with known spatial coordinates is observed. The SH algorithm overcomes, to
a certain degree, the initial localization uncertainty due to the unknown starting position, but the error
steadily increases thereafter. It still vastly underperforms the network and PF, since it is not able to
efficiently resolve the complex-shaped uncertainties induced by featureless boundaries.
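An abstraction of this single-hypothesis update, sketched for a unit-square arena rather than the paper's polygons (the wall encoding and correction rule below are a hypothetical simplification):

```python
import numpy as np

def sh_update(pos, motion, contact=None):
    """Single-hypothesis update in the unit square [0, 1]^2: path-integrate,
    and on a boundary contact (wall id plus observed distance) correct only
    the coordinate perpendicular to that wall, as a bump-attractor-style
    model might."""
    pos = pos + motion                        # path integration
    if contact is not None:
        wall, dist = contact                  # wall in {'left','right','bottom','top'}
        if wall == "left":
            pos[0] = dist
        elif wall == "right":
            pos[0] = 1.0 - dist
        elif wall == "bottom":
            pos[1] = dist
        elif wall == "top":
            pos[1] = 1.0 - dist
    return pos

pos = np.array([0.4, 0.5])
pos = sh_update(pos, np.array([0.2, 0.0]))               # drift to (0.6, 0.5)
pos = sh_update(pos, np.array([0.0, 0.0]), ("right", 0.1))
print(pos)  # x corrected to 0.9
```

Because it keeps a single point estimate, this scheme cannot represent the arc-shaped, multimodal uncertainty left by a featureless wall contact, which is why it underperforms both the network and the PF.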
3.1.2 Localization in novel environments
The network is trained to localize within a different environment in each trial, then tested on a set of
trials in different novel environments.
Strikingly, the network localizes well in the novel environments, despite its ignorance about their
specific geometry (Figure 3A, red). While the network (unsurprisingly) does not match the performance of an oracular PF that is supplied with the arena geometry at the beginning of the trial
(Figure 3A, black), its error exceeds the oracular PF by only ∼50%, and it vastly outperforms
PI-based estimation (Figure 3A, gray) and a naive Bayesian (NB) approach that takes into account the
distribution of locations across the ensemble of environments (Figure 3A, reddish-gray; SI section 8).
Compared to robotic SLAM in open-field environments, this task setting is especially difficult since
distant boundary information is gathered only from sparse contacts, rather than spatially extended
and continuous measurements with laser or radar scanners.
3.1.3 Localization in and classification of 100 familiar environments
The network is trained on 100 environments then tested in an arbitrary environment from that set. The
goal is to identify the environment and localize within it, from a known starting location. Localization
initially deteriorates because of PI errors (Figure 3B, red). After a few boundary encounters, the
network correctly identifies the environment (Figure 3C), and simultaneously, localization error drops
as the network now associates the boundary with coordinates for the appropriate environment. The
network's localization error post-classification matches that of an oracular PF with full knowledge
about the environment geometry. Within 200s of exploration within the environment, classification
performance is close to 100%.
As a measure of the efficacy of the neural network in solving the specialized task, we compare its
performance to PFs that do not know the identity of the environment at the outset of the trial (PF
SLAM) and that perform both localization and classification, with varying numbers of particles,
Figure 3D-E. For classification, the asymptotic network performance with 256 recurrent units is
comparable to a 10,000 particle PF SLAM, while for localization, the asymptotic network performance
is comparable to a 4,000 particle PF SLAM, suggesting that the network is extremely efficient. Even
the 10,000 particle PF SLAM classification estimate sometimes collapses prematurely onto a value
that is not always the correct one. The network is slower to select a classification, and is more accurate, improving on
a common problem with particle-filter based SLAM caused by particle depletion.
Figure 3: Localization and classification in the generalized and specialized SLAM tasks. A Localization performance of the generalized network (red, NN) tested in novel environments, compared to a
PF that knows the environment identity (black, oracular PF). Controls: PI only (gray, PI) and a naive
Bayes filter (see text and SI; reddish-gray, NB). B Same as (A), but for the specialized network tested
in 100 familiar environments. C Classification performance of the specialized network in 100 familiar
environments. D-E Localization and classification by a SLAM PF with different number of particles,
compared to the specialized network in 100 familiar environments. F Classification performance of
the general network after retraining of the readout weights on the specialized task.
3.1.4 Spontaneous classification of novel environments
In robotic SLAM, algorithms that self-localize accurately in novel environments in the presence of
noise must simultaneously build a map of the environments. Since the network in the general task
in Figure 3A successfully localizes in novel environments, we conjecture that it must entertain a
spontaneous representation of the environment, even though the environments are quite similar to
each other.
To test this hypothesis we fix the input and recurrent weights of the network trained on the generalized
task (completely novel environments) and retrain it on the specialized task (one out of a hundred
familiar environments), whereby only the readout weights are trained for classification. We find that
the classification performance late in each trial is close to 80%, much higher than chance (1%), Figure
3F. This implies that the hidden neurons spontaneously build a representation that separates novel
environments so they can be linearly classified. This separation can be interpreted as a simple form
of spontaneous map-building. However, this spontaneous map-building is done with fixed weights; this is different from standard Hopfield-type network models that require synaptic plasticity to learn a
new environment.
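The retraining procedure described above amounts to fitting a linear classifier on the frozen hidden states. A minimal sketch of that idea (synthetic states and a ridge-regularized least-squares readout; the shapes, data, and regularizer are illustrative assumptions, not the paper's actual training code):

```python
import numpy as np

def fit_linear_readout(H, labels, n_classes, ridge=1e-3):
    """Least-squares readout from frozen hidden states H (n_samples x n_units)
    to one-hot environment labels; only these weights are 'trained'."""
    Y = np.eye(n_classes)[labels]                  # one-hot targets
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])  # append a bias column
    A = Hb.T @ Hb + ridge * np.eye(Hb.shape[1])    # ridge-regularized normal equations
    return np.linalg.solve(A, Hb.T @ Y)

def readout_predict(W, H):
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return np.argmax(Hb @ W, axis=1)

# toy check: hidden states drawn from per-environment clusters are linearly separable
rng = np.random.default_rng(0)
n_env, per_env, n_units = 5, 40, 32
centers = rng.normal(size=(n_env, n_units))
H = np.vstack([c + 0.1 * rng.normal(size=(per_env, n_units)) for c in centers])
labels = np.repeat(np.arange(n_env), per_env)
W = fit_linear_readout(H, labels, n_env)
acc = np.mean(readout_predict(W, H) == labels)
```

If the hidden states separate environments, as the network's do, this readout classifies well above chance.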
3.2
Comparison with and predictions for neural representation
Neural activity in the hippocampus and entorhinal cortex, areas involved in spatial navigation, has
been extensively catalogued, usually while animals chase randomly dropped food pellets in open
field environments. It is not always clear what function the observed responses play in solving hard
navigation problems, or why certain responses exist. Here we compare the responses of our network,
which is trained to solve such tasks, with the experimental phenomenology.
Hidden units in our network exhibit stable place tuning, similar to place cells in CA1/CA3 of the
hippocampus [14, 15, 16], Figure 4A,B (left two columns). Stable place fields are observed across
tasks: the network trained to localize in a single familiar environment exhibits stable fields there,
while the networks trained on the specialized and generalized tasks exhibit repeatedly stable fields in
all tested environments.
Figure 4: Neuron-like representations. A Spatial tuning of four typical hidden units from the
specialized network, measured twice with different trajectories in the same environment (columns
1-2, blue box). The same cells are measured in a second environment (column 3, red box). B Same
as A but for the generalized network; both environments were not in the training set. C Hidden
units (representative sample of 20) are not tuned to head direction. D Cumulative distribution of
similarity of hidden unit states in the specialized (top) and generalized (bottom) networks, for trials
in the same environment (blue) versus trials in different environments (purple). Control: similarity
after randomizing over environments (gray). E Spatial selectivities of hidden units in the specialized
network. Inset: spatial selectivity (averaged across environments) versus effective projection strength
to classifier neurons, per hidden unit.
The hidden units, all of which receive head direction inputs and use this data to compute location estimates, nevertheless exhibit weak to nil head direction tuning, Figure 4C, again similar to observations
in rodent place cells [17] (but see [18] for a report of head direction tuning in bat place cells).
Between different environments, the network trained on the specialized task exhibits clear global
remapping [19, 20]: cells fire in some environments and not others, and cells that were co-active
in one environment are not in another, Figure 4A,B (third column). Strikingly, the network trained
on the generalized task exhibits stable and reproducible maps of different novel environments with
remapping, even though the input and recurrent connections were never readjusted for these novel
environments, Figure 4B.
The similarity and dissimilarity of the representations within the same environment and across
environments, in the specialized and generalized tasks are quantified in Figure 4D: the representations
are randomized across environments but stable within an environment.
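The within- versus across-environment comparison quantified in Figure 4D can be sketched as cosine similarity between hidden-state vectors from trials in the same versus different environments (synthetic states here; the shapes and noise level are assumptions, not the paper's data):

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
n_env, trials, n_units = 10, 6, 64
env_maps = rng.normal(size=(n_env, n_units))   # stable per-environment representation
# each trial: the environment's map plus trial-to-trial noise
states = env_maps[:, None, :] + 0.3 * rng.normal(size=(n_env, trials, n_units))

within, across = [], []
for e in range(n_env):
    for i in range(trials):
        for j in range(i + 1, trials):
            within.append(cosine_sim(states[e, i], states[e, j]))
for e in range(n_env):
    for f in range(e + 1, n_env):
        across.append(cosine_sim(states[e, 0], states[f, 0]))

mean_within, mean_across = np.mean(within), np.mean(across)
```

Stable-but-remapped representations show high within-environment similarity and near-zero across-environment similarity, as in Figure 4D.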
For networks trained on the specialized or generalized tasks, the spatial selectivity of hidden units in
an environment - measured as the fraction of the variance of each hidden neuron's activation that
can be explained by location - is broad and long-tailed or sparse, Figure 4E: a few cells exhibit
high selectivity, many have low selectivity. Interestingly, cells with low spatial selectivity in one
environment also tend to have low selectivity across environments (in other words, the distribution in
selectivity per cell across environments is narrower than the distribution of selectivity across cells
per environment). Indeed, spatial information in hippocampal neurons seems to be concentrated in a
small set of neurons [21], an experimental observation that seemed to run counter to the information-theoretic view that whitened representations are most efficient. However, our 256-neuron recurrent network, which efficiently solves a hard task that requires 10^4 particles, seems to do the same.
There is a negative correlation between spatial selectivity and the strength of feedforward connections
to the classification units: Hidden units that more strongly drive classification also tend to be less
spatially selective, Figure 4E (inset). In other words, some low spatial selectivity cells correspond to
what are termed context cells [22]. It remains unclear and the focus of future work to understand the
role of the remaining cells with low spatial selectivity.
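The selectivity measure used above - the fraction of a unit's activation variance explained by location - can be computed by binning positions and comparing within-bin to total variance. A sketch with made-up data (the bin count, noise levels, and place-field width are assumptions):

```python
import numpy as np

def spatial_selectivity(activity, pos, bins=8):
    """Fraction of one unit's activity variance explained by binned 2D location."""
    ix = np.clip((pos[:, 0] * bins).astype(int), 0, bins - 1)
    iy = np.clip((pos[:, 1] * bins).astype(int), 0, bins - 1)
    bin_id = ix * bins + iy
    total = activity.var()
    if total == 0:
        return 0.0
    # residual after removing each bin's mean = within-bin (unexplained) variance
    resid = np.zeros_like(activity)
    for b in np.unique(bin_id):
        m = bin_id == b
        resid[m] = activity[m] - activity[m].mean()
    return float(1.0 - resid.var() / total)

rng = np.random.default_rng(2)
pos = rng.random((4000, 2))
# a unit with a Gaussian place field at the arena center, plus noise
place_unit = np.exp(-np.sum((pos - 0.5) ** 2, axis=1) / 0.02) + 0.05 * rng.normal(size=4000)
noise_unit = rng.normal(size=4000)                  # no spatial tuning
s_place = spatial_selectivity(place_unit, pos)
s_noise = spatial_selectivity(noise_unit, pos)
```

A place-tuned unit scores high on this measure while an untuned unit scores near zero, reproducing the long-tailed spread in Figure 4E.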
3.3
Inner workings of the network
[Figure 5 plot data: in panel A, predicted versus particle-filter covariances correlate at roughly r = 0.45-0.49 across the three components; panel B snapshots are taken at about 3.9, 8.6, 10.1, and 30.3 s; panel C dimension fits give d = 5.0 +/- 0.04, d = 5.6 +/- 0.03, and d = 8.6 +/- 0.1.]
Figure 5: Inner workings of the network A Hidden units in the localization-only network predict
the covariances (Cxx , Cyy , Cxy ) of the posterior location (x, y) distributions in the particle filter. B
Light red: snapshots of the narrowing set of potential environment classifications by the specialized
neural network at different early times in a trajectory, as determined by the activation of classifier
neurons in the output layer. C Dimensionality of the hidden representations: localization network
(top), specialized network (middle), generalized network (bottom). Dimensionality estimated from
across-environment pooled responses for the latter two networks.
Beyond the similarities between representations in our hidden units and neural representations, what
can we learn about how the network solves the SLAM problem?
The performance of the network compared to the particle filter (and its superiority to simpler
strategies used as controls) already implies that the network is performing sophisticated probabilistic
computations about location. If it is indeed tracking probabilities, it should be possible to predict the
uncertainties in location estimation from the hidden units. Indeed, all three covariance components
related to the location estimate of the particle filter can be predicted by cross-validated linear
regression from the hidden units in the localization-only network (Figure 5A).
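This covariance-prediction analysis is a cross-validated linear regression from hidden states to a posterior-covariance component. A sketch with synthetic data (the real analysis uses the trained network's hidden states and the particle filter's Cxx, Cyy, Cxy; fold count and ridge parameter are assumptions):

```python
import numpy as np

def cv_linear_r(H, y, k=5, ridge=1e-3, seed=0):
    """k-fold cross-validated Pearson r between ridge-regression predictions and target."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty_like(y)
    for f in folds:
        train = np.setdiff1d(idx, f)                       # all indices not in this fold
        Hb = np.hstack([H[train], np.ones((len(train), 1))])
        A = Hb.T @ Hb + ridge * np.eye(Hb.shape[1])
        w = np.linalg.solve(A, Hb.T @ y[train])
        preds[f] = np.hstack([H[f], np.ones((len(f), 1))]) @ w
    return float(np.corrcoef(preds, y)[0, 1])

rng = np.random.default_rng(3)
n, units = 1000, 50
H = rng.normal(size=(n, units))                            # stand-in hidden states
w_true = rng.normal(size=units)
cxx = H @ w_true + 0.5 * rng.normal(size=n)                # stand-in posterior variance
r = cv_linear_r(H, cxx)
```

When the target is linearly decodable from the hidden states, the held-out correlation is high; the network's states decode the PF uncertainties in just this sense.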
When first placed into one of 100 familiar environments, the specialized network simultaneously
entertains multiple possibilities for environment identity, Figure 5B. The activations of neurons in
the soft-max classification layer may be viewed as a posterior distribution over environment identity.
With continued exploration and boundary encounters, the represented possibilities shrink until the
network has identified the correct environment.
Unlike the particle filter and contrary to neural models that implement probabilistic inference by
stochastic sampling of the underlying distribution [23], this network implements ongoing near-optimal
probabilistic location estimation through fully deterministic dynamics.
Location in 2D spaces is a continuous 2D metric variable, so one might expect location representations
to lie on a low-dimensional manifold. On the other hand, SLAM also involves the representation of
landmark and boundary coordinates and the capability to classify environments, which may greatly
expand the effective dimension of a system solving the problem. We analyze the fractal manifold
dimension of the hidden layer activities in the three networks, Figure 5C2 . The localization-only
network has a dimension D = 5.0. Surprisingly, the specialized network states (pooled across all 100
environments) are equally low-dimensional: D = 5.6. The generalized network states, pooled across
environments, have dimension D = 8.6. (The dimensionality of activity in the latter two networks,
considered in single environments only, remains the same as when pooled across environments.)
This implies that the network extracts and represents only the most relevant summary statistics
required to solve the 2D localization tasks, and that these statistics have fairly low dimension. These
dimension estimates could serve as a prediction for hippocampal dynamics in the brain.
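The correlation-dimension estimate described in the footnote - count states within radius r of each point and take the slope of log(count) versus log(r) - can be sketched as follows (a Grassberger-Procaccia-style estimator on synthetic points; the radii and sample size are assumptions):

```python
import numpy as np

def correlation_dimension(X, radii):
    """Slope of log(mean #neighbors within r) vs. log r over the given radii."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    counts = [np.mean(np.sum(d < r, axis=1) - 1) for r in radii]  # exclude self
    logs = np.log(np.maximum(counts, 1e-12))
    slope, _ = np.polyfit(np.log(radii), logs, 1)
    return float(slope)

rng = np.random.default_rng(4)
X2 = rng.random((800, 2))                     # points filling a 2D manifold
d_est = correlation_dimension(X2, radii=np.linspace(0.05, 0.2, 8))
```

On uniformly sampled 2D points the estimate is close to 2 (boundary effects pull it slightly below); applied to hidden-state trajectories it yields the D values quoted above.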
4
Discussion
By training a recurrent network on a range of challenging navigation tasks, we have generated ?
to our knowledge ? the first fully neural SLAM solution that is as effective as particle filter-based
implementations. Existing neurally-inspired SLAM algorithms such as RatSLAM [24] have combined
attractor models with semi-metric topological maps, but only the former was neurally implemented.
[25] trained a bidirectional LSTM network to transform laser range sensor data into location estimates,
but the network was not shown to generalize across environments. In contrast, our recurrent network
implementation is fully neural and generalizes successfully across environments with very different
shapes. (Note that since this paper was under review, a new paper implementing SLAM using a
network with recurrent components has appeared [26].)
Previous hand-designed models such as the multichart attractor model of Samsonovich & McNaughton [12] could path integrate and use landmark information to correct the network's PI estimate
in many different environments. Yet our model substantially transcends those computational capabilities: First, our model performs sequential probabilistic inference, not simply a hard resetting of the
PI estimate according to external cues. Second, our network reliably localizes in 100 environments
with 256 LSTM units (which corresponds to 512 dynamical units); the low capacity of the multichart
attractor model would require about 175,000 neurons for the same number of environments. This
comparison suggests that the special network architecture of the LSTM not only affects learnability,
but also capacity. Finally, unlike the multichart attractor model, our model is able to linearly separate
completely novel environments without changing its weights, as shown in section 3.1.4.
Despite its success in reproducing key elements of the phenomenology of the hippocampus, our
network model does not incorporate many biological constraints. This is in itself interesting, since
it suggests that observed phenomena like stable place fields and remapping may emerge from the
computational demands of hard navigation tasks rather than from detailed biological constraints. It
will be interesting to see whether incorporating constraints like Dale's law and the known gross architecture of the hippocampal circuit results in the emergence of additional features associated with the
brain?s navigation circuits, such as sparse population activity, directionality in place representations
in 1D environments, and grid cell-like responses.
The choice of an LSTM architecture for the hidden layer units, involving multiplicative input, output
and forget gates and persistent cells, was primarily motivated by its ability to learn long time dependencies. One might wonder whether such multiplicative interactions could be implemented
in biological neurons. A model by [27] proposed that dendrites of granule cells in the dental gyrus
contextually gate projections from grid cells in the entorhinal cortex to place cells. Similarly, granule
2
To estimate the fractal dimension, we use the "correlation dimension": measure the number of states across
trials that fall into a ball of radius r around a point in state space. The slope of log(#states) versus log(r) is the
fractal dimension at that point.
cells could implement LSTM gates by modulating recurrent connections between pyramidal neurons
in hippocampal area CA3. LSTM cells might be interpreted as neural activity or as synaptic weights
updated by a form of synaptic plasticity.
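For reference, the multiplicative gating discussed here is the standard LSTM update: sigmoidal input, forget, and output gates multiplicatively control what enters, persists in, and is read out of the cell state. A minimal single-step sketch (weights are random placeholders, not the trained network's):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step. W, U stack the input/forget/output/candidate maps."""
    z = W @ x + U @ h + b                      # stacked pre-activations, length 4*n
    n = len(c)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2*n]), sig(z[2*n:3*n])
    g = np.tanh(z[3*n:])                       # candidate content
    c_new = f * c + i * g                      # multiplicative gating of the memory cell
    h_new = o * np.tanh(c_new)                 # gated readout
    return h_new, c_new

rng = np.random.default_rng(5)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

The contextual-gating proposal above maps the `f` and `i` multiplications onto dendritic modulation of recurrent connections.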
The learning of synaptic weights by gradient descent does not map well to biologically plausible
synaptic plasticity rules, and such learning is slow, requiring a vast number of supervised training
examples. Our present results offer a hint that, through extensive learning, the generalized network
acquires useful general prior knowledge about the structure of natural navigation tasks, which it then
uses to map and localize in novel environments with minimal further learning. One could thus argue
that the slow phase of learning is evolutionary, while learning during a lifetime can be brief and
driven by relatively little experience in new environments. At the same time, progress in biologically
plausible learning may one day bridge the efficiency gap to gradient descent [28].
Finally, although our work is focused on understanding the phenomenology of navigation circuits
in the brain, it might also be of some interest for robotic SLAM. SLAM algorithms are sometimes
augmented by feedforward convolutional networks to assist in specific tasks like place recognition
(see e.g. [29]) from camera images, but the geometric calculations and parameters at the core of
SLAM algorithms are still largely hand-specified. By contrast, this work provides a proof of concept
for the feasibility end-to-end learning of SLAM algorithms using recurrent neural networks and
shows that the trained network provides a powerful solution to the particle depletion problem that
plagues many particle filter-based approaches to SLAM and is highly effective in identifying which
low-dimensional summary statistics to update over time.
References
[1] Etienne Save, Ludek Nerad, and Bruno Poucet. Contribution of multiple sensory information to place field stability in hippocampal place cells. Hippocampus, 10(1):64-76, 2000.
[2] Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I. Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436:801-806, 2005.
[3] Allen Cheung, David Ball, Michael Milford, Gordon Wyeth, and Janet Wiles. Maintaining a cognitive map in darkness: the need to fuse boundary knowledge with path integration. PLoS Comput Biol, 8(8):e1002651, 2012.
[4] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT Press, 2005.
[5] Valerio Mante, David Sussillo, Krishna V Shenoy, and William T Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78-84, 2013.
[6] Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619-8624, 2014.
[7] Adam Marblestone, Greg Wayne, and Konrad Kording. Towards an integration of deep learning and neuroscience. arXiv preprint arXiv:1606.03813, 2016.
[8] Robert U Muller and John L Kubie. The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. Journal of Neuroscience, 7(7):1951-1968, 1987.
[9] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] A Samsonovich and B L McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. J Neurosci, 17(15):5900-5920, 1997.
[13] Yoram Burak and Ila R Fiete. Fundamental limits on persistent activity in networks of noisy neurons. Proc Natl Acad Sci U S A, 109(43):17645-50, Oct 2012.
[14] J O'Keefe and J Dostrovsky. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res, 34(1):171-175, 1971.
[15] John O'Keefe and Lynn Nadel. The hippocampus as a cognitive map. Behavioral and Brain Sciences, 2(04):487-494, 1979.
[16] Matthew A Wilson and Bruce L McNaughton. Dynamics of the hippocampal ensemble code for space. Science, 261(5124):1055-1058, 1993.
[17] Robert U Muller, Elizabeth Bostock, Jeffrey S Taube, and John L Kubie. On the directional firing properties of hippocampal place cells. The Journal of Neuroscience, 14(12):7235-7251, 1994.
[18] Alon Rubin, Michael M Yartsev, and Nachum Ulanovsky. Encoding of head direction by hippocampal place cells in bats. The Journal of Neuroscience, 34(3):1067-1080, 2014.
[19] J O'Keefe and DH Conway. Hippocampal place units in the freely moving rat: why they fire where they fire. Experimental Brain Research, 31(4):573-590, 1978.
[20] Robert U. Muller, John L. Kubie, E. M. Bostock, J. S. Taube, and G. J. Quirk. Spatial firing correlates of neurons in the hippocampal formation of freely moving rats, pages 296-333. Oxford University Press, New York, NY, US, 1991.
[21] György Buzsáki and Kenji Mizuseki. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience, 15(4):264-278, 2014.
[22] David M Smith and Sheri J Y Mizumori. Hippocampal place cells, context, and episodic memory. Hippocampus, 16(9):716-729, 2006.
[23] József Fiser, Pietro Berkes, Gergő Orbán, and Máté Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14(3):119-130, 2010.
[24] Michael Milford and Gordon Wyeth. Persistent navigation and mapping using a biologically inspired SLAM system. The International Journal of Robotics Research, 29(9):1131-1153, 2010.
[25] Alexander Förster, Alex Graves, and Jürgen Schmidhuber. RNN-based learning of compact maps for efficient robot localization. In ESANN, pages 537-542, 2007.
[26] J. Zhang, L. Tai, J. Boedecker, W. Burgard, and M. Liu. Neural SLAM. arXiv preprint arXiv:1706.09520, 2017.
[27] Robin M Hayman and Kathryn J Jeffery. How heterogeneous place cell responding arises from homogeneous grids - a contextual gating hypothesis. Hippocampus, 18(12):1301-1313, 2008.
[28] Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, and Zhouhan Lin. Towards biologically plausible deep learning. arXiv preprint arXiv:1502.04156, 2015.
[29] Niko Sunderhauf, Sareh Shirazi, Feras Dayoub, Ben Upcroft, and Michael Milford. On the performance of convnet features for place recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 4297-4304. IEEE, 2015.
Biologically Plausible Local Learning Rules for
the Adaptation of the Vestibulo-Ocular Reflex
Olivier Coenen*
Terrence J. Sejnowski
Computational Neurobiology Laboratory
Howard Hughes Medical Institute
The Salk Institute
P.O.Box 85800
San Diego, CA 92186-5800
Stephen G. Lisberger
Department of Physiology
W.M. Keck Foundation Center
for Integrative Neuroscience
University of California,
San Fransisco, CA, 94143
Abstract
The vestibulo-ocular reflex (VOR) is a compensatory eye movement that
stabilizes images on the retina during head turns. Its magnitude, or gain,
can be modified by visual experience during head movements. Possible
learning mechanisms for this adaptation have been explored in a model
of the oculomotor system based on anatomical and physiological constraints. The local correlational learning rules in our model reproduce the
adaptation and behavior of the VOR under certain parameter conditions.
From these conditions, predictions for the time course of adaptation at
the learning sites are made.
1
INTRODUCTION
The primate oculomotor system is capable of maintaining the image of an object on the
fovea even when the head and object are moving simultaneously. The vestibular organs
provide information about the head velocity with a short delay of 14 ms, but visual signals
from the moving object are relatively slow and can take 100 ms to affect eye movements.
The gain of the VOR, defined as minus the eye velocity over the head velocity (−ė/ḣ),
can be modified by wearing magnifying or diminishing glasses (figure 1). VOR adaptation,
absent in the dark, is driven by the combination of image slip on the retina and head turns.
?University of California, San Diego. Dept. of Physics. La Jolla, CA, 92037. Email address:
[email protected]
During head turns on the first day of wearing magnifying glasses, the magnified image of
an object slips on the retina. After a few days of adaptation, the eye velocity and hence the
gain of the VOR increases to compensate for the image magnification.
We have constructed a model of the VOR and smooth pursuit systems that uses biologically
plausible local learning rules that are consistent with anatomical pathways and physiological
recordings. The learning rules in the model are local in the sense that the adaptation of a
synapse depends solely on signals that are locally available. A similar model with different
local learning rules has been recently proposed (Quinn et al., Neuroscience 1992).
[Figure 1 plot data: VOR gain versus time in days, with spectacles-on and spectacles-off epochs marked for the magnifying (upper) and x0.5 diminishing (lower) conditions. The gain-increase curves are fit by exponentials of the form Gain = 1.01 + 0.68(1 − e^(−t/τ)) during adaptation and Gain = 1.01 + 0.68 e^(−t/τ) during recovery, with a smaller-amplitude (≈0.27) analogous fit for the diminishing-spectacle data.]
Figure 1: Time course of the adapting VOR and its recovery of gain in monkeys exposed to the long-term influence of magnifying (upper curves) and diminishing (lower curves) spectacles. Different
Melvill Jones (1991), selected from Miles and Eighmy (1980).
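The exponential time courses fit in Figure 1 are exactly what a first-order adaptation rule produces: if the gain relaxes toward the spectacle-imposed target at rate 1/τ, then G(t) = g* + (G0 − g*) e^(−t/τ). A sketch (the target gain and time constant below are illustrative, not the fitted values from the figure):

```python
import math

def vor_gain(t, g_target, g0, tau):
    """Closed-form solution of first-order gain adaptation: dG/dt = (g_target - G)/tau."""
    return g_target + (g0 - g_target) * math.exp(-t / tau)

# magnifying spectacles: gain adapts upward from 1.0 toward an assumed target of 1.7
g_day0 = vor_gain(0.0, g_target=1.7, g0=1.0, tau=2.0)  # time in days
g_day6 = vor_gain(6.0, g_target=1.7, g0=1.0, tau=2.0)
```

After several time constants the gain has nearly reached its target, matching the saturating curves in the figure.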
2
THE MODEL
Feedforward and recurrent models of the VOR have been proposed (Fujita, 1982; Galiana,
1986; Kawato and Gomi, 1992; Quinn et al., 1992; Arnold and Robinson, 1992; Lisberger
and Sejnowski, 1992). In this paper we study a static and linear version of a previously
studied recurrent network model of the VOR and smooth pursuit system (Lisberger, 1992;
Lisberger and Sejnowski, 1992; Viola, Lisberger and Sejnowski, 1992). The time delays
and time constants associated with nodes in the network were eliminated so that the time
course of the VOR plasticity could be more easily analyzed (figure 2).
The model describes the system ipsilateral to one eye. The visual error, which carries the
image retinal slip velocity signal, is a measure of the performance of both the VOR and
smooth pursuit system as well as the main error signal for learning. The value at each node
represents changes in its firing rate from its resting firing rate. The transformation from the
rate of firing of premotor signal (N) to eye velocity is represented in the model by a gain
Biologically Plausible Local Learning Rules for Adaptation of Vestibulo-Ocular Reflex
[Figure 2 diagram: head velocity ḣ feeds P (Purkinje cell) through gain A and N (vestibular nucleus) through gain D; P is inhibitory onto N; eye velocity ė feeds back to P through gain b; the visual error −(gḣ + ė), with g the desired gain, reaches P through the mossy fibers (gain v) and N through the climbing fibers.]
Figure 2: Diagram of the VOR and smooth pursuit model. The input and output of the model are,
respectively, head velocity and eye velocity. The model has three main parts: the node P represents
an ensemble of Purkinje cells from the ventral paraflocculus of the cerebellum, the node N represents
an ensemble of flocculus-target neurons in the vestibular nucleus, and the visual inputs which provide
the visual error signals in the mossy and climbing fibers. The capital letter gains A and D, multiplying
the input signals to the nodes, are modified according to their learning rules. The lower case letters
b, v, and 9 are also multiplicative gains, but remain constant during adaptation. The traces represent
head and eye velocity modulation in time. The visual error signal in the climbing fibers drives learning
in node N but does not constitute one of its inputs in the present model.
of −1. The gain of the VOR in this model is given by (D − A)/(1 − b). We have not modeled the
neural integrator that converts eye velocity commands to eye position signals that drive the
motoneurons.
3
LEARNING RULES
We have adopted the learning rules proposed by Marr (1969), Albus (1971) and Ito (1970)
for adaptation in the cerebellum and by Lisberger (1988), Miles and Lisberger (1981) for
plasticity in the brain stem (figure 3). These are variations of the delta rule and depend on
an explicit representation of the error signal at the synapses.
Long term depression at mossy fiber synapses on Purkinje cells has been observed in
vitro under simultaneous stimulation of climbing fibers and mossy fibers (Ito, Sakurai and
Tongroach, 1982). In addition, we have included a learning mechanism for potentiation
of mossy fiber head velocity inputs under concurrent mossy fiber visual and head velocity
inputs. Although the climbing fiber inputs to the cerebellum were not directly represented
in this model (figure 2), the image velocity signal carried by the mossy fibers to P was used
in the model to achieve the same result.
There is good indirect evidence that learning also occurs in the vestibular nucleus. We
have adopted the suggestion of Lisberger (1988) that the effectiveness of the head velocity
input to some neurons in the vestibular nucleus may be modified by head velocity input in
Δ(gain) ∝ (Learning Rate) × (Input Signal) × (Error Signal)

Cerebellum (P):

Ȧ = η_A × (Head Velocity) × (Mossy fiber visual signal)
  = η_A × ḣ × [−v(gḣ + ė)]
  = η_A × ḣ × [−v((g − D)ḣ + P)]  ∝ ḣ²

Vestibular nucleus (N):

Ḋ = η_D × (Head Velocity) × [(1 − q) × (Climbing fiber visual signal) − q × (Purkinje signal)]
  = η_D × ḣ × [(1 − q)(gḣ + ė) − qP]
  = η_D × ḣ × [(1 − q)(g − D)ḣ + (1 − 2q)P]  ∝ ḣ²

where

P = [A − bD − (g − D)v] ḣ / (1 − b + v).

Figure 3: Learning rules for the cerebellum and vestibular nucleus. The gains A and D change
according to the correlation of their input signal and the error signal to the node, as shown
at the top. The parameter q determines the proportion of learning from Purkinje cell inputs compared
to learning from climbing fiber inputs. When q = 1, only Purkinje cell inputs drive the adaptation at
node N; if q = 0, learning occurs solely from climbing fiber inputs.
association with Purkinje cells firing. We have also added adaptation from pairing the head
velocity input with climbing fiber firing. The relative effectiveness of these two learning
mechanisms is controlled by the parameter q (figure 3).
Learning for gain D depends on the interplay between several signals. If the VOR gain is
too small, a rightward head turn (a positive value for head velocity) results in too small a
leftward eye turn (a negative value for eye velocity). Consequently, the visual scene appears
to move to the left (negative image slip). P then fires below its resting level (negative) and
its inhibitory influence on N decreases so that N increases its firing rate (figure 4 bottom
left). This corrects the VOR gain and increases gain D according to figure 3. Concurrently,
the climbing fiber visual signal is above resting firing rate (positive) which also leads to an
increase in gain D.
Since the signal passing through gain A has an inhibitory influence on N via P, decreasing
gain A has the opposite effect on the eye velocity as decreasing gain D. Hence, if the VOR
is too small we expect gain A to decrease. This is what happens during the early phase of
learning (figure 4 top left).
4
RESULTS
Finite difference equations of the learning rules were used to calculate changes in gains A
and D at the end of each cycle during our simulations. A cycle was defined as one biphasic
Biologically Plausible Local Learning Rules for Adaptation of Vestibulo-Ocular Reflex
[Figure 4 plots: top, gains A, D and VOR gain G versus time (short-term left, long-term right); bottom, P and N responses to a head turn during learning versus time.]

Figure 4: Simulation of change in gain from 1.0 to 1.6. Top: Short-term (left) and long-term (right)
adaptation of the gains A, D and G. Bottom: Changes on two time scales of P and N responses to a
head turn of amplitude 1 during learning. The parameters were v = 1.0, b = 0.88, η_A/η_D = 10,
and q = 0.01.
head velocity input as shown in figure 2. We assumed that the learning rates were so small
that the changes in gains, and hence in the node responses, were negligibly small during
each iteration. This allowed the replacement of A(t) and D(t) by their values obtained on
the previous iteration for the calculations of Ȧ and Ḋ. The period of the iteration as well
as the amplitude of the head velocity input were chosen so that the integral of the head
velocity squared over one iteration equaled 1.
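This iteration can be sketched in a few lines (a minimal sketch: the updates follow the learning rules recoverable from figure 3, but the sign conventions, initial gains, and absolute learning rates are assumed; only the parameters v, b, g, q and the ratio η_A/η_D = 10 come from the figure 4 caption):

```python
# Finite-difference sketch of the VOR learning model (assumptions noted above).
v, b, g, q = 1.0, 0.88, 1.6, 0.01   # figure-4 parameters, desired gain g = 1.6
eta_D = 1e-3                         # illustrative learning rate
eta_A = 10 * eta_D                   # eta_A / eta_D = 10
A, D = 1.0, 1.0                      # assumed initial gains
h = 1.0                              # head-velocity amplitude; h^2 integrates to 1

for _ in range(100_000):
    P = (A - b * D - (g - D) * v) * h / (1.0 - b + v)  # Purkinje-cell response
    e = P - D * h                    # eye velocity (gain of -1 applied to N)
    err = g * h + e                  # retinal-slip error signal
    A += eta_A * h * (-v * err)      # cerebellar rule: head velocity x mossy-fiber signal
    D += eta_D * h * ((1 - q) * err - q * P)  # vestibular-nucleus rule

# The VOR gain -e/h settles at the desired gain g, while learning slowly
# transfers from A to D (D climbs toward g and P returns toward zero).
```

Under these assumptions the simulation shows the qualitative behavior described in the text: the gain reaches g quickly through a fast change in A, after which D takes over on a much slower time scale.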
For the simulations shown in figure 4 the gain G of the VOR increased monotonically from
1 to reach the desired value 1.6 within 60 time steps. This rapid adaptation was mainly
due to a rapid decrease in A, as expected from the local learning rule (figure 3), since the
learning rate η_A was greater than the learning rate η_D. Over a longer time period, learning
was transferred from A to D: D increased from 1 to reach its final value 1.6 while the VOR
gain stayed constant. Transfer of learning occurs when P fires in conjunction with a head
turn. P can have an elevated firing rate even though the visual error signal is zero (that is,
even if the VOR gain G has reached the desired gain g) because of the difference between
its two other inputs: the head velocity input through A and the eye velocity feedback input
through b. It is only when these two inputs become equal in amplitude that P firing goes
to zero. It can be shown that when learning settles (when Ḋ and Ȧ equal zero) D = g,
A = bg, and P = 0. With these values for A and D, the two other inputs to P are indeed
equal in amplitude: one equals Aḣ, while the other equals b(−1)Dḣ. During the later
part of learning, gain A is driven in the opposite direction (increase) than during the earlier
part (decrease). This comes from a sign reversal of the visual error input to P. After the
first 60 time steps, the gain has reached the desired gain due to a rapid decrease in A; this
means that any subsequent increase in D, due to transfer of learning as explained above,
will cause the gain of the VOR G to become larger than the desired gain g, hence the visual
error changes sign. In order to compensate for this small error, gain A increases promptly,
keeping G very close to the desired gain. This process goes on until A and D reach their
equilibrium values stated above.
The short and long-term changes in P and N responses to a velocity step are also shown.
As the firing of P decreased with the adaptation of A, the firing rate of N increased to the
right level.
5
OVERSHOOT OF THE VOR GAIN G
In this section we show that for some ranges of the learning parameters, the gain G in
the model overshoots the desired value g. Since an overshoot is not observed in animals
(figure 1), this provides constraints on the parameters. The parameter q in the learning rule
for the vestibular nucleus (node N, gain D) determines the proportion of learning from
Purkinje cell inputs compared to learning from climbing fiber inputs. When q = 1, only
Purkinje cell inputs drive the adaptation at node N; if q = 0, learning at N occurs solely
from climbing fiber inputs. These two inputs have quite different effects on learning as
shown in figure 5. Asymptotically, P goes to 0 and D goes to g if q = 1, and P can only
differ from 0 if q = 0. The gain has an overshoot for any value of q different from 0, as
shown in figure 6. Nevertheless, its amplitude is only significant for a limited extent in the
parameter space of q and r (graph of figure 6). The overshoot is reduced with a smaller q
and a larger r. One possibility is that q is chosen close to 0 and r > 1, that is η_A > η_D.
These conditions were used to choose parameter values in the simulations (figure 4).
6
DISCUSSION AND CONCLUSION
The VOR model analyzed here is a static model without time delays and multiple time
scales. We are currently studying how these factors affect the time course of learning in a
dynamical model of the VOR and smooth pursuit.
In our model, learning occurs in the dark if P ≠ 0, which has not been observed in animals.
One way to avoid learning in the dark when P is firing would be to gate the learning by a
visual input, such as that provided by climbing fibers.
The responses of vestibular afferents to head motion can be classified into two categories:
phasic-tonic and tonic. In this model, only the tonic afferents were represented. Both
afferent types encode head velocity, while the phasic-tonic responds to head acceleration as
well. The steady state VOR gain can also be changed by altering the relative proportions
of phasic and tonic afferents to the Purkinje cells (Lisberger and Sejnowski, 1992). We are
currently investigating learning rules for which this occurs.
The model predicts that adaptation in the cerebellum is faster than in the vestibular nucleus,
and that learning in the vestibular nucleus is mostly driven by the climbing fiber error
signals.
The model shows how the dynamics of the whole system can lead to long-term adaptation
Biologically Plausible Local Learning Rules for Adaptation of Vestibulo-Ocular Reflex
[Figure 5 plots, for desired gain g = 1.6, with q = 1 (left column) and q = 0 (right column): top, gains A, D and VOR gain G versus time; bottom, P and N responses to a head turn during learning versus time.]
Figure 5: Effect of q on learning curves for gain increase. Left: q = 1 leads to an overshoot in
the VOR gain G above the desired gain. D increases up to the desired gain, P starts from 0 and
asymptotically goes back to 0; both indicate that learning is totally transferred from P to N. Right:
With q = 0, there is no overshoot in the VOR gain, but since A decreases to a constant value and
D only increases very slightly, learning is not transferred. Consequently, P firing rate stays constant
after an initial drop.
[Figure 6 plot: surface of the overshoot ε of the VOR gain G as a function of q and r, together with its analytic expression in terms of b, v, q, and r.]
Figure 6: Overshoot ε of the VOR gain G as a function of q and r. The parameter q is the proportion
of learning to node N (vestibular nucleus), coming from the P node (cerebellum) compared to learning
from climbing fibers. The parameter r is the ratio of the learning rates η_A and η_D. No overshoot is
seen in animals, which restricts the parameter space of q and r for the model to be valid. Note that
the overshoot diverges for some parameter values.
which differs from what may be expected from the local learning rules at the synapses
because of differences in time scales and shifts of activity in the system during learning.
This may reconcile apparently contradictory evidence between local learning rules observed in vitro (Ito, 1970) and the long-term adaptation seen in vivo in animals (Miles and
Lisberger, 1981).
967
968
Coenen, Sejnowski, and Lisberger
Acknowledgments
O.C. was supported by NSERC during this research.
References
Albus, J. S. (1971). A theory of cerebellar function. Math. Biosci., 10:25-61.
Arnold, D. B. and Robinson, D. A. (1992). A neural network model of the vestibulo-ocular reflex using a local
synaptic learning rule. Phil. Trans. R. Soc. Lond. B, 337:327-330.
Fujita, M. (1982). Simulations of adaptive modification of the vestibulo-ocular reflex with an adaptive filter model
of the cerebellum. Biological Cybernetics, 45:207-214.
Galiana, H. L. (1986). A new approach to understanding adaptive visual-vestibular interactions in the central
nervous system. Journal of Neurophysiology, 55:349-374.
Ito, M. (1970). Neurophysiological aspects of the cerebellar motor control system. Int.J.Neurol., 7:162-176.
Ito, M., Sakurai, M., and Tongroach, P. (1982). Climbing fibre induced depression of both mossy fibre responsiveness and glutamate sensitivity of cerebellar Purkinje cells. J. Physiol. Lond., 324:113-134.
Kawato, M. and Gomi, H. (1992). The cerebellum and VOR/OKR learning models. Trends in Neuroscience,
15:445-453.
Lisberger, S. G. (1988). The neural basis for learning of simple motor skills. Science, 242:728-735.
Lisberger, S. G. (1992). Neural basis for motor learning in the vestibulo-ocular reflex of primates: IV. The sites of
learning. In preparation.
Lisberger, S. G. and Sejnowski, T. J. (1992). Computational analysis suggests a new hypothesis for motor learning
in the vestibulo-ocular reflex. Technical Report 9201, INC, Univ. of California, San Diego.
Marr, D. (1969). A theory of cerebellar cortex. J. Physiol., 202:437-470.
Melvill Jones, G. M. (1991). The Vestibular Contribution, volume 8 of Vision and Visual Dysfunction, chapter 2,
pages 293-303. CRC Press, Inc., Boston. General Editor: J. R. Cronly-Dillon.
Miles, F. A. and Eighmy, B. B. (1980). Long-term adaptive changes in primate vestibulo-ocular reflex. I. Behavioural observations. Journal of Neurophysiology, 43:1406-1425.
Miles, F. A. and Lisberger, S. G. (1981). Plasticity in the vestibulo-ocular reflex: A new hypothesis. Ann. Rev.
Neurosci., 4:273-299.
Quinn, K. J., Baker, J., and Peterson, B. (1992). Simulation of cerebellar-vestibular interactions during VOR
adaptation. In Program 22nd Annual Meeting. Society for Neuroscience.
Quinn, K. J., Schmajuk, N., Jain, A., Baker, J. F., and Peterson, B. W. (1992). Vestibuloocular reflex arc analysis
using an experimentally constrained network. Biological Cybernetics, 67:113-122.
Viola, P. A., Lisberger, S. G., and Sejnowski, T. J. (1992). Recurrent eye tracking network using a distributed
representation of image motion. In Moody, J. E., Hanson, S. J., and Lippmann, R. P., editors, Advances in
Neural Information Processing Systems 4, San Mateo. Morgan Kaufmann Publishers.
| 704 |@word neurophysiology:2 version:1 longterm:1 proportion:4 nd:1 integrative:1 simulation:6 minus:1 carry:1 initial:1 bd:1 vor:30 physiol:2 subsequent:1 plasticity:3 motor:4 drop:1 v:4 selected:1 nervous:1 short:3 provides:1 math:1 node:13 constructed:1 become:2 pairing:1 indeed:1 expected:2 rapid:3 behavior:1 integrator:1 brain:1 decreasing:2 totally:1 provided:1 baker:2 what:2 monkey:1 magnified:1 transformation:1 biphasic:1 control:1 medical:1 positive:2 local:12 path:1 solely:3 firing:13 modulation:1 studied:1 mateo:1 suggests:1 limited:1 range:1 acknowledgment:1 hughes:1 differs:1 lippman:1 physiology:1 adapting:1 close:2 influence:3 center:1 phil:1 go:5 recovery:1 rule:20 marr:2 mossy:9 variation:1 diego:3 target:1 olivier:1 us:1 slip:4 hypothesis:2 velocity:29 helmholtz:1 magnification:1 trend:1 predicts:1 observed:4 bottom:2 negligibly:1 calculate:1 cycle:2 movement:2 decrease:6 dynamic:1 overshoot:9 depend:1 exposed:1 basis:2 rightward:1 easily:1 indirect:1 represented:3 fiber:22 chapter:1 univ:1 jain:1 sejnowski:11 quite:1 premotor:1 larger:2 plausible:5 sdsc:1 final:1 interplay:1 interaction:2 flocculus:1 coming:1 adaptation:22 achieve:1 albus:2 keck:1 diverges:1 object:4 recurrent:3 soc:1 come:1 indicate:1 qd:3 differ:1 direction:1 filter:1 settle:1 crc:1 potentiation:1 stayed:1 biological:1 equilibrium:1 stabilizes:1 ventral:1 early:1 currently:2 hansen:1 concurrent:1 organ:1 concurrently:1 modified:4 avoid:1 vestibuloocular:1 command:1 conjunction:1 encode:1 mainly:1 equaled:1 sense:1 glass:2 spectacle:5 diminishing:2 reproduce:1 fujita:2 animal:5 constrained:1 equal:5 eliminated:1 represents:3 jones:2 report:1 few:1 retina:3 simultaneously:1 ime:1 phase:2 replacement:1 fire:2 possibility:1 analyzed:2 integral:1 capable:1 experience:1 iv:1 desired:11 increased:3 earlier:1 purkinje:11 sakurai:2 altering:1 delay:3 too:3 fransisco:1 sensitivity:1 ie:2 stay:1 terrence:1 physic:1 off:2 corrects:1 transfered:1 moody:1 squared:1 central:1 choose:1 
retinal:1 int:1 inc:2 dillon:1 afferent:4 depends:2 bg:1 multiplicative:1 later:1 apparently:1 reached:2 start:1 vivo:1 contribution:1 kaufmann:1 ensemble:2 climbing:16 ponto:1 multiplying:1 drive:4 cybernetics:2 gomi:2 ah:1 promptly:1 simultaneous:1 synapsis:3 reach:3 classified:1 synaptic:1 email:1 lde:1 ocular:10 associated:1 static:2 gain:53 amplitude:8 back:1 appears:1 day:2 melvill:1 response:6 synapse:1 box:1 though:1 correlation:1 until:1 effect:3 hence:4 laboratory:1 mile:5 cerebellum:9 during:15 dysfunction:1 steady:1 oc:1 m:2 motion:2 gh:2 image:9 recently:1 kawato:2 stimulation:1 vitro:2 qp:1 oli:1 volume:1 association:1 elevated:1 resting:3 significant:1 biosci:1 consistency:1 moving:2 longer:1 cortex:1 leftward:1 jolla:1 driven:3 certain:1 meeting:1 seen:2 motoneuron:1 greater:1 responsiveness:1 morgan:1 period:2 monotonically:1 signal:23 stephen:1 ii:1 multiple:1 rv:1 stem:1 smooth:5 technical:1 faster:1 calculation:1 compensate:2 long:6 dept:1 controlled:1 prediction:1 vision:1 iteration:4 represent:1 cerebellar:5 cell:10 addition:1 decreased:1 diagram:1 publisher:1 recording:1 induced:1 effectiveness:2 feedforward:1 affect:2 opposite:2 absent:1 shift:1 coenen:5 passing:1 cause:1 constitute:1 depression:2 tune:1 dark:3 locally:1 category:1 reduced:1 restricts:1 inhibitory:3 sign:2 neuroscience:4 delta:1 ipsilateral:1 anatomical:2 demonstrating:1 nevertheless:1 capital:1 asymptotically:2 graph:1 convert:1 fibre:2 letter:2 lime:2 annual:1 activity:1 constraint:2 scene:1 aspect:1 lond:2 relatively:1 transferred:2 department:1 according:3 combination:1 describes:1 remain:1 smaller:1 slightly:1 rev:1 biologically:5 primate:2 happens:1 modification:1 explained:1 xo:2 behavioural:1 equation:1 previously:1 turn:9 mechanism:3 phasic:2 end:1 reversal:1 adopted:2 pursuit:5 available:1 studying:1 quinn:4 tjd:1 gate:1 jd:1 top:3 maintaining:1 society:1 move:1 added:1 occurs:6 responds:1 fovea:1 extent:1 modeled:1 ratio:1 mostly:1 trace:1 negative:3 stated:1 
upper:1 neuron:2 observation:1 howard:1 finite:1 arc:1 t:1 viola:2 neurobiology:1 tonic:5 head:29 compensatory:1 california:3 vestibular:15 robinson:2 address:1 qa:3 trans:1 below:1 dynamical:1 oculomotor:2 program:1 glutamate:1 eye:16 carried:1 understanding:1 relative:2 expect:1 suggestion:1 foundation:1 nucleus:10 h2:2 consistent:1 vestibulo:11 editor:2 course:4 changed:1 supported:1 keeping:1 institute:2 arnold:2 peterson:2 magnifying:3 distributed:1 curve:3 feedback:1 schmajuk:1 valid:1 made:1 adaptive:5 san:5 skill:1 investigating:1 assumed:1 themodel:1 tja:2 transfer:2 ca:3 main:2 neurosci:1 whole:1 reconcile:1 allowed:1 site:2 tl:1 salk:1 slow:1 position:1 explicit:1 xl:2 ito:5 symbol:1 explored:1 neurol:1 physiological:2 evidence:2 galiana:2 magnitude:3 boston:1 neurophysiological:1 visual:19 nserc:1 tracking:1 reflex:11 lisberger:19 determines:2 dh:1 consequently:2 acceleration:1 ann:1 change:10 experimentally:1 included:1 contradictory:1 correlational:1 la:2 preparation:1 wearing:2 ex:1 |
Visual Interaction Networks: Learning a Physics
Simulator from Video
Nicholas Watters, Andrea Tacchetti, Théophane Weber
Razvan Pascanu, Peter Battaglia, Daniel Zoran
DeepMind
London, United Kingdom
{nwatters, atacchet, theophane,
razp, peterbattaglia, danielzoran}@google.com
Abstract
From just a glance, humans can make rich predictions about the future of a wide
range of physical systems. On the other hand, modern approaches from engineering,
robotics, and graphics are often restricted to narrow domains or require information
about the underlying state. We introduce the Visual Interaction Network, a generalpurpose model for learning the dynamics of a physical system from raw visual
observations. Our model consists of a perceptual front-end based on convolutional
neural networks and a dynamics predictor based on interaction networks. Through
joint training, the perceptual front-end learns to parse a dynamic visual scene into
a set of factored latent object representations. The dynamics predictor learns to roll
these states forward in time by computing their interactions, producing a predicted
physical trajectory of arbitrary length. We found that from just six input video
frames the Visual Interaction Network can generate accurate future trajectories of
hundreds of time steps on a wide range of physical systems. Our model can also
be applied to scenes with invisible objects, inferring their future states from their
effects on the visible objects, and can implicitly infer the unknown mass of objects.
This work opens new opportunities for model-based decision-making and planning
from raw sensory observations in complex physical environments.
1
Introduction
Physical reasoning is a core domain of human knowledge [22] and among the earliest topics in AI
[24, 25]. However, we still do not have a system for physical reasoning that can approach the abilities
of even a young child. A key obstacle is that we lack a general-purpose mechanism for making
physical predictions about the future from sensory observations of the present. Overcoming this
challenge will help close the gap between human and machine performance on important classes
of behavior that depend on physical reasoning, such as model-based decision-making [3], physical
inference [13], and counterfactual reasoning [10, 11].
We introduce the Visual Interaction Network (VIN), a general-purpose model for predicting future
physical states from video data. The VIN is learnable and can be trained from supervised data
sequences which consist of input image frames and target object state values. It can learn to
approximate a range of different physical systems which involve interacting entities by implicitly
internalizing the rules necessary for simulating their dynamics and interactions.
The VIN model is comprised of two main components: a visual encoder based on convolutional
neural networks (CNNs) [17], and a recurrent neural network (RNN) with an interaction network (IN)
[2] as its core, for making iterated physical predictions. Using this architecture we are able to learn a
model which infers object states and can make accurate predictions about these states in future time
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
steps. We show that this model outperforms various baselines and can generate compelling future
rollout trajectories.
1.1
Related work
One approach to learning physical reasoning is to train models to make state-to-state predictions.
One early algorithm using this approach was the "NeuroAnimator" [12], which was able to simulate
articulated bodies. Ladicky et al. [16] proposed a learned model for simulating fluid dynamics based
on regression forests. Battaglia et al. [2] introduced a general-purpose learnable physics engine,
termed an Interaction Network (IN), which could learn to predict gravitational systems, rigid body
dynamics, and mass-spring systems. Chang et al. [7] introduced a similar model in parallel that could
likewise predict rigid body dynamics.
Another class of approaches learn to predict summary physical judgments and produce simple
actions from images. There have been several efforts [18, 19] which used CNN-based models to
predict whether a stack of blocks would fall. Mottaghi et al. [20, 21] predicted coarse, image-space
motion trajectories of objects in real images. Several efforts [4, 6, 26, 27] have fit the parameters
of Newtonian mechanics equations to systems depicted in images and videos, though the dynamic
equations themselves were not learned. Agrawal et al. [1] trained a system that learns to move objects
by poking.
A third class of methods [5, 8, 9, 23], like our Visual Interaction Network, have been used to predict
future state descriptions from pixels. However, in contrast to the Visual Interaction Network, these
models have to be tailored to the particular physical domain of interest, are only effective over a few
time steps, or use side information such as object locations and physical constraints at test time.
2
Model
The Visual Interaction Network (VIN) learns to produce future trajectories of objects in a physical
system from video frames of that system. The VIN is depicted in Figure 1, and consists of the
following components:
• The visual encoder takes a triplet of frames as input and outputs a state code. A state code
is a list of vectors, one for each object in the scene. Each of these vectors is a distributed
representation of the position and velocity of its corresponding object. We apply the encoder
in a sliding window over a sequence of frames, producing a sequence of state codes. See
Section 2.1 and Figure 2a for details.
• The dynamics predictor takes a sequence of state codes (output from a visual encoder
applied in a sliding-window manner to a sequence of frames) and predicts a candidate state
code for the next frame. The dynamics predictor is comprised of several interaction-net
cores, each taking input at a different temporal offset and producing candidate state codes.
These candidates are aggregated by an MLP to produce a predicted state code for the next
frame. See Section 2.2 and Figure 2b for details.
• The state decoder converts a state code to a state. A state is a list of each object's position/velocity vector. The training targets for the system are ground truth states. See Section
2.3 for details.
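The way these three components compose at prediction time can be sketched as follows (hypothetical function names and stand-in shapes, not the paper's API; the three-frame window and code sizes are illustrative):

```python
import numpy as np

def rollout(frames, encoder, predictor, decoder, steps):
    """Encode a sliding window of frames, then iterate the dynamics predictor."""
    # one state code per consecutive frame triplet
    codes = [encoder(frames[i:i + 3]) for i in range(len(frames) - 2)]
    trajectory = []
    for _ in range(steps):
        nxt = predictor(codes)           # aggregates several temporal offsets
        trajectory.append(decoder(nxt))  # position/velocity per object
        codes = codes[1:] + [nxt]        # slide the window of state codes
    return trajectory

# Stand-ins with plausible shapes: 6 frames, 3 objects, code length 16.
frames = [np.zeros((32, 32, 3)) for _ in range(6)]
encoder = lambda triplet: np.zeros((3, 16))
predictor = lambda codes: codes[-1] + 1.0
decoder = lambda code: code[:, :4]
traj = rollout(frames, encoder, predictor, decoder, steps=8)
```

Because each predicted code is pushed back into the window, the trajectory can be extended to arbitrary length from a fixed number of input frames.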
2.1
Visual Encoder
The visual encoder is a CNN that produces a state code from a sequence of 3 images. It has a
frame pair encoder Epair shown in Figure 2a which takes a pair of consecutive frames and outputs
a candidate state code. This frame pair encoder is applied to both consecutive pairs of frames in
a sequence of 3 frames. The two resulting candidate state codes are aggregated by a shared MLP
applied to the concatenation of each pair of slots. The result is an encoded state code. Epair itself
applies a CNN with two different kernel sizes to a channel-stacked pair of frames, appends constant
x, y coordinate channels, and applies a CNN with alternating convolutional and max-pooling layers
until unit width and height. The resulting matrix of shape 1 × 1 × (Nobject · Lcode) is reshaped into
a state code of shape Nobject × Lcode, where Nobject is the number of objects in the scene and Lcode
is the length of each state code slot. The two state codes are fed into an MLP to produce the final
Figure 1: Visual Interaction Network: The general architecture is depicted here (see legend on the
bottom right). The visual encoder takes triplets of consecutive frames and produces a state code
for the third frame in each triplet. The visual encoder is applied in a sliding window over the input
sequence to produce a sequence of state codes. Auxiliary losses applied to the decoded output of the
encoder help in training. The state code sequence is then fed into the dynamics predictor which has
several Interaction Net cores (2 in this example) working on different temporal offsets. The outputs
of these Interaction Nets are then fed into an aggregator to produce the prediction for the next time
step. The core is applied in a sliding window manner as depicted in the figure. The predicted state
codes are linearly decoded and are used in the prediction loss when training.
encoder output from the triplet. See the Supplementary Material for further details of the visual
encoder model.
One important feature of this visual encoder architecture is its weight sharing given by applying
the same Epair on both pairs of frames, which approximates a temporal convolution over the input
sequence. Another important feature is the inclusion of constant coordinate channels (an x- and
y-coordinate meshgrid over the image), which allows positions to be incorporated throughout much
of the processing. Without the coordinate channels, such a convolutional architecture would have to
infer position from the boundaries of the image, a more challenging task.
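The coordinate channels can be constructed as below (a sketch; the normalization of the meshgrid to [−1, 1] is an assumption, as the paper does not specify a range):

```python
import numpy as np

def append_coord_channels(features):
    """Append constant x- and y-coordinate channels to a (C, H, W) tensor."""
    _, height, width = features.shape
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, height),
                         np.linspace(-1.0, 1.0, width), indexing="ij")
    # xs varies along the width axis, ys along the height axis
    return np.concatenate([features, xs[None], ys[None]], axis=0)

feats = np.zeros((8, 32, 32), dtype=np.float32)
out = append_coord_channels(feats)  # shape (10, 32, 32)
```

With these channels present, a convolutional filter can read absolute position directly instead of having to infer it from the image boundaries.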
2.2
Dynamics Predictor
The dynamics predictor is a variant of an Interaction Network (IN) [2]. An IN, summarized in Figure
2b, is a state-to-state physical predictor model that uses a shared relation net on pairs of objects
as well as shared self-dynamics and global affector nets to predict per-object dynamics. The main
difference between our predictor and a vanilla IN is aggregation over multiple temporal offsets. Our
predictor has a set of temporal offsets (in practice we use {1, 2, 4}), with one IN core for each. Given
an input state code sequence, for each offset t a separate IN core computes a candidate predicted state
code from the input state code at index t. An MLP aggregator transforms the list of candidate state
codes into a predicted state code. This aggregator is applied independently to the concatenation over
candidate state codes of each slot and is shared across slots to enforce some consistency of object
representations. See the Supplementary Material for further details of the dynamics predictor model.
As with the visual encoder, we explored many dynamics predictor architectures (some of which we
compare as baselines below). The temporal offset aggregation of this architecture enhances its power
by allowing it to accommodate both fast and slow movements by different objects within a sequence
of frames. See the Supplementary Material for an exploration of the importance of temporal offset
aggregation. The factorized representation of INs, which allows efficient learning of interactions even
in scenes with many objects, is an important contributor to our predictor architecture's performance.
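One IN core's per-slot computation can be sketched as follows (untrained random weights, with single linear layers standing in for the relation, self-dynamics, and affector MLPs; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                                    # length of each state-code slot (assumed)
W_rel = rng.normal(0.0, 0.1, (L, 2 * L))  # shared relation net
W_self = rng.normal(0.0, 0.1, (L, L))     # self-dynamics net
W_aff = rng.normal(0.0, 0.1, (L, L))      # affector (post-processing)

def in_core(slots):
    """slots: (N_objects, L) state code -> candidate (N_objects, L) code."""
    n = slots.shape[0]
    out = np.empty_like(slots)
    for i in range(n):
        rel = sum(W_rel @ np.concatenate([slots[i], slots[j]])
                  for j in range(n) if j != i)   # pairwise interactions
        out[i] = W_aff @ (rel + W_self @ slots[i])
    return out

codes = rng.normal(size=(3, L))
pred = in_core(codes)   # candidate state code for the next frame
```

Because all nets are shared across slots, the core is equivariant to permuting the objects, which is what lets the factorized representation scale to scenes with many objects.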
(b) Interaction Net
(a) Frame Pair Encoder
Figure 2: Frame Pair Encoder and Interaction Net. (a) The frame pair encoder is a CNN which
transforms two consecutive input frames into a state code. Important features are the concatenation of
coordinate channels before pooling to unit width and height. The pooled output is reshaped into a
state code. (b) An Interaction Net (IN) is used for each temporal offset by the dynamics predictor. For
each slot, a relation net is applied to the slot's concatenation with each other slot. A self-dynamics
net is applied to the slot itself. Both of these results are summed and post-processed by the affector to
produce the predicted slot.
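The per-slot computation described in panel (b) can be sketched as follows. The relation, self-dynamics, and affector nets are passed in as placeholders here; in the actual model they are learned networks.

```python
def in_core_step(state_code, relation_net, self_net, affector):
    """state_code: list of slots. For each slot, sum the self-dynamics output
    with the relation-net output over every other slot, then apply the
    affector to produce that slot's prediction."""
    predicted = []
    for i, slot in enumerate(state_code):
        effects = [relation_net(slot, other)
                   for j, other in enumerate(state_code) if j != i]
        total = self_net(slot)
        for e in effects:
            total = [a + b for a, b in zip(total, e)]
        predicted.append(affector(total))
    return predicted
```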
2.3 State Decoder
The state decoder is simply a linear layer with input size L_code and output size 4 (for a position/velocity
vector). This linear layer is applied independently to each slot of the state code. We explored more
complicated architectures, but this yielded the best performance. The state decoder is applied to both
encoded state codes (for auxiliary encoding loss) and predicted state codes (for prediction loss).
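A toy version of this per-slot linear decoding, with hand-picked weights rather than learned ones (the real decoder's weights and bias are trained):

```python
def decode_state(state_code, weights, bias):
    """state_code: list of slots, each a list of floats of length L_code.
    weights: 4 rows of length L_code; bias: length 4. The same linear map
    is applied independently to every slot."""
    def linear(slot):
        return [sum(w * s for w, s in zip(row, slot)) + b
                for row, b in zip(weights, bias)]
    return [linear(slot) for slot in state_code]
```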
3 Experiments
3.1 Physical Systems Simulations
We focused on five types of physical systems with high dynamic complexity but low visual complexity,
namely 2-dimensional simulations of colored objects on natural-image backgrounds interacting with
a variety of forces (see the Supplementary Material for details). In each system the force law is
applied pair-wise to all objects and all objects have the same mass and density unless otherwise
stated.
• Spring Each pair of objects has an invisible spring connection with non-zero equilibrium.
All springs share the same equilibrium and Hooke's constant.
• Gravity Objects are massive and obey Newton's Law of gravity.
• Billiards No long-distance forces are present, but the billiards bounce off each other and off
the boundaries of the field of vision.
• Magnetic Billiards All billiards are positively charged, so instead of bouncing, they repel
each other according to Coulomb's Law. They still bounce off the boundaries.
• Drift No forces of any kind are present. Objects drift with their initial velocities.
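As a concrete illustration of one of these pairwise force laws, here is a 2-D Hooke spring with non-zero rest length. The constants are arbitrary values of our choosing, not the ones used in the simulations.

```python
import math

def spring_force(p1, p2, k, r0):
    """Force on the object at p1 from the spring connecting it to p2,
    with spring constant k and rest length r0."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    magnitude = k * (dist - r0)  # pulls if stretched, pushes if compressed
    return (magnitude * dx / dist, magnitude * dy / dist)
```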
These systems include previously studied gravitational and billiards systems [3, 1] with the added
challenge of natural image backgrounds. For example videos of these systems, see the Supplementary
Material or visit (https://goo.gl/yVQbUa).
One limitation of the above systems is that the positions, masses, and radii of all objects are either
visually observable in every frame or global constants. Furthermore, while occlusion is allowed,
the objects have the same radius so total occlusion never occurs. In contrast, systems with hidden
quantities that influence dynamics abound in the real world. To mimic this, we explored a few
challenging additional systems:
• Springs with Invisibility. In each simulation a random object is not rendered. In this way
a model must infer the location of the invisible object from its effects on the other objects.
• Springs and Billiards with Variable Mass. In each simulation, each object's radius is
randomly generated. This not only causes total occlusion (in the Spring system) but, since
density is held constant, also requires a model to determine each object's mass from its radius.
To simulate each system, we initialized the position and velocity of each ball randomly and used a
physics engine to simulate the resulting dynamics. See the Supplementary Material for more details.
To generate video data, we rendered the system state on top of a CIFAR-10 natural image background.
The background was randomized between simulations. Importantly, we rendered the objects with
15-fold anti-aliasing so the visual encoder could learn to distinguish object positions much more
finely than pixel resolution, as evident by the visual encoder accuracy described in Section 4.1.
For each system we generated a dataset with 3 objects and a dataset with 6 objects. Each dataset had
a training set of 2.5 × 10^5 simulations and a test set of 2.5 × 10^4 simulations, with each simulation
64 frames long. Since we trained on sequences of 14 frames, this ensures we had more than 1 × 10^7
training samples with distinct dynamics. We rendered natural image backgrounds online from
separate training and testing CIFAR-10 sets.
3.2 Baseline Models
We compared the VIN to a suite of baseline and competitor models, including ablation experiments. For each model, we performed hyperparameter sweeps across all datasets and chose the
hyperparameter set with the lowest average test loss.
The Visual RNN has the same visual encoder as the VIN, but the core of its dynamics predictor
is an MLP instead of an IN. Each state code is flattened before being passed to the dynamics predictor.
The dynamics predictor is still treated as a recurrent network with temporal offset aggregation, but
the dynamics predictor no longer supports the factorized representation of the IN core. Without the
weight-sharing of the IN, this model is forced to learn the same force law for each pair of objects,
which is not scalable as the object number increases.
The Visual LSTM has the same visual encoder as the VIN, but its dynamics predictor is an LSTM
[14] with MLP pre- and post-processors. It has no temporal offset aggregation, since the LSTM
implicitly integrates temporal information through state updates. During rollouts, the output state
code from the post-processor MLP is fed into the pre-processor MLP.
The VIN Without Relations is an ablation modification of the VIN. The only difference between
this and the VIN is an omitted relation network in the dynamics predictor cores. Note that there is still
ample opportunity to compute relations between objects (both in the visual encoder and the dynamics
predictor's temporal offset aggregator), just not specifically through the relation network. Note that
we performed a second ablation experiment to isolate the effect of temporal offset aggregation. See
the Supplementary Material for details.
The Vision With Ground-Truth Dynamics model uses a visual encoder and a miniature version
of the dynamics predictor to predict not the next-step state but the current-step state (i.e. the state
corresponding to the last observed frame). Since this predicts static dynamics, we did not train it on
rollouts. However, when testing, we fed the static state estimation into a ground-truth physics engine
to generate rollouts. This model is not a fair comparison to the other models because it does not learn
dynamics. It serves instead as a performance bound imposed by the visual encoder. We normalized
our results by the performance of this model, as described in Section 4.
All models described above learn state from pixels. However, we also trained two baselines with
privileged information: IN from State and LSTM from State models, which have the IN and LSTM
dynamics predictors, but make their predictions directly from state to state. Hence, they do not have
a visual encoder but instead have access to the ground truth states for observed frames. These, in
combination with the Vision with Ground Truth Dynamics, allowed us to comprehensively test our
model in part and in full.
3.3 Training procedure
Our goal was for the models to accurately predict physical dynamics into the future. As shown in
Figure 1, the VIN lends itself well to long-term predictions because the dynamics predictor can be
treated as a recurrent net and rolled out on state codes. We trained the model to predict a sequence of
8 consecutive unseen future states from 6 frames of input video. Our prediction loss was a normalized
weighted sum of the corresponding 8 error terms. The sum was weighted by a discount factor that
started at 0.0 and approached 1.0 throughout training, so at the start of training the model must only
predict the first unseen state and at the end it must predict an average of all 8 future states. Our
training loss was the sum of this prediction loss and an auxiliary encoding loss, as indicated in Figure
1. The model was trained by backpropagation with an Adam optimizer [15]. See the Supplementary
Material for full training parameters.
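One plausible reading of this discounted prediction loss, as a sketch (the exact weighting scheme is not fully specified here, so the geometric weighting below is an assumption). At discount 0 only the first unseen state contributes; at discount 1 all 8 future states are averaged uniformly, matching the two endpoints described above.

```python
def discounted_prediction_loss(step_errors, discount):
    """step_errors: per-step prediction errors for the unseen future states.
    Weights each error by discount**i and normalizes by the weight sum."""
    weights = [discount ** i for i in range(len(step_errors))]
    total = sum(w * e for w, e in zip(weights, step_errors))
    return total / sum(weights)
```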
4 Results
Our results show that the VIN predicts dynamics accurately, outperforming baselines on all datasets
(see Figures 3 and 4). It is scalable, can accommodate forces with a variety of strengths and distance
ranges, and can infer visually unobservable quantities (invisible object location) from dynamics.
Our model also generates long rollout sequences that are both visually plausible and similar to the
ground-truth physics, even outperforming state-of-the-art state-to-state models on this measure.
4.1 Inverse Normalized Loss
We evaluated the performance of each model with the Inverse Normalized Loss, defined as
L_bound / L_model. Here L_bound is the test loss of the Vision with Ground Truth Dynamics and L_model
is the test loss of the model in question (See Section 3.3). We used this metric because it is much
more interpretable than L_model itself. The Vision with Ground Truth Dynamics produces the best
possible predictions given the visual encoder's error, so the Inverse Normalized Loss always lies in
[0, 1], where a value closer to 1.0 indicates better performance. The visual encoder learned position
predictions accurate to within 0.15% of the framewidth (0.048 times the pixel width), so we have no
concerns about the accuracy of the Vision with Ground Truth Dynamics.
Figure 3 shows the Inverse Normalized Loss on all test datasets after 3 × 10^5 training steps. The VIN
outperforms all baselines on nearly all systems. The only baseline with comparable performance
is the VIN Without Relations on Drift, which matches the VIN's performance. This makes sense,
because the objects do not interact in the Drift system, so the relation net should be unnecessary.
Of particular note is the performance of the VIN on the invisible dataset (spring system with random
invisible object), where its performance is comparable to the fully visible 3-object Spring system. It
can locate the invisible object's position to within 4% of the frame width (1.3 times the pixel width)
for the first 8 rollout steps.
Figure 3: Performance. We compare our model's Inverse Normalized Loss to that of the baselines
on all test datasets. 3-object datasets are on the upper row, and 6-object datasets are on the lower row.
By definition of the Inverse Normalized Loss, all values are in [0, 1] with 1.0 being the performance
of a ground-truth simulator given the visual encoder. The VIN (red) outperforms every baseline on
every dataset (except the VIN Without Relations on Drift, the system with no object interactions).
4.2 Euclidean Prediction Error of Rollout Positions
One important desirable feature of a physical predictor is the ability to extrapolate from a short
input video. We addressed this by comparing performance of all models on long rollout sequences
and observing the Euclidean Prediction Error. To compute the Euclidean Prediction Error from a
predicted state and ground-truth state, we calculated the mean over objects of the Euclidean norm
between the predicted and true position vectors.
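This definition translates directly into code:

```python
import math

def euclidean_prediction_error(predicted, true):
    """Mean over objects of the Euclidean norm between predicted and true
    2-D position vectors (one (x, y) pair per object)."""
    dists = [math.hypot(p[0] - t[0], p[1] - t[1])
             for p, t in zip(predicted, true)]
    return sum(dists) / len(dists)
```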
We computed the Euclidean Prediction Error at each step over a 50-timestep rollout sequence. Figure
4 shows the average of this quantity over all 3-object test datasets with respect to both timestep and
object distance traveled. The VIN outperforms all other models, including the IN from State and
LSTM from State even though they have access to privileged information. This demonstrates the
remarkable robustness and generalization power of the VIN. We hypothesize that it outperforms
state-to-state models in part because its dynamics predictor must tolerate visual encoder noise during
training. This noise-robustness translates to rollouts, where the dynamics predictor remains accurate
even as its predictions deviate from true physical dynamics. The state-to-state methods are not trained
on noisy state inputs, so they exhibit poorer generalization. See the Supplementary Material for a
dataset-specific quantification of these results.
Figure 4: Euclidean Prediction Error on 3-object datasets. We compute the mean over all test
datasets of the Euclidean Prediction Error for 50-timestep rollouts. The VIN outperforms all other
pixel-to-state models (solid lines) and state-to-state models (dashed lines). Error bars show 95%
confidence intervals. (a) Mean Euclidean Prediction Error with respect to object distance traveled
(measured as a fraction of the frame-width). The VIN is accurate to within 6% after objects have
traversed 0.72 times the framewidth. (b) Mean Euclidean Prediction Error with respect to timestep.
The VIN is accurate to within 7.5% after 50 timesteps. The optimal information-less predictor
(predicting all objects to be at the frame's center) has an error of 37%, higher than all models.
4.3 Visualized Rollouts
To qualitatively evaluate the plausibility of the VIN's rollout predictions, we generated videos by
rendering the rollout predictions. These are best seen in video format, though we show them in
trajectory-trail images here as well. The backgrounds made trajectory-trails difficult to see, so
we masked the background (only for rendering purposes). Trajectory trails are shown for rollouts
between 40 and 60 time steps, depending on the dataset.
We encourage the reader to view the videos at (https://goo.gl/RjE3ey). Those include the CIFAR
backgrounds and show very long rollouts of up to 200 timesteps, which demonstrate the VIN's
extremely realistic predictions. We find no reason to doubt that the predictions would continue to be
visually realistic (if not exactly tracking the ground-truth simulator) ad infinitum.
5 Conclusion
Here we introduced the Visual Interaction Network and showed that it can infer the physical states of
multiple objects from video input and make accurate predictions about their future trajectories. The
model uses a CNN-based visual encoder to obtain accurate measurements of object states in the scene.
The model also harnesses the prediction abilities and relational computation of Interaction Networks,
providing accurate predictions far into the future. We have demonstrated that our model performs
well on a variety of physical systems and is robust to visual complexity and partially observable data.
[Table 1 grid omitted: for each system (Spring, Gravity, Magnetic Billiards, Billiards, Drift), a sample frame plus true and predicted trajectories are shown for the 3-object and 6-object regimes.]
Table 1: Rollout Trajectories. For each of our datasets, we show a sample frame, an example true
future trajectory, and a corresponding predicted rollout trajectory (for 40-60 frames, depending on
the dataset). The left half shows the 3-object regime and the right half shows the 6-object regime. For
visual clarity, all objects are rendered at a higher resolution here than in the training input.
One property of our model is the inherent presence of noise from the visual encoder. In contrast to
state-to-state models such as the Interaction Net, here the dynamics predictor's input is inherently
noisy due to the discretization of our synthetic dataset rendering. Surprisingly, this noise seemed
to confer an advantage because it helped the model learn to overcome temporally compounding
errors generated by inaccurate predictions. This is especially notable for long-term rollouts, where
we achieve performance that surpasses even a pure state-to-state Interaction Net. Since this
dependence on noise would be inherent in any model operating on visual input, we postulate that this
is an important feature of any prediction model and warrants further research.
While experimentation with variable number of objects falls outside the scope of the material
presented here, this is an important direction that could be explored in further work. Importantly, INs
generalize out of the box to scenes with a variable number of objects. Should the present form of the
perceptual encoder be insufficient to support this type of generalization, this could be addressed by
using an attentional encoder and order-agnostic loss function.
Our Visual Interaction Network provides a step toward understanding how representations of objects,
relations, and physics can be learned from raw data. This is part of a broader effort toward understanding how perceptual models support physical predictions and how the structure of the physical
world influences our representations of sensory input, which will help AI research better capture the
powerful object- and relation-based system of reasoning that supports humans' powerful and flexible
general intelligence.
Acknowledgments
We thank Max Jaderberg, David Reichert, Daan Wierstra, and Koray Kavukcuoglu for helpful
discussions and insights.
References
[1] Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Learning to poke by
poking: Experiential learning of intuitive physics. arXiv preprint arXiv:1606.07419, 2016.
[2] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for
learning about objects, relations and physics. In Advances in Neural Information Processing Systems,
pages 4502–4510, 2016.
[3] Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical
scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327–18332, 2013.
[4] Kiran Bhat, Steven Seitz, Jovan Popović, and Pradeep Khosla. Computing the physical parameters of
rigid-body motion from video. Computer Vision – ECCV 2002, pages 551–565, 2002.
[5] Apratim Bhattacharyya, Mateusz Malinowski, Bernt Schiele, and Mario Fritz. Long-term image boundary
extrapolation. arXiv preprint arXiv:1611.08841, 2016.
[6] Marcus A Brubaker, Leonid Sigal, and David J Fleet. Estimating contact dynamics. In Computer Vision,
2009 IEEE 12th International Conference on, pages 2389–2396. IEEE, 2009.
[7] Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016.
[8] Sebastien Ehrhardt, Aron Monszpart, Niloy J Mitra, and Andrea Vedaldi. Learning a physical long-term
predictor. arXiv preprint arXiv:1703.00247, 2017.
[9] Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive
models of physics for playing billiards. arXiv preprint arXiv:1511.07404, 2015.
[10] Tobias Gerstenberg, Noah Goodman, David A Lagnado, and Joshua B Tenenbaum. Noisy newtons:
Unifying process and dependency accounts of causal attribution. In Proceedings of the 34th. Citeseer,
2012.
[11] Tobias Gerstenberg, Noah Goodman, David A Lagnado, and Joshua B Tenenbaum. From counterfactual
simulation to causal judgment. In CogSci, 2014.
[12] Radek Grzeszczuk, Demetri Terzopoulos, and Geoffrey Hinton. Neuroanimator: Fast neural network
emulation and control of physics-based models. In Proceedings of the 25th annual conference on Computer
graphics and interactive techniques, pages 9–20. ACM, 1998.
[13] Jessica B Hamrick, Peter W Battaglia, Thomas L Griffiths, and Joshua B Tenenbaum. Inferring mass in
complex scenes by mental simulation. Cognition, 157:61–76, 2016.
[14] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780,
1997.
[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[16] Lubor Ladicky, SoHyeon Jeong, Barbara Solenthaler, Marc Pollefeys, Markus Gross, et al. Data-driven
fluid simulations using regression forests. ACM Transactions on Graphics (TOG), 34(6):199, 2015.
[17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[18] Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. arXiv
preprint arXiv:1603.01312, 2016.
[19] Wenbin Li, Seyedmajid Azimi, Aleš Leonardis, and Mario Fritz. To fall or not to fall: A visual approach to
physical stability prediction. arXiv preprint arXiv:1604.00066, 2016.
[20] Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, and Ali Farhadi. Newtonian scene
understanding: Unfolding the dynamics of objects in static images. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 3521–3529, 2016.
[21] Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. "What happens if..." Learning
to predict the effect of forces in images. In European Conference on Computer Vision, pages 269–285.
Springer, 2016.
[22] Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental science, 10(1):89–96, 2007.
[23] Russell Stewart and Stefano Ermon. Label-free supervision of neural networks with physics and domain
knowledge. arXiv preprint arXiv:1609.05566, 2016.
[24] Terry Winograd. Procedures as a representation for data in a computer program for understanding natural
language. Technical report, DTIC Document, 1971.
[25] Patrick H Winston. Learning structural descriptions from examples. 1970.
[26] Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. Physics 101:
Learning physical object properties from unlabeled videos. psychological science, 13(3):89–94, 2016.
[27] Jiajun Wu, Ilker Yildirim, Joseph J Lim, Bill Freeman, and Josh Tenenbaum. Galileo: Perceiving physical
object properties by integrating a physics engine with deep learning. In Advances in neural information
processing systems, pages 127–135, 2015.
Reconstruct & Crush Network
Erin? Merdivan1,2 , Mohammad Reza Loghmani3 and Matthieu Geist4
1
AIT Austrian Institute of Technology GmbH, Vienna, Austria
2
LORIA (Univ. Lorraine & CNRS), CentraleSupélec, Univ. Paris-Saclay, 57070 Metz, France
3
Vision4Robotics lab, ACIN, TU Wien, Vienna, Austria
4
Université de Lorraine & CNRS, LIEC, UMR 7360, Metz, F-57070 France
[email protected], [email protected]
[email protected]
Abstract
This article introduces an energy-based model that is adversarial regarding data:
it minimizes the energy for a given data distribution (the positive samples) while
maximizing the energy for another given data distribution (the negative or unlabeled
samples). The model is especially instantiated with autoencoders where the energy,
represented by the reconstruction error, provides a general distance measure for
unknown data. The resulting neural network thus learns to reconstruct data from the
first distribution while crushing data from the second distribution. This solution can
handle different problems such as Positive and Unlabeled (PU) learning or covariate
shift, especially with imbalanced data. Using autoencoders allows handling a large
variety of data, such as images, text or even dialogues. Our experiments show
the flexibility of the proposed approach in dealing with different types of data in
different settings: images with CIFAR-10 and CIFAR-100 (not-in-training setting),
text with Amazon reviews (PU learning) and dialogues with Facebook bAbI (next
response classification and dialogue completion).
1 Introduction
The main purpose of machine learning is to make inferences about unknown data based on encoded
dependencies between variables learned from known data. Energy-based learning [16] is a framework
that achieves this goal by using an energy function that maps each point of an input space to a
single scalar, called energy. The fact that energy-based models are not subject to the normalizability
condition of probabilistic models makes them a flexible framework for dealing with tasks such as
prediction or classification.
In the recent years, with the advancement of deep learning, astonishing results have been achieved in
classification [15, 25, 8, 26]. These solutions focus on the standard setting, in which the classifier
learns to discriminate between K classes, based on the underlying assumption that the training and
test samples belong to the same distribution. This assumption is violated in many applications in
which the dynamic nature [6] or the high cardinality [19] of the problem prevents the collection of a
representative training set. In the literature, this problem is referred to as covariate shift [7, 24].
In this work, we address the covariate shift problem by explicitly learning features that define the
intrinsic characteristics of a given class of data rather than features that discriminate between different
classes. The aim is to distinguish between samples of a positive class (A) and samples that do not
belong to this class (¬A), even when test samples are not drawn from the same distribution as the
training samples. We achieve this goal by introducing an energy-based model that is adversarial
regarding data: it minimizes the energy for a given data distribution (the positive samples) while
maximizing the energy for another given data distribution (the negative or unlabeled samples). The
model is instantiated with autoencoders because of their ability to learn data manifolds.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In summary, our contributions are the following:
• a simple energy-based model dealing with the A/¬A classification problem by providing a
distance measure of unknown data as the energy value;
• a general framework that can deal with a large variety of data (images, text and sequential
data) by using features extracted from an autoencoder architecture;
• a model that implicitly addresses the imbalanced classification problem;
• state-of-the-art results for the dialogue completion task on the Facebook bAbI dataset and
competitive results for the general A/¬A classification problem using different datasets such
as CIFAR-10, CIFAR-100 and Amazon Reviews.
The next section introduces the proposed "reconstruct & crush" network, section 3 positions our
approach compared to related works, section 4 presents the experimental results on the aforementioned
problems and section 5 draws the conclusions.
2 Model
Let us define p_pos as the probability distribution producing positive samples, x_pos ∼ p_pos. Similarly, write
p_neg for the distribution of negative samples, x_neg ∼ p_neg. More generally, these negative samples can be
unlabeled samples (possibly containing positive samples). This case will be considered empirically,
but we keep this notation for now.
Let N denote a neural network that takes as input a sample x and outputs a (positive) energy value E:

N(x) = E ∈ R+.

The proposed approach aims at learning a network N that assigns low energy values to positive
samples (N(x_pos) small for x_pos ∼ p_pos) and high energy values to negative samples (N(x_neg) high
for x_neg ∼ p_neg).
Let m > 0 be a user-defined margin; we propose to use the following loss L and associated risk R:

L(x_pos, x_neg; N) = N(x_pos) + max(0, m − N(x_neg))

R(N) = E_{x_pos ∼ p_pos, x_neg ∼ p_neg} [L(x_pos, x_neg)]
     = E_{x_pos ∼ p_pos}[N(x_pos)] + E_{x_neg ∼ p_neg}[max(0, m − N(x_neg))].    (1)
Ideally, minimizing this risk amounts to having no reconstruction error over positive samples and a
reconstruction error greater than m (in expectation) over negative samples. The second term of the
risk acts as a regularizer that enforces the network to assign a low energy only to positive samples.
The choice of the margin m will affect the behavior of the network: if m is too small, a low energy
will be assigned to all inputs (both positive and negative), while if m is too large, assigning a large
energy to negative samples can prevent the network from reconstructing the positive ones.
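As a concrete illustration, the per-pair loss of Eq. (1) can be sketched in a few lines. This is a minimal sketch only: `margin` plays the role of m, and the two energies are assumed to be precomputed scalars.

```python
def rcn_loss(energy_pos, energy_neg, margin=1.0):
    """Per-pair loss of Eq. (1): keep positive energies low,
    push negative energies above the margin m."""
    return energy_pos + max(0.0, margin - energy_neg)

# A well-separated pair contributes only the (small) positive energy,
# since the hinge term vanishes when energy_neg exceeds the margin:
print(rcn_loss(0.05, 2.0))  # prints 0.05
```

Note how the hinge saturates: once a negative sample's energy is above m, pushing it further up brings no extra reward, which is what limits the "crushing" pressure discussed above.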
We specialize our model with autoencoders, which are a natural choice to represent energy-based
models. An autoencoder is composed of two parts, the encoder (Enc) that projects the data into an
encoding space, and the decoder (Dec) that reconstructs the data from this projection:

Enc : X → Z
Dec : Z → X

argmin_{Enc, Dec} ‖x − Dec(Enc(x))‖².
Here, X is the space of the input data (either positive or negative) and Z is the space of encoded data.
In this setting, the reconstruction error of a sample x can be interpreted as the energy value associated
to that sample:
N(x) = ‖x − Dec(Enc(x))‖² = E.
Our resulting reconstruct & crush network (RCN) is thus trained to assign a low reconstruction error
to x_pos (reconstruct) and a high reconstruction error to x_neg (crush).
Any stochastic gradient descent method can be used to optimize the risk of Eq. (1), the mini-batches
of positive and negative samples being sampled independently from the corresponding distributions.
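For intuition, the autoencoder energy and an empirical estimate of the risk in Eq. (1) over two mini-batches can be sketched as follows. This is a toy linear autoencoder with tied random weights, purely illustrative and not one of the trained architectures used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2                             # input and code dimensions (arbitrary toy sizes)
W = 0.1 * rng.standard_normal((k, d))   # Enc(x) = W x, Dec(z) = W^T z (tied weights)

def energy(x):
    """N(x) = ||x - Dec(Enc(x))||^2, i.e. the reconstruction error."""
    x_hat = W.T @ (W @ x)
    return float(np.sum((x - x_hat) ** 2))

def batch_risk(X_pos, X_neg, m=1.0):
    """Empirical risk of Eq. (1) over mini-batches of positive
    and negative samples drawn independently."""
    pos_term = float(np.mean([energy(x) for x in X_pos]))
    neg_term = float(np.mean([max(0.0, m - energy(x)) for x in X_neg]))
    return pos_term + neg_term
```

In an actual training loop, any gradient-based optimizer would update the encoder/decoder weights to decrease `batch_risk`; here the weights are fixed and only the objective is illustrated.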
3 Related work
With the diffusion of deep neural networks, autoencoders have received a new wave of attention due
to their use for layer-wise pretraining [1]. Although the concept of autoencoders goes back to the
80s [23, 3, 10], many variations have been proposed more recently, such as denoising autoencoder [27],
stacked autoencoders [9] or variational autoencoders [13].
Although the use of autoencoders for pretraining is no longer common practice, various lines of research still take advantage of their properties. In energy-based generative adversarial networks
(EBGAN) [30], an autoencoder architecture is used to discriminate between real samples and "fake"
ones produced by the generator. Despite not being a generative model, our method shares with
EBGAN the interpretation of the reconstruction error provided by the autoencoder as energy value
and the fundamentals of the discriminator loss. However, instead of the samples produced by the
generator network, we use negative or unlabeled samples to push the autoencoder to discover the
data manifold during training. In other words, EBGAN searches for a generative model by training
adversarial networks, while in our framework the network tries to make two distributions adversarial.
The use of unlabeled data (that could contain both positive and negative samples) together with
positive samples during training is referred to as PU (Positive and Unlabeled) learning [5, 17]. In
the literature, works in the PU learning setting [29, 18] focus on text-based applications. Instead, we
show in the experiments that our work can be applied to different types of data such as images, text
and sequential data.
Similarly to our work, [11] uses the reconstruction error as a measure to differentiate between positive
and negative samples. However they train their network with either positive or negative data only.
In addition, instead of end-to-end training, they provide a two-stage process in which a classifier is
trained to discriminate between positive and negative samples based on the reconstruction error.
In the context of dialogue management systems, the score proposed in [21] has been used as a quality
measure of the response. Nevertheless, [19] shows that this score fails when a correct response, that
largely diverges from the ground truth, is given. The energy value of the RCN is a valid score to
discriminate between good and bad responses, as we show in section 4.4.
4 Experimental results
In this section, we evaluate the proposed RCN on various tasks with various kinds of data. We
consider a not-in-training setting for CIFAR-10 and CIFAR-100 (sections 4.1 and 4.2), a PU learning
setting for the amazon reviews dataset (section 4.3) and a dialogue completion setting for the Facebook
bAbI dataset (section 4.4).
For an illustrative purpose, we also provide examples of reconstructed and crushed images from
CIFAR-10 and CIFAR-100 in figure 1, corresponding to experiments of sections 4.1 and 4.2.
4.1 CIFAR-10
CIFAR-10 consists of 60k 32x32 color images in 10 classes, with 6k images per class. There are 50k
training images and 10k test images [14]. We converted the images to gray-scale and used 5k images
per class.
This set of experiments belongs to the not-in-training setting [6]: the training set contains positive
and negative samples and the test set belongs to a different distribution than the training set. The
"automobile" class is used as the positive class (A) and the rest of the classes are considered to be the
negative class (¬A) (binary classification problem). All the training samples are used for training,
except for those belonging to the "ship" class. Test samples of "automobile" and "ship" are used for
testing. It is worth noticing that the size of positive and negative training sets is highly imbalanced:
5k positive samples and 40k negative samples.
In this experiment, we show the superior performances of our network with respect to standard
classifiers in dealing with images of an unseen class. Since we are dealing with a binary classification
problem, we define a threshold T for the energy value. This threshold is used in RCN to distinguish
between the positive and the negative class.

Figure 1: Illustrations of Reconstructed and Crushed images by RCN from CIFAR-10 and CIFAR-100.

For our autoencoder, we used a convolutional network defined as: (32)3c1s-(32)3c1s-(64)3c2s-(64)3c2-(32)3c1s-512f-1024f, where "(32)3c1s" denotes a convolution layer with 32 output feature maps, kernel size 3 and stride 1, and "512f" denotes a fully-connected layer with 512 hidden units. The size of the last layer corresponds to the size
of the images (32x32=1024). For standard classification we add on top of the last layer another
fully-connected layer with 2 output neurons (A/¬A). The choice of the architectures for standard
classifier and autoencoder is driven by necessity of fair comparison. ReLU activation functions are
used for all the layers except for the last fully-connected layer of the standard classifier in which a
Softmax function is used. These models are implemented in Tensorflow and trained with the adam
optimizer [12] (learning rate of 0.0004) and a mini-batch size of 100 samples. The margin m was set
to 1.0 and the threshold T to 0.5.
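Given the trained network, binary classification then reduces to comparing the energy against T. The sketch below is illustrative only; the energy values are stand-ins for actual reconstruction errors, and the `rates` helper mirrors the TPR/TNR definitions used in the table below.

```python
def classify(energy_value, threshold=0.5):
    """Energy below T -> positive class (A); otherwise negative class (not-A)."""
    return "A" if energy_value < threshold else "not-A"

def rates(pos_energies, neg_energies, threshold=0.5):
    """True positive rate and true negative rate from lists of energies."""
    tpr = sum(e < threshold for e in pos_energies) / len(pos_energies)
    tnr = sum(e >= threshold for e in neg_energies) / len(neg_energies)
    return tpr, tnr
```

Since the margin is m = 1.0, setting T = m/2 = 0.5 places the decision boundary halfway between the target energy of positives (0) and that of crushed negatives (m).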
Table 1 shows the true positive rate (TPR=#(correctly classified cars)/#cars) and the true negative rate
(TNR=#(correctly classified ships)/#ships) obtained by the standard classifier (CNN / CNN-reduced)
and our network (RCN). CNN-reduced shows the performance of the standard classifier when using
the same amount of positive and negative samples. It can be noticed that RCN presents the best TNR
and a TPR comparable to the one of CNN-reduced. These results show that RCN is a better solution
when dealing with not-in-training data. In addition, the TPR and TNR of our method is comparable
despite the imbalanced training set.
Figure 2 clearly shows that not-in-training samples (ship images) are positioned between positive
in-training samples (automobile images) and negative in-training samples (images from all classes
except automobile and ship). It can be noticed that negative in-training samples have a reconstruction
loss close to the margin value 1.0.
Table 1: Performances of standard classifier (CNN / CNN-reduced) and our method (RCN) on
CIFAR-10. The positive class corresponds to "automobile" and the negative class corresponds to
"ship" (unseen during the training phase).

Method       True Positive Rate   True Negative Rate
CNN-reduced  0.82                 0.638
CNN          0.74                 0.755
RCN          0.81                 0.793
Figure 2: Mean reconstruction error over the epochs of positive in-training, negative in-training and
negative not-in-training samples of CIFAR-10.
4.2 CIFAR-100
CIFAR-100 is similar to CIFAR-10, except it has 100 classes containing 600 images each (500 for
training and 100 for testing) [14]. The 100 classes in the CIFAR-100 are grouped into 20 super-classes
with 5 classes each. Each image comes with a pair of labels: the class and the super-class.
In this set of experiments, the "food containers" super-class is used as the positive class (A) and
all the other super-classes are considered to be the negative class (¬A) (binary classification problem).
During training, 4 out of 5 classes belonging to the "food containers" super-class ("bottles", "bowls",
"cans", "cups") are used as the positive training set and 4 out of 5 classes belonging to the "flowers"
super-class ("orchids", "poppies", "roses", "sunflowers") are used as the negative training set. At
test time, two in-training classes ("cups" and "sunflowers"), two not-in-training classes belonging
to "food containers" ("plates") and "flowers" ("tulips") and two not-in-training classes belonging to
external super-classes ("keyboard" and "chair") are used.
In this experiment, we show the superior performances of our network with respect to standard
classifiers in dealing with data coming from unknown distributions and from unseen modes of the
same distributions as the training data. The same networks and parameters of section 4.1 are used
here.
Table 2 shows the true positive rate (TPR=#(correctly classified plates)/#plates) and the true negative
rate (TNR=#(correctly classified tulips)/#tulips) obtained by the standard classifier (CNN) and our
network (RCN). It can be noticed that RCN presents the best results both for TNR and for TPR.
These results show that RCN is a better solution when dealing with not-in-training data coming from
unseen modes of the data distribution. It is worth noticing that only 4k samples (2k positive and 2k
negative) have been used during training.
Figure 3 clearly shows the effectiveness of the learning procedure of our framework: the network
assigns a low energy value (close to 0) to positive samples, a high energy value (close to m) to negative
samples related to the negative training set and a medium energy value (close to m/2) to negative
samples unrelated to the negative training set.
Table 2: Performances of the standard classifier (CNN) and our method (RCN) on CIFAR-100. The
positive class corresponds to "plates" and the negative class corresponds to "tulips".

Method   True Positive Rate   True Negative Rate
CNN      0.718                0.81
RCN      0.861                0.853
Figure 3: Mean reconstruction error over the epochs of positive in-training and not-in-training
(blue), negative in-training and not-in-training (red) and not-in-training unrelated (green, black) of
CIFAR-100.
4.3 Amazon review
Amazon reviews is a dataset containing product reviews (ratings, text, helpfulness votes) and metadata (descriptions, category information, price, brand, and image features) from Amazon, including
142.8 million reviews [20]. Here, we only use the ratings and text features.
This set of experiments belongs to the PU learning setting: the training set contains positive and
unlabeled data. The positive training set contains 10k "5-star" reviews and the unlabeled training
set contains 10k unlabeled review (containing both positive and negative review). The test set is
composed of 10k samples: 5k "5-star" (positive) reviews and 5k "1-star" (negative) reviews. The aim
here is to show that RCN performs well in the PU learning setting with unlabeled sets with different
positive/negative sample ratios.
For handling the text data, we used the pretrained Glove word-embedding [22] with 100 feature
dimensions. We set the maximum number of words in a sentence to 40 and zero-padded shorter
sentences.
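The fixed-length preprocessing can be sketched as follows. This is illustrative only; here a `<pad>` token stands in for the all-zeros embedding vector used for padding in the actual pipeline.

```python
def pad_tokens(tokens, max_len=40, pad="<pad>"):
    """Truncate sentences longer than max_len and right-pad shorter ones,
    so every sentence maps to exactly max_len embedding slots."""
    tokens = tokens[:max_len]
    return tokens + [pad] * (max_len - len(tokens))
```

After this step, looking up each token in the 100-dimensional GloVe table yields a fixed 40x100 input matrix per review, which the 1D convolutional autoencoder below consumes.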
For our autoencoder, we used a 1-dimensional (1D) convolutional network defined as: (128)7c1s-(128)7c1s-(128)3c1s-(128)3c1-(128)3c1s-2048f-4000f, where "(128)7c1s" denotes a 1D convolution
layer with 128 output feature maps, kernel size 7 and stride 1. ReLU activation functions are used
for all the layers. These models are implemented in Tensorflow and trained with the adam optimizer
(learning rate of 0.0004) and a mini-batch size of 100 samples. The margin m was set to 0.85 and
the threshold T to 0.425.
Table 3 shows the results of different well-established PU learning methods, together with ours
(RCN), on the Amazon review dataset. It can be noticed that, despite the fact that the architecture of
our method is not specifically designed for handling the PU learning setting, it shows comparable
results to the other methods, even when unlabeled training data with a considerable amount of positive
samples (50%) are used.
Table 4 presents some examples from the test set. It can be noticed that positive comments are
assigned a low reconstruction error (energy) and vice-versa.
4.4 Facebook bAbI dialogue
Facebook bAbI dialogue is a dataset containing dialogues related to 6 different tasks in which the
user books a table in a restaurant with the help of a bot [2]. For each task 1k training and 1k test
dialogues are provided. Each dialogue has 4 to 11 turns between the user and the bot for a total of
Table 3: F-measure of positive samples obtained with Roc-SVM [28], Roc-EM [18], Spy-SVM [18],
NB-SVM [18], NB-EM [18] and RCN (ours). The scores are obtained on two different configurations
of the unlabeled training set: one containing 5% of positive samples and one containing 50% of
positive samples.

Method        F-measure for pos. samples (5%–95%)   F-measure for pos. samples (50%–50%)
Roc-SVM [28]  0.92                                  0.89
Roc-EM [18]   0.91                                  0.90
Spy-SVM [18]  0.92                                  0.89
NB-SVM [18]   0.92                                  0.86
NB-EM [18]    0.91                                  0.86
RCN (ours)    0.90                                  0.83
Table 4: Examples of positive (5/5 score) and negative (1/5 score) reviews from Amazon review with
the corresponding reconstruction error assigned from RCN.
Review | Score | Error
excellent funny fast reading i would recommend to all my friends | 5/5 | 0.00054
this is easily one of my favorite books in the series i highly recommend it | 5/5 | 0.00055
super book liked the sequence and am looking forward to a sequel keeping the s and characters would be nice | 5/5 | 0.00060
i truly enjoyed all the action and the characters in this book the interactions between all the characters keep you drawn in to the book | 5/5 | 0.00066
this book was the worst zombie book ever not even worth the review | 1/5 | 1.00627
way too much sex and i am not a prude i did not finish and then deleted the book | 1/5 | 1.00635
in reality it rates no stars it had a political agenda in my mind it was a waste my money | 1/5 | 1.00742
fortunately this book did not cost much in time or money it was very poorly written an ok idea poorly executed and poorly developed | 1/5 | 1.00812
~6k turns in each set (training and test) for task 1 and ~9.5k turns in each set for task 2. Here, we
consider the training and test data associated to tasks 1 and 2 because the other tasks require querying
Knowledge Base (KB) upon user request: this is out of the scope of the paper.
In task 1, the user requests to make a new reservation in a restaurant by defining a query that can
contain from 0 to 4 required fields (cuisine type, location, number of people and price range) and the
bot asks questions for filling the missing fields. In task 2, the user requests to update a reservation in
a restaurant between 1 and 4 times.
The training set is built in such a way that, for each turn in a dialogue, together with the positive
(correct) response, 100 possible negative responses are selected from the candidate set (set of all
bot responses in the Facebook bAbI dialogue dataset with a total of 4212 samples). The test set is
built in such a way that, for each turn in a dialogue, all possible negative responses are selected from
the candidate set. More precisely, for task 1, the test set contains approximately 6k positive and 25
million negative dialogue history-reply pairs, while for task 2, it contains approximately 9k positive
and 38 million negative pairs.
For our autoencoder, we use a gated recurrent unit (GRU) [4] with 1024 hidden units and a projection
layer on top of it in order to replicate the input sequence in the output. An upper limit of 100 was set for
the sequence length and a feature size of 50 was selected for word embeddings. The GRU uses ReLU
activation and a dropout of 0.1. This model is implemented in Tensorflow and trained with the adam
optimizer (learning rate of 0.0004) and a mini-batch size of 100 samples.
In these experiments, our network equals the state-of-the-art performance of the memory networks presented in [2] by achieving 100% accuracy both for next response classification and for dialogue
completion, where a dialogue is considered completed if all responses within the dialogue are
correctly chosen.
5 Conclusions
We have introduced a simple energy-based model, adversarial regarding data by minimizing the
energy of positive data and maximizing the energy of negative data. The model is instantiated with
autoencoders where the specific architecture depends on the considered task, thus providing a family
of RCNs. Such an approach can address various covariate shift problems, such as the not-in-training
and positive-and-unlabeled learning settings, across various types of data.
The efficiency of our approach has been studied with exhaustive experiments on CIFAR-10, CIFAR-100, the Amazon reviews dataset and the Facebook bAbI dialogue dataset. These experiments showed
that RCN can obtain state-of-the-art results for the dialogue completion task and competitive results
for the general A/¬A classification problem. These outcomes suggest that the energy value provided
by RCN can be used to assess the quality of a response given the dialogue history; we plan to study
this aspect further in the near future, in order to provide an alternative metric for dialogue systems
evaluation. Future work will also extend RCN to the multi-class classification setting.
Acknowledgments
This work has been funded by the European Union Horizon2020 MSCA ITN ACROSSING project
(GA no. 616757). The authors would like to thank the members of the project's consortium for their
valuable inputs.
References
[1] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning,
2(1):1–127, 2009.
[2] A. Bordes and J. Weston. Learning end-to-end goal-oriented dialog. arXiv:1605.07683, 2016.
[3] H. Bourlard and Y. Kamp. Auto-association by multilayer perceptrons and singular value
decomposition. Biological Cybernetics, 59(4):291–294, 1988.
[4] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine
translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[5] F. Denis. PAC learning from positive statistical queries. Algorithmic Learning Theory, pages 112–126,
1998.
[6] F. Geli and L. Bing. Social media text classification under negative covariate shift. EMNLP,
2015.
[7] W.H. Greene. Sample selection bias as a specification error: A comment. Econometrica:
Journal of the Econometric Society, pages 795–798, 1981.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR,
2016.
[9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504–507, 2006.
[10] G.E. Hinton and R.S. Zemel. Autoencoders, minimum description length and helmholtz free
energy. NIPS, 1994.
[11] N. Japkowicz, C. Myers, and M. Gluck. A novelty detection approach to classification. IJCAI,
1995.
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[13] D. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2013.
[14] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
[15] A. Krizhevsky, I. Sutskever, and G. E Hinton. Imagenet classification with deep convolutional
neural networks. NIPS, 2012.
[16] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F.J. Huang. A tutorial on energy-based
learning. Technical report, MIT Press, 2006.
[17] X. Li and L. Bing. Learning from positive and unlabeled examples with different data distributions. ECML, 2005.
[18] B. Liu, Y. Dai, X. Li, W.-S. Lee, and P. Yu. Building text classifiers using positive and unlabeled
examples. ICDM, 2003.
[19] C. Liu, R. Lowe, I.V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How not to evaluate your
dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response
generation. EMNLP, 2016.
[20] J. McAuley and J. Leskovec. Hidden factors and hidden topics: understanding rating dimensions
with review text. RecSys, 2013.
[21] K. Papineni, S. Roukos, T. Ward, and W. Zhu. Bleu: a method for automatic evaluation of
machine translation. ACL, 2002.
[22] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation.
EMNLP, 2014.
[23] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating
errors. Cognitive Modeling, 5(3):1, 1988.
[24] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556, 2014.
[26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and
A. Rabinovich. Going deeper with convolutions. CVPR, 2015.
[27] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust
features with denoising autoencoders. ACM, 2008.
[28] X. Li and B. Liu. Learning to classify text using positive and unlabeled data. IJCAI, 2003.
[29] H. Yu, J. Han, and K. Chang. Pebl: Positive example based learning for web page classification
using svm. KDD, 2002.
[30] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial networks. ICLR,
2017.
Streaming Robust Submodular Maximization:
A Partitioned Thresholding Approach
Slobodan Mitrović
EPFL
Ilija Bogunović
EPFL
Ashkan Norouzi-Fard
EPFL
Jakub Tarnawski
EPFL
Volkan Cevher
EPFL
Abstract
We study the classical problem of maximizing a monotone submodular function
subject to a cardinality constraint k, with two additional twists: (i) elements arrive
in a streaming fashion, and (ii) m items from the algorithm's memory are removed
after the stream is finished. We develop a robust submodular algorithm STAR-T.
It is based on a novel partitioning structure and an exponentially decreasing
thresholding rule. STAR-T makes one pass over the data and retains a short but robust
summary. We show that after the removal of any m elements from the obtained
summary, a simple greedy algorithm STAR-T-GREEDY that runs on the remaining
elements achieves a constant-factor approximation guarantee. In two different
data summarization tasks, we demonstrate that it matches or outperforms existing
greedy and streaming methods, even if they are allowed the benefit of knowing the
removed subset in advance.
1 Introduction
A central challenge in many large-scale machine learning tasks is data summarization: the extraction
of a small representative subset out of a large dataset. Applications include image and document
summarization [1, 2], influence maximization [3], facility location [4], exemplar-based clustering [5],
recommender systems [6], and many more. Data summarization can often be formulated as the
problem of maximizing a submodular set function subject to a cardinality constraint.
On small datasets, a popular algorithm is the simple greedy method [7], which produces solutions
provably close to optimal. Unfortunately, it requires repeated access to all elements, which makes it
infeasible for large-scale scenarios, where the entire dataset does not fit in the main memory. In this
setting, streaming algorithms prove to be useful, as they make only a small number of passes over the
data and use sublinear space.
In many settings, the extracted representative set is also required to be robust. That is, the objective
value should degrade as little as possible when some elements of the set are removed. Such removals
may arise for any number of reasons, such as failures of nodes in a network, or user preferences
which the model failed to account for; they could even be adversarial in nature.
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
e-mail: [email protected]
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
A robustness requirement is especially challenging for large datasets, where it is prohibitively
expensive to reoptimize over the entire data collection in order to find replacements for the removed
elements. In some applications, where data is produced so rapidly that most of it is not being stored,
such a search for replacements may not be possible at all.
These requirements lead to the following two-stage setting. In the first stage, we wish to solve the
robust streaming submodular maximization problem, namely finding a small representative subset of
elements that is robust against any possible removal of up to m elements. In the second, query stage,
after an arbitrary removal of m elements from the summary obtained in the first stage, the goal is to
return a representative subset, of size at most k, using only the precomputed summary rather than the
entire dataset.
For example, (i) in dominating set problem (also studied under influence maximization) we want
to efficiently (in a single pass) compute a compressed but robust set of influential users in a social
network (whom we will present with free copies of a new product), (ii) in personalized movie
recommendation we want to efficiently precompute a robust set of user-preferred movies. Once we
discard those users who will not spread the word about our product, we should find a new set of
influential users in the precomputed robust summary. Similarly, if some movies turn out not to be
interesting for the user, we should still be able to provide good recommendations by only looking
into our robust movie summary.
Contributions. In this paper, we propose a two-stage procedure for robust submodular maximization.
For the first stage, we design a streaming algorithm which makes one pass over the data
and finds a summary that is robust against removal of up to m elements, while containing at most
O((m log k + k) log² k) elements.
In the second (query) stage, given any set of size m that has been removed from the obtained summary,
we use a simple greedy algorithm that runs on the remaining elements and produces a solution of
size at most k (without needing to access the entire dataset). We prove that this solution satisfies a
constant-factor approximation guarantee.
Achieving this result requires novelty in the algorithm design as well as the analysis. Our streaming
algorithm uses a structure where the constructed summary is arranged into partitions consisting of
buckets whose sizes increase exponentially with the partition index. Moreover, buckets in different
partitions are associated with greedy thresholds, which decrease exponentially with the partition index.
Our analysis exploits and combines the properties of the described robust structure and decreasing
greedy thresholding rule.
In addition to algorithmic and theoretical contributions, we also demonstrate in several practical
scenarios that our procedure matches (and in some cases outperforms) the SIEVE-STREAMING
algorithm [8] (see Section 5), even though we allow the latter to know in advance which elements
will be removed from the dataset.
2 Problem Statement
We consider a potentially large universe of elements V of size n equipped with a normalized monotone
submodular set function f : 2^V → ℝ≥0 defined on V. We say that f is monotone if for any two sets
X ⊆ Y ⊆ V we have f(X) ≤ f(Y). The set function f is said to be submodular if for any two sets
X ⊆ Y ⊆ V and any element e ∈ V \ Y it holds that

f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y).

We use f(Y | X) to denote the marginal gain in the function value due to adding the elements of set
Y to set X, i.e., f(Y | X) := f(X ∪ Y) − f(X). We say that f is normalized if f(∅) = 0.
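As a concrete illustration (ours, not from the paper), a coverage function f(X) = |∪_{x∈X} A_x| is normalized, monotone, and submodular; a minimal sketch that checks these properties exhaustively on a toy instance:

```python
from itertools import combinations

# Toy coverage function: each element covers a set of items;
# f(X) = size of the union of covered items.
covers = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
}

def f(X):
    # f(emptyset) = 0, so f is normalized.
    return len(set().union(*(covers[x] for x in X))) if X else 0

def marginal(e, X):
    # f(e | X) = f(X ∪ {e}) − f(X)
    return f(set(X) | {e}) - f(X)

assert f(set()) == 0  # normalized

# Check monotonicity and submodularity over all nested pairs X ⊆ Y.
ground = set(covers)
subsets = [set(s) for r in range(len(ground) + 1)
           for s in combinations(sorted(ground), r)]
for X in subsets:
    for Y in subsets:
        if X <= Y:
            assert f(X) <= f(Y)  # monotone
            for e in ground - Y:
                # diminishing returns: f(e | X) ≥ f(e | Y)
                assert marginal(e, X) >= marginal(e, Y)
print("coverage function is monotone and submodular on this instance")
```

Coverage functions of this kind are exactly the objectives used in the dominating-set experiments of Section 5.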
The problem of maximizing a monotone submodular function subject to a cardinality constraint, i.e.,

max_{Z ⊆ V, |Z| ≤ k} f(Z),    (1)

has been studied extensively. It is well-known that a simple greedy algorithm (henceforth referred to
as GREEDY) [7], which starts from an empty set and then iteratively adds the element with highest
marginal gain, provides a (1 − e^{−1})-approximation. However, it requires repeated access to all
elements of the dataset, which precludes it from use in large-scale machine learning applications.
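For reference, the GREEDY rule can be sketched as follows (our sketch, with a toy coverage oracle; not the paper's code):

```python
def greedy(f, ground, k):
    """Iteratively add the element with the highest marginal gain.

    For monotone submodular f this achieves a (1 - 1/e)-approximation
    (Nemhauser et al. [7]); note it needs repeated access to `ground`.
    """
    Z = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground - Z:
            gain = f(Z | {e}) - f(Z)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element improves the objective
            break
        Z.add(best)
    return Z

# Toy coverage objective as an example oracle.
covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 2}}
f = lambda X: len(set().union(*(covers[x] for x in X))) if X else 0
Z = greedy(f, set(covers), 2)  # picks {"a", "c"}, covering all 6 items
```

Each of the k rounds scans the full ground set, which is exactly the repeated access that streaming algorithms avoid.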
We say that a set S is robust for a parameter m if, for any set E ⊆ V such that |E| ≤ m, there is a
subset Z ⊆ S \ E of size at most k such that

f(Z) ≥ c · f(OPT(k, V \ E)),

where c > 0 is an approximation ratio. We use OPT(k, V \ E) to denote the optimal subset of size
k of V \ E (i.e., after the removal of the elements in E):

OPT(k, V \ E) ∈ argmax_{Z ⊆ V \ E, |Z| ≤ k} f(Z).
In this work, we are interested in solving a robust version of Problem (1) in the setting that consists
of the following two stages: (i) streaming and (ii) query stage.
In the streaming stage, elements from the ground set V arrive in a streaming fashion in an arbitrary
order. Our goal is to design a one-pass streaming algorithm that has oracle access to f and retains a
small set S of elements in memory. In addition, we want S to be a robust summary, i.e., S should both
contain elements that maximize the objective value, and be robust against the removal of prespecified
number of elements m. In the query stage, after any set E of size at most m is removed from V , the
goal is to return a set Z ? S \ E of size at most k such that f (Z) is maximized.
Related work. A robust, non-streaming version of Problem (1) was first introduced in [9]. In that
setting, the algorithm must output a set Z of size k which maximizes the smallest objective value
guaranteed to be obtained after a set of size m is removed, that is,

max_{Z ⊆ V, |Z| ≤ k} min_{E ⊆ Z, |E| ≤ m} f(Z \ E).
The work [10] provides the first constant (0.387) factor approximation result to this problem, valid
for m = o(√k). Their solution consists of buckets of size O(m² log k) that are constructed greedily,
one after another. Recently, in [11], a centralized algorithm PRO has been proposed that achieves the
same approximation result and allows for a greater robustness m = o(k). PRO constructs a set that is
arranged into partitions consisting of buckets whose sizes increase exponentially with the partition
index. In this work, we use a similar structure for the robust set but, instead of filling the buckets
greedily one after another, we place an element in the first bucket for which the gain of adding the
element is above the corresponding threshold. Moreover, we introduce a novel analysis that allows us
to be robust to any number of removals m as long as we are allowed to use O(m log² k) memory.
Recently, submodular streaming algorithms (e.g. [5], [12] and [13]) have become a prominent
option for scaling submodular optimization to large-scale machine learning applications. A popular
submodular streaming algorithm SIEVE-STREAMING [8] solves Problem (1) by performing one pass
over the data, and achieves a (0.5 − ε)-approximation while storing at most O((k log k)/ε) elements.
Our algorithm extends the algorithmic ideas of SIEVE-STREAMING, such as greedy thresholding, to
the robust setting. In particular, we introduce a new exponentially decreasing thresholding scheme
that, together with an innovative analysis, allows us to obtain a constant-factor approximation for the
robust streaming problem.
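The greedy-thresholding idea can be sketched in a few lines (our simplification for illustration, not the exact SIEVE-STREAMING of [8], which additionally runs this rule in parallel for a geometric grid of guesses of the optimum):

```python
def threshold_pass(stream, f, k, tau):
    """One pass: keep an element iff its marginal gain clears tau
    and the summary still has room.

    SIEVE-STREAMING [8] runs this rule for many thresholds in
    parallel; a single threshold is shown here for clarity.
    """
    S = set()
    for e in stream:
        if len(S) < k and f(S | {e}) - f(S) >= tau:
            S.add(e)
    return S

covers = {"a": {1, 2, 3}, "b": {3}, "c": {4, 5, 6}, "d": {1}}
f = lambda X: len(set().union(*(covers[x] for x in X))) if X else 0
S = threshold_pass(["b", "d", "a", "c"], f, k=2, tau=2.0)
# "b" and "d" (marginal gain 1) are rejected; "a" and "c" are kept.
```

Each element is examined exactly once, so the pass uses O(k) memory per threshold regardless of the stream length.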
Recently, robust versions of submodular maximization have been considered in the problems of
influence maximization (e.g., [3], [14]) and budget allocation ([15]). Increased interest in interactive
machine learning methods has also led to the development of interactive and adaptive submodular
optimization (see e.g. [16], [17]). Our procedure also contains the interactive component, as we can
compute the robust summary only once and then provide different sub-summaries that correspond to
multiple different removals (see Section 5.2).
Independently and concurrently with our work, [18] gave a streaming algorithm for robust submodular
maximization under the cardinality constraint. Their approach provides a (1/2 − ε)-approximation
guarantee. However, their algorithm uses O(mk log k / ε) memory. While the memory requirement
of their method increases linearly with k, in the case of our algorithm this dependence is logarithmic.
Figure 1: Illustration of the set S returned by STAR-T. It consists of ⌈log k⌉ + 1 partitions such that
each partition i contains w⌈k/2^i⌉ buckets of size 2^i (up to rounding). Moreover, each partition i has
its corresponding threshold τ/2^i, so the thresholds decrease geometrically from τ down to τ/k.
3 A Robust Two-stage Procedure
Our approach consists of the streaming Algorithm 1, which we call Streaming Robust submodular
algorithm with Partitioned Thresholding (STAR-T). This algorithm is used in the streaming stage,
while Algorithm 2, which we call STAR-T-GREEDY, is used in the query stage.
As the input, STAR-T requires a non-negative monotone submodular function f, cardinality
constraint k, robustness parameter m and thresholding parameter τ. The parameter τ is an
α-approximation to f(OPT(k, V \ E)), for some α ∈ (0, 1] to be specified later. Hence, it depends on
f(OPT(k, V \ E)), which is not known a priori. For the sake of clarity, we present the algorithm
as if f(OPT(k, V \ E)) were known, and in Section 4.1 we show how f(OPT(k, V \ E)) can be
approximated. The algorithm makes one pass over the data and outputs a set of elements S that is
later used in the query stage in STAR-T-GREEDY.
The set S (see Figure 1 for an illustration) is divided into ⌈log k⌉ + 1 partitions, where every partition
i ∈ {0, . . . , ⌈log k⌉} consists of w⌈k/2^i⌉ buckets B_{i,j}, j ∈ {1, . . . , w⌈k/2^i⌉}. Here, w ∈ ℕ+ is a
memory parameter that depends on m; we use w ≥ ⌈4⌈log k⌉m/k⌉ in our asymptotic theory, while
our numerical results show that w = 1 works well in practice. Every bucket B_{i,j} stores at most
min{k, 2^i} elements. If |B_{i,j}| = min{2^i, k}, then we say that B_{i,j} is full.

Every partition has a corresponding threshold that is exponentially decreasing with the partition index
i as τ/2^i. For example, the buckets in the first partition will only store elements that have marginal
value at least τ. Every element e ∈ V arriving on the stream is assigned to the first non-full bucket
B_{i,j} for which the marginal value f(e | B_{i,j}) is at least τ/2^i. If there is no such bucket, the element
will not be stored. Hence, the buckets are disjoint sets that in the end (after one pass over the data) can
have a smaller number of elements than specified by their corresponding cardinality constraints, and
some of them might even be empty. The set S returned by STAR-T is the union of all the buckets.
In the second stage, STAR-T-GREEDY receives as input the set S constructed in the streaming stage,
a set E ⊆ S that we think of as removed elements, and the cardinality constraint k. The algorithm
then returns a set Z, of size at most k, that is obtained by running the simple greedy algorithm
GREEDY on the set S \ E. Note that STAR-T-GREEDY can be invoked for different sets E.
4 Theoretical Bounds

In this section we discuss our main theoretical results. We initially assume that the value
f(OPT(k, V \ E)) is known; later, in Section 4.1, we remove this assumption. The more detailed
versions of our proofs are given in the supplementary material. We begin by stating the main result.
Algorithm 1 STreAming Robust - Thresholding submodular algorithm (STAR-T)
Input: Set V, k, τ, w ∈ ℕ+
1: B_{i,j} ← ∅ for all 0 ≤ i ≤ ⌈log k⌉ and 1 ≤ j ≤ w⌈k/2^i⌉
2: for each element e in the stream do
3:   for i ← 0 to ⌈log k⌉ do                        ▷ loop over partitions
4:     for j ← 1 to w⌈k/2^i⌉ do                     ▷ loop over buckets
5:       if |B_{i,j}| < min{2^i, k} and f(e | B_{i,j}) ≥ τ / min{2^i, k} then
6:         B_{i,j} ← B_{i,j} ∪ {e}
7:         break: proceed to the next element in the stream
8: S ← ∪_{i,j} B_{i,j}
9: return S

Algorithm 2 STAR-T-GREEDY
Input: Set S, query set E and k
1: Z ← GREEDY(k, S \ E)
2: return Z
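The two algorithms above can be transcribed directly into Python (our sketch for illustration; the oracle f is any monotone submodular set function, and the toy coverage objective at the end is ours):

```python
import math

def star_t(stream, f, k, tau, w=1):
    """One pass of STAR-T: place each element into the first non-full
    bucket whose threshold tau / min(2^i, k) its marginal gain clears."""
    L = math.ceil(math.log2(k)) if k > 1 else 0
    # Partition i has w * ceil(k / 2^i) buckets of capacity min(2^i, k).
    buckets = {(i, j): set()
               for i in range(L + 1)
               for j in range(w * math.ceil(k / 2 ** i))}
    for e in stream:
        for i in range(L + 1):                 # loop over partitions
            cap = min(2 ** i, k)
            placed = False
            for j in range(w * math.ceil(k / 2 ** i)):  # loop over buckets
                B = buckets[(i, j)]
                if len(B) < cap and f(B | {e}) - f(B) >= tau / cap:
                    B.add(e)
                    placed = True
                    break
            if placed:
                break                          # next element in the stream
    return set().union(*buckets.values())

def star_t_greedy(f, S, E, k):
    """Query stage: plain greedy on the retained summary minus E."""
    Z = set()
    for _ in range(k):
        cands = [(f(Z | {e}) - f(Z), e) for e in (S - E) - Z]
        gain, best = max(cands, default=(0, None))
        if not cands or gain <= 0:
            break
        Z.add(best)
    return Z

# Toy run on a coverage objective.
covers = {"a": {1, 2, 3}, "b": {3}, "c": {4, 5, 6}, "d": {1}}
f = lambda X: len(set().union(*(covers[x] for x in X))) if X else 0
S = star_t(["a", "b", "c", "d"], f, k=2, tau=2)
Z = star_t_greedy(f, S, E={"a"}, k=2)
```

Note how the summary S keeps low-gain elements such as "b" and "d" in the deeper partitions, which is exactly what makes the query stage survive the removal of a high-gain element like "a".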
Theorem 4.1 Let f be a normalized monotone submodular function defined over the ground
set V. Given a cardinality constraint k and parameter m, for a setting of parameters w ≥ ⌈4⌈log k⌉m/k⌉
and

τ = f(OPT(k, V \ E)) / (2 + (1 − e^{−1}) / ((1 − 1/⌈log k⌉)(1 − e^{−1/3}))),

STAR-T performs a single pass over the data set and constructs a set S of size at most O((k +
m log k) log k) elements.

For such a set S and any set E ⊆ V such that |E| ≤ m, STAR-T-GREEDY yields a set Z ⊆ S \ E
of size at most k with

f(Z) ≥ c · f(OPT(k, V \ E)),

for c = 0.149 (1 − 1/⌈log k⌉). Therefore, as k → ∞, the value of c approaches 0.149.
Proof sketch. We first consider the case when there is a partition i* in S such that at least half
of its buckets are full. We show that there is at least one full bucket B_{i*,j} such that f(B_{i*,j} \ E)
is only a constant factor smaller than f(OPT(k, V \ E)), as long as the threshold τ is set close to
f(OPT(k, V \ E)). We make this statement precise in the following lemma:

Lemma 4.2 If there exists a partition in S such that at least half of its buckets are full, then for the
set Z produced by STAR-T-GREEDY we have

f(Z) ≥ (1 − e^{−1}) (1 − 4m/(wk)) τ.    (2)
To prove this lemma, we first observe that from the properties of GREEDY it follows that

f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e^{−1}) f(B_{i*,j} \ E).

Now it remains to show that f(B_{i*,j} \ E) is close to τ. We observe that for any full bucket B_{i*,j}, we
have |B_{i*,j}| = min{2^{i*}, k}, so its objective value f(B_{i*,j}) is at least τ (every element added to this
bucket increases its objective value by at least τ / min{2^{i*}, k}). On average, |B_{i*,j} ∩ E| is relatively
small, and hence we can show that there exists some full bucket B_{i*,j} such that f(B_{i*,j} \ E) is close
to f(B_{i*,j}).

Next, we consider the other case, i.e., when for every partition, more than half of its buckets are not
full after the execution of STAR-T. For every partition i, we let B_i denote a bucket that is not fully
populated and for which |B_i ∩ E| is minimized over all the buckets of that partition. Then, we look
at such a bucket in the last partition: B_{⌈log k⌉}.

We provide two lemmas that depend on f(B_{⌈log k⌉}). If τ is set to be small compared to f(OPT(k, V \ E)):
• Lemma 4.3 shows that if f(B_{⌈log k⌉}) is close to f(OPT(k, V \ E)), then our solution is
within a constant factor of f(OPT(k, V \ E));
• Lemma 4.4 shows that if f(B_{⌈log k⌉}) is small compared to f(OPT(k, V \ E)), then our
solution is again within a constant factor of f(OPT(k, V \ E)).

Lemma 4.3 If there does not exist a partition of S such that at least half of its buckets are full, then
for the set Z produced by STAR-T-GREEDY we have

f(Z) ≥ (1 − e^{−1/3}) (f(B_{⌈log k⌉}) − (4m/(wk)) τ),

where B_{⌈log k⌉} is a not-fully-populated bucket in the last partition that minimizes |B_{⌈log k⌉} ∩ E|, and
|E| ≤ m.

Using standard properties of submodular functions and the GREEDY algorithm we can show that

f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e^{−1/3}) (f(B_{⌈log k⌉}) − (4m/(wk)) τ).

The complete proof of this result can be found in Lemma B.2, in the supplementary material.

Lemma 4.4 If there does not exist a partition of S such that at least half of its buckets are full, then
for the set Z produced by STAR-T-GREEDY,

f(Z) ≥ (1 − e^{−1}) (f(OPT(k, V \ E)) − f(B_{⌈log k⌉}) − τ),

where B_{⌈log k⌉} is any not-fully-populated bucket in the last partition.
To prove this lemma, we look at two sets X and Y, where Y contains all the elements from
OPT(k, V \ E) that are placed in the buckets that precede bucket B_{⌈log k⌉} in S, and set X :=
OPT(k, V \ E) \ Y. By monotonicity and submodularity of f, we bound f(Y) by:

f(Y) ≥ f(OPT(k, V \ E)) − f(X) ≥ f(OPT(k, V \ E)) − f(B_{⌈log k⌉}) − Σ_{e ∈ X} f(e | B_{⌈log k⌉}).

To bound the sum on the right hand side we use that for every e ∈ X we have f(e | B_{⌈log k⌉}) < τ/k,
which holds due to the fact that B_{⌈log k⌉} is a bucket in the last partition and is not fully populated.
We conclude the proof by showing that f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e^{−1}) f(Y).

Equipped with the above results, we proceed to prove our main result.
Proof of Theorem 4.1. First, we prove the bound on the size of S:

|S| = Σ_{i=0}^{⌈log k⌉} w⌈k/2^i⌉ min{2^i, k} ≤ Σ_{i=0}^{⌈log k⌉} w(k/2^i + 1) 2^i ≤ (log k + 5) wk.    (3)

By setting w ≥ ⌈4⌈log k⌉m/k⌉ we obtain |S| = O((k + m log k) log k).
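A quick numerical sanity check of inequality (3) (ours, for illustration, reading log as log base 2):

```python
import math

def summary_size_bound(k, w):
    """Left- and right-hand sides of inequality (3)."""
    L = math.ceil(math.log2(k))
    # exact number of bucket slots across all partitions
    lhs = sum(w * math.ceil(k / 2 ** i) * min(2 ** i, k)
              for i in range(L + 1))
    rhs = (math.log2(k) + 5) * w * k
    return lhs, rhs

# The bound holds for a range of k and w values.
for k in [2, 8, 100, 1000]:
    for w in [1, 3]:
        lhs, rhs = summary_size_bound(k, w)
        assert lhs <= rhs, (k, w, lhs, rhs)
```

For instance, k = 100 and w = 1 give 872 bucket slots against a bound of about 1164.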
Next, we show the approximation guarantee. We first define γ := 4m/(wk), β₁ := 1 − e^{−1/3}, and
β₂ := 1 − e^{−1}. Lemma 4.3 and 4.4 provide two bounds on f(Z), one increasing and one
decreasing in f(B_{⌈log k⌉}). By balancing out the two bounds, we derive

f(Z) ≥ (β₁β₂ / (β₁ + β₂)) (f(OPT(k, V \ E)) − (1 + γ)τ),    (4)

with equality for f(B_{⌈log k⌉}) = (β₂ f(OPT(k, V \ E)) − (β₂γ − β₁)τ) / (β₂ + β₁).

Next, for τ ≥ 0, we can observe that Eq. (4) is decreasing in τ, while the bound on f(Z) given by
Lemma 4.2 is increasing in τ for γ < 1. Hence, by balancing out the two inequalities, we obtain our
final bound

f(Z) ≥ f(OPT(k, V \ E)) / (2/(β₂(1 − γ)) + 1/β₁).    (5)

For w ≥ ⌈4⌈log k⌉m/k⌉ we have γ ≤ 1/⌈log k⌉, and hence, by substituting β₁ and β₂ in Eq. (5), we
prove our main result:

f(Z) ≥ [(1 − e^{−1/3})(1 − e^{−1})(1 − 1/⌈log k⌉)] / [2(1 − e^{−1/3}) + (1 − e^{−1})] · f(OPT(k, V \ E))
     ≥ 0.149 (1 − 1/⌈log k⌉) f(OPT(k, V \ E)). ∎
4.1 Algorithm without access to f(OPT(k, V \ E))
Algorithm STAR-T requires in its input a parameter τ which is a function of an unknown value
f(OPT(k, V \ E)). To deal with this shortcoming, we show how to extend the idea of [8] of
maintaining multiple parallel instances of our algorithm in order to approximate f(OPT(k, V \ E)).
For a given constant ε > 0, this approach increases the space by a factor of log_{1+ε} k and provides a
(1 + ε)-approximation compared to the value obtained in Theorem 4.1. More precisely, we prove the
following theorem.

Theorem 4.5 For any given constant ε > 0 there exists a parallel variant of STAR-T that makes one
pass over the stream and outputs a collection of sets S of total size O((k + m log k) log k log_{1+ε} k)
with the following property: there exists a set S ∈ S such that applying STAR-T-GREEDY on S
yields a set Z ⊆ S \ E of size at most k with

f(Z) ≥ (0.149 / (1 + ε)) (1 − 1/⌈log k⌉) f(OPT(k, V \ E)).

The proof of this theorem, along with a description of the corresponding algorithm, is provided in
Appendix E.
5 Experiments

In this section, we numerically validate the claims outlined in the previous section. Namely, we
test the robustness and compare the performance of our algorithm against the SIEVE-STREAMING
algorithm that knows in advance which elements will be removed. We demonstrate improved or
matching performance in two different data summarization applications: (i) the dominating set
problem, and (ii) personalized movie recommendation. We illustrate how a single robust summary
can be used to regenerate recommendations corresponding to multiple different removals.
5.1 Dominating Set
In the dominating set problem, given a graph G = (V, M), where V represents the set of nodes and
M stands for edges, the objective function is given by f(Z) = |N(Z) ∪ Z|, where N(Z) denotes
the neighborhood of Z (all nodes adjacent to any node of Z). This objective function is monotone
and submodular.
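This objective can be written down directly (a sketch on a toy graph; the adjacency structure here is ours, for illustration):

```python
# f(Z) = |N(Z) ∪ Z|: the number of nodes dominated by Z.
adj = {  # undirected toy graph given as adjacency lists
    0: {1, 2},
    1: {0},
    2: {0, 3},
    3: {2},
    4: set(),  # isolated node: only dominated by itself
}

def dominated(Z):
    covered = set(Z)
    for z in Z:
        covered |= adj[z]
    return len(covered)

# Node 0 dominates {0, 1, 2}; adding node 3 extends this to {0, 1, 2, 3}.
assert dominated({0}) == 3
assert dominated({0, 3}) == 4
assert dominated(set(adj)) == 5  # the full node set covers everything
```

As a coverage function, f is monotone and submodular, so both STAR-T and the baselines apply without modification.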
We consider two datasets: (i) ego-Twitter [19], consisting of 973 social circles from Twitter, which
form a directed graph with 81306 nodes and 1768149 edges; (ii) Amazon product co-purchasing
network [20]: a directed graph with 317914 nodes and 1745870 edges.
Given the dominating set objective function, we run STAR-T to obtain the robust summary S. Then
we compare the performance of STAR-T-GREEDY, which runs on S, against the performance of
SIEVE-STREAMING, which we allow to know in advance which elements will be removed. We
also compare against a method that chooses the same number of elements as STAR-T, but does
so uniformly at random from the set of all elements that will not be removed (V \ E); we refer to
it as RANDOM. Finally, we also demonstrate the performance of STAR-T-SIEVE, a variant of our
algorithm that uses the same robust summary S, but instead of running GREEDY in the second stage,
it runs SIEVE-STREAMING on S \ E.
Figure 2: Numerical comparisons of the algorithms STAR-T-GREEDY, STAR-T-SIEVE and
SIEVE-STREAMING (with RANDOM or GREEDY as the additional baseline). Each panel plots the
(average) objective value against the cardinality k. Panels: (a) Amazon communities, |E| = k;
(b) Amazon communities, |E| = 2k; (c) ego-Twitter, |E| = k; (d) ego-Twitter, |E| = 2k;
(e) Movies, already-seen; (f) Movies, by genre.
Figures 2(a,c) show the objective value after the random removal of k elements from the set S, for
different values of k. Note that E is sampled as a subset of the summary of our algorithm, which hurts
the performance of our algorithm more than the baselines. The reported numbers are averaged over
100 iterations. STAR-T-GREEDY, STAR-T-SIEVE and SIEVE-STREAMING perform comparably
(STAR-T-GREEDY slightly outperforms the other two), while RANDOM is significantly worse.

In Figures 2(b,d) we plot the objective value for different values of k after the removal of 2k elements
from the set S, chosen greedily (i.e., by iteratively removing the element that reduces the objective
value the most). Again, STAR-T-GREEDY, STAR-T-SIEVE and SIEVE-STREAMING perform
comparably, but this time SIEVE-STREAMING slightly outperforms the other two for some values
of k. We observe that even when we remove more than k elements from S, the performance of our
algorithm is still comparable to the performance of SIEVE-STREAMING (which knows in advance
which elements will be removed). We provide additional results in the supplementary material.
5.2 Interactive Personalized Movie Recommendation
The next application we consider is personalized movie recommendation. We use the MovieLens
1M database [21], which contains 1000209 ratings for 3900 movies by 6040 users. Based on these
ratings, we obtain feature vectors for each movie and each user by using standard low-rank matrix
completion techniques [22]; we choose the number of features to be 30.
For a user u, we use the following monotone submodular function to recommend a set of movies Z:

f_u(Z) = (1 − α) · Σ_{z ∈ Z} ⟨v_u, v_z⟩ + α · Σ_{m ∈ M} max_{z ∈ Z} ⟨v_m, v_z⟩.

The first term aggregates the predicted scores of the chosen movies z ∈ Z for the user u (here v_u
and v_z are non-normalized feature vectors of user u and movie z, respectively). The second term
corresponds to a facility-location objective that measures how well the set Z covers the set of all
movies M [4]. Finally, α is a user-dependent parameter that specifies the importance of global movie
coverage versus high scores of individual movies.
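A sketch of this objective in plain Python (the random non-negative feature vectors are stand-ins for the matrix-completion features, and α here denotes the trade-off weight from the formula above):

```python
import random

random.seed(0)
n_feats = 4
# Stand-in feature vectors; in the paper these come from low-rank
# matrix completion of the ratings matrix.
movies = {m: [random.random() for _ in range(n_feats)] for m in range(6)}
v_u = [random.random() for _ in range(n_feats)]
alpha = 0.9  # weight of global coverage vs. individual scores

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def f_u(Z):
    """(1 - alpha) * sum_{z in Z} <v_u, v_z>
       + alpha * sum_{m in M} max_{z in Z} <v_m, v_z>."""
    if not Z:
        return 0.0
    score = sum(dot(v_u, movies[z]) for z in Z)
    coverage = sum(max(dot(movies[m], movies[z]) for z in Z)
                   for m in movies)
    return (1 - alpha) * score + alpha * coverage

# With non-negative features, f_u is monotone and submodular:
assert f_u({0, 1}) >= f_u({0})                                  # monotone
assert f_u({0, 2}) - f_u({0}) >= f_u({0, 1, 2}) - f_u({0, 1})   # diminishing returns
```

Since the features are non-negative, the modular score term and the facility-location coverage term are both monotone submodular, so f_u is a valid input for STAR-T.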
Here, the robust setting arises naturally since we do not have complete information about the user:
when shown a collection of top movies, it will likely turn out that they have watched (but not rated)
many of them, rendering these recommendations moot. In such an interactive setting, the user may
also require (or exclude) movies of a specific genre, or similar to some favorite movie.
We compare the performance of our algorithms STAR-T-GREEDY and STAR-T-SIEVE in such
scenarios against two baselines: GREEDY and SIEVE-STREAMING (both being run on the set V \ E,
i.e., knowing the removed elements in advance). Note that in this case we are able to afford running
GREEDY, which may be infeasible when working with larger datasets. Below we discuss two concrete
practical scenarios featured in our experiments.

Movies by genre. After we have built our summary S, the user decides to watch a drama today;
we retrieve only movies of this genre from S. This corresponds to removing 59% of the universe
V. In Figure 2(f) we report the quality of our output compared to the baselines (for user ID 445
and α = 0.95) for different values of k. The performance of STAR-T-GREEDY is within several
percent of the performance of GREEDY (which we can consider as a tractable optimum), and the two
sieve-based methods STAR-T-SIEVE and SIEVE-STREAMING display similar objective values.
Already-seen movies. We randomly sample a set E of movies already watched by the user (500
out of all 3900 movies). To obtain a realistic subset, each movie is sampled proportionally to its
popularity (number of ratings). Figure 2(e) shows the performance of our algorithm faced with the
removal of E (user ID = 445, α = 0.9) for a range of settings of k. Again, our algorithm is able to
almost match the objective values of GREEDY (which is aware of E in advance).

Recall that we are able to use the same precomputed summary S for different removed sets E. This
summary was built for parameter w = 1, which theoretically allows for up to k removals. However,
despite having |E| ≫ k in the above scenarios, our performance remains robust; this indicates that
our method is more resilient in practice than what the proved bound alone would guarantee.
6 Conclusion
We have presented a new robust submodular streaming algorithm STAR-T based on a novel
partitioning structure and an exponentially decreasing thresholding rule. It makes one pass over the data
and retains a set of size O((k + m log k) log² k). We have further shown that after the removal of
any m elements, a simple greedy algorithm that runs on the obtained set achieves a constant-factor
approximation guarantee for robust submodular function maximization. In addition, we have
presented two numerical studies where our method compares favorably against the SIEVE-STREAMING
algorithm that knows in advance which elements will be removed.
Acknowledgment. IB and VC's work was supported in part by the European Research Council
(ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement
number 725594), in part by the Swiss National Science Foundation (SNF), project 407540_167319/1,
in part by the NCCR MARVEL, funded by the Swiss National Science Foundation, in part by
Hasler Foundation Switzerland under grant agreement number 16066 and in part by Office of Naval
Research (ONR) under grant agreement number N00014-16-R-BA01. JT's work was supported by
ERC Starting Grant 335288-OptApprox.
References
[1] S. Tschiatschek, R. K. Iyer, H. Wei, and J. A. Bilmes, "Learning mixtures of submodular
functions for image collection summarization," in Advances in Neural Information Processing
Systems, 2014, pp. 1413-1421.
[2] H. Lin and J. Bilmes, "A class of submodular functions for document summarization," in Assoc.
for Comp. Ling.: Human Language Technologies-Volume 1, 2011.
[3] D. Kempe, J. Kleinberg, and É. Tardos, "Maximizing the spread of influence through a social
network," in Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), 2003.
[4] E. Lindgren, S. Wu, and A. G. Dimakis, "Leveraging sparsity for efficient submodular data
summarization," in Advances in Neural Information Processing Systems, 2016, pp. 3414-3422.
[5] A. Krause and R. G. Gomes, "Budgeted nonparametric learning from data streams," in ICML,
2010, pp. 391-398.
[6] K. El-Arini and C. Guestrin, "Beyond keyword search: discovering relevant scientific literature,"
in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining. ACM, 2011, pp. 439-447.
[7] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, "An analysis of approximations for
maximizing submodular set functions - I," Mathematical Programming, vol. 14, no. 1, pp. 265-294,
1978.
[8] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause, "Streaming submodular
maximization: Massive data summarization on the fly," in Proceedings of the 20th ACM
SIGKDD. ACM, 2014, pp. 671-680.
[9] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta, "Robust submodular observation
selection," Journal of Machine Learning Research, vol. 9, no. Dec, pp. 2761-2801, 2008.
[10] J. B. Orlin, A. S. Schulz, and R. Udwani, "Robust monotone submodular function maximization,"
in Int. Conf. on Integer Programming and Combinatorial Opt. (IPCO). Springer, 2016.
[11] I. Bogunovic, S. Mitrović, J. Scarlett, and V. Cevher, "Robust submodular maximization: A
non-uniform partitioning approach," in Int. Conf. Mach. Learn. (ICML), 2017.
[12] R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani, "Fast greedy algorithms in MapReduce
and streaming," ACM Transactions on Parallel Computing, vol. 2, no. 3, p. 14, 2015.
[13] A. Norouzi-Fard, A. Bazzi, I. Bogunovic, M. El Halabi, Y.-P. Hsieh, and V. Cevher, "An efficient
streaming algorithm for the submodular cover problem," in Adv. Neur. Inf. Proc. Sys. (NIPS),
2016.
[14] W. Chen, T. Lin, Z. Tan, M. Zhao, and X. Zhou, "Robust influence maximization," in
Proceedings of the ACM SIGKDD, 2016, p. 795.
[15] M. Staib and S. Jegelka, "Robust budget allocation via continuous submodular functions," in
Int. Conf. Mach. Learn. (ICML), 2017.
[16] D. Golovin and A. Krause, "Adaptive submodularity: Theory and applications in active learning
and stochastic optimization," Journal of Artificial Intelligence Research, vol. 42, 2011.
[17] A. Guillory and J. Bilmes, "Interactive submodular set cover," arXiv preprint arXiv:1002.3345,
2010.
[18] B. Mirzasoleiman, A. Karbasi, and A. Krause, "Deletion-robust submodular maximization:
Data summarization with 'the right to be forgotten'," in International Conference on Machine
Learning, 2017, pp. 2449-2458.
[19] J. McAuley and J. Leskovec, "Discovering social circles in ego networks," ACM Trans. Knowl.
Discov. Data, 2014.
[20] J. Yang and J. Leskovec, "Defining and evaluating network communities based on ground-truth,"
Knowledge and Information Systems, vol. 42, no. 1, pp. 181-213, 2015.
[21] F. M. Harper and J. A. Konstan, "The MovieLens datasets: History and context," ACM
Transactions on Interactive Intelligent Systems (TiiS), vol. 5, no. 4, p. 19, 2016.
[22] O. Troyanskaya, M. Cantor, G. Sherlock, P. Brown, T. Hastie, R. Tibshirani, D. Botstein,
and R. B. Altman, "Missing value estimation methods for DNA microarrays," Bioinformatics,
vol. 17, no. 6, pp. 520-525, 2001.
Simple Strategies for Recovering Inner Products from
Coarsely Quantized Random Projections
Ping Li
Baidu Research, and
Rutgers University
[email protected]
Martin Slawski
Department of Statistics
George Mason University
[email protected]
Abstract
Random projections have been increasingly adopted for a diverse set of tasks in
machine learning involving dimensionality reduction. One specific line of research
on this topic has investigated the use of quantization subsequent to projection
with the aim of additional data compression. Motivated by applications in nearest
neighbor search and linear learning, we revisit the problem of recovering inner
products (respectively cosine similarities) in such setting. We show that even under
coarse scalar quantization with 3 to 5 bits per projection, the loss in accuracy tends
to range from "negligible" to "moderate". One implication is that in most scenarios
of practical interest, there is no need for a sophisticated recovery approach like
maximum likelihood estimation as considered in previous work on the subject.
What we propose herein also yields considerable improvements in terms of accuracy
over the Hamming distance-based approach in Li et al. (ICML 2014) which is
comparable in terms of simplicity.
1 Introduction
The method of random projections (RPs) for linear dimensionality reduction has become more
and more popular over the years after the basic theoretical foundation, the celebrated Johnson-Lindenstrauss (JL) Lemma [12, 20, 33], had been laid out. In a nutshell, it states that it is possible
to considerably lower the dimension of a set of data points by means of a linear map in such a way
that squared Euclidean distances and inner products are roughly preserved in the low-dimensional
representation. Conveniently, a linear map of this sort can be realized by a variety of random
matrices [1, 2, 18]. The scope of applications of RPs has expanded dramatically in the course of
time, and includes dimension reduction in linear classification and regression [14, 30], similarity
search [5, 17], compressed sensing [8], clustering [7, 11], randomized numerical linear algebra and
matrix sketching [29], and differential privacy [21], among others.
The idea of achieving further data compression by means of subsequent scalar quantization of the
projected data has been considered for a while. Such setting can be motivated from constraints
concerning data storage and communication, locality-sensitive hashing [13, 27], or the enhancement
of privacy [31]. The extreme case of one-bit quantization can be associated with two seminal works
in computer science, the SDP relaxation of the MAXCUT problem [16] and the simhash [10]. One-bit
compressed sensing is introduced in [6], and along with its numerous extensions, has meanwhile
developed into a subfield within the compressed sensing literature. A series of recent papers discuss
quantized RPs with a focus on similarity estimation and search. The papers [25, 32] discuss quantized
RPs with a focus on image retrieval based on nearest neighbor search. Independent of the specific
application, [25, 32] provide JL-type statements for quantized RPs, and consider the trade-off between
the number of projections and the number of bits per projection under a given budget of bits as it also
appears in the compressed sensing literature [24]. The paper [19] studies approximate JL-type results
for quantized RPs in detail. The approach to quantized RPs taken in the present paper follows [27, 28]
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
in which the problem of recovering distances and inner products is recast within the framework of
classical statistical point estimation theory. The paper [28] discusses maximum likelihood estimation
in this context, with an emphasis on the aforementioned trade-off between the number of RPs and the
bit depth per projection. In the present paper we focus on the much simpler and computationally much
more convenient approach in which the presence of the quantizer is ignored, i.e., quantized data are
treated in the same way as full-precision data. We herein quantify the loss of accuracy of this approach
relative to the full-precision case, which turns out to be insignificant in many scenarios of practical
interest even under coarse quantization with 3 to 5 bits per projection. Moreover, we show that
the approach compares favorably to the Hamming distance-based (or equivalently collision-based)
scheme in [27] which is of similar simplicity. We argue that both approaches have their merits: the
collision-based scheme performs better in preserving local geometry (the distances of nearby points),
whereas the one studied in more detail herein yields better preservation globally.
Notation. For a positive integer $m$, we let $[m] = \{1, \ldots, m\}$. For $l \in [m]$, $v_{(l)}$ denotes the $l$-th component of a vector $v \in \mathbb{R}^m$; if there is no danger of confusion with another index, the brackets in the subscript are omitted. $I(P)$ denotes the indicator function of expression $P$.
Supplement: Proofs and additional experimental results can be found in the supplement.
Basic setup. Let $\mathcal{X} = \{x_1, \ldots, x_n\} \subset \mathbb{R}^d$ be a set of input data with squared Euclidean norms $\nu_i^2 := \|x_i\|_2^2$, $i \in [n]$. We think of $d$ being large. RPs reduce the dimensionality of the input data by means of a linear map $A : \mathbb{R}^d \to \mathbb{R}^k$, $k \ll d$. We assume throughout the paper that the map $A$ is realized by a random matrix with i.i.d. entries from the standard Gaussian distribution, i.e., $A_{lj} \sim N(0,1)$, $l \in [k]$, $j \in [d]$. One standard goal of RPs is to approximately preserve distances in $\mathcal{X}$ while lowering the dimension, i.e., $\|Ax_i - Ax_j\|_2^2 / k \approx \|x_i - x_j\|_2^2$ for all $(i,j)$. This is implied by approximate inner product preservation $\langle x_i, x_j \rangle \approx \langle Ax_i, Ax_j \rangle / k$ for all $(i,j)$.
For the time being, we assume that it is possible to compute and store the squared norms $\{\nu_i^2\}_{i=1}^n$, and to rescale the input data to unit norm, i.e., one first forms $\widetilde{x}_i \leftarrow x_i / \nu_i$, $i \in [n]$, before applying $A$. In this case, it suffices to recover the (cosine) similarities $\rho_{ij} := \frac{\langle x_i, x_j \rangle}{\nu_i \nu_j} = \langle \widetilde{x}_i, \widetilde{x}_j \rangle$, $i, j \in [n]$, of the input data $\mathcal{X}$ from their compressed representation $\mathcal{Z} = \{z_1, \ldots, z_n\}$, $z_i := A \widetilde{x}_i$, $i \in [n]$.
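The setup above can be sketched in a few lines of NumPy (a minimal illustration, not the authors' code; the dimensions and the construction of the correlated pair are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 1_000                     # ambient dimension, number of RPs
x = rng.normal(size=d)
y = 0.8 * x + 0.6 * rng.normal(size=d)   # companion point, cosine roughly 0.8

# Rescale to unit norm so that inner products become cosine similarities.
x_t = x / np.linalg.norm(x)
y_t = y / np.linalg.norm(y)
rho = float(x_t @ y_t)                   # true cosine similarity

A = rng.normal(size=(k, d))              # i.i.d. N(0,1) projection matrix
z, z2 = A @ x_t, A @ y_t                 # compressed representations

rho_hat = float(z @ z2) / k              # inner products are roughly preserved
print(rho, rho_hat)
```

The fluctuation of `rho_hat` around `rho` is on the order of $1/\sqrt{k}$, which is the starting point of the variance calculations in Section 2.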
2 Estimation of cosine similarity based on full-precision RPs
As preparation for later sections, we start by providing background concerning the usual setting without quantization. Let $(Z, Z')_r$ be random variables having a bivariate Gaussian distribution with zero mean, unit variance, and correlation $r \in (-1, 1)$:
$$(Z, Z')_r \sim N_2\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix} \right). \qquad (1)$$
Let further $x, x'$ be a generic pair of points from $\mathcal{X}$, and let $z := A\widetilde{x}$, $z' := A\widetilde{x}'$ be the counterpart in $\mathcal{Z}$. Then the components $\{(z_{(l)}, z'_{(l)})\}_{l=1}^k$ of $(z, z')$ are distributed i.i.d. as in (1) with $r = \rho =: \langle \widetilde{x}, \widetilde{x}' \rangle$. Hence the problem of recovering the cosine similarity of $x$ and $x'$ can be re-cast as estimating the correlation from an i.i.d. sample of $k$ bivariate Gaussian random variables. To simplify our exposition, we henceforth assume that $0 \le \rho < 1$ as this can easily be achieved by flipping the sign of one of $x$ or $x'$. The standard estimator of $\rho$ is what is called the "linear estimator" herein:
$$\widehat{\rho}_{\mathrm{lin}} = \frac{1}{k} \langle z, z' \rangle = \frac{1}{k} \sum_{l=1}^{k} z_{(l)} z'_{(l)}. \qquad (2)$$
As pointed out in [26] this estimator can be considerably improved upon by the maximum likelihood estimator (MLE) given (1):
$$\widehat{\rho}_{\mathrm{MLE}} = \operatorname*{argmax}_{r} \; -\frac{1}{2}\log(1 - r^2) - \frac{1}{2}\,\frac{1}{1 - r^2}\left\{ \frac{1}{k}\|z\|_2^2 + \frac{1}{k}\|z'\|_2^2 - \frac{2r}{k}\langle z, z' \rangle \right\}. \qquad (3)$$
The estimator $\widehat{\rho}_{\mathrm{MLE}}$ is not available in closed form, which is potentially a serious concern since it needs to be evaluated for numerous different pairs of data points. However, this can be addressed by tabulation of the two statistics $\left\{ \left(\|z\|_2^2 + \|z'\|_2^2\right)/k, \; \langle z, z' \rangle / k \right\}$ and the corresponding solutions $\widehat{\rho}_{\mathrm{MLE}}$ over a sufficiently fine grid. At processing time, computation of $\widehat{\rho}_{\mathrm{MLE}}$ can then be reduced to a look-up in a pre-computed table.
One obvious issue of $\widehat{\rho}_{\mathrm{lin}}$ is that it does not respect the range of the underlying parameter. A natural fix is the use of the "normalized linear estimator"
$$\widehat{\rho}_{\mathrm{norm}} = \langle z, z' \rangle / (\|z\|_2 \|z'\|_2). \qquad (4)$$
When comparing different estimators of $\rho$ in terms of statistical accuracy, we evaluate the mean squared error (MSE), possibly asymptotically as the number of RPs $k \to \infty$. Specifically, we consider
$$\mathrm{MSE}_\rho(\widehat{\rho}) = \mathbb{E}_\rho[(\rho - \widehat{\rho})^2] = \mathrm{Bias}_\rho^2(\widehat{\rho}) + \mathrm{Var}_\rho(\widehat{\rho}), \qquad \mathrm{Bias}_\rho(\widehat{\rho}) := \mathbb{E}_\rho[\widehat{\rho}] - \rho, \qquad (5)$$
where $\widehat{\rho}$ is some estimator, and the subscript $\rho$ indicates that expectations are taken with respect to a sample $(z, z')$ following the bivariate normal distribution in (1) with $r = \rho$.
It turns out that $\widehat{\rho}_{\mathrm{norm}}$ and $\widehat{\rho}_{\mathrm{MLE}}$ can have dramatically lower (asymptotic) MSEs than $\widehat{\rho}_{\mathrm{lin}}$ for large values of $\rho$, i.e., for points of high cosine similarity. It can be shown that (cf. [4], p.132, and [26])
$$\mathrm{Bias}_\rho(\widehat{\rho}_{\mathrm{lin}}) = 0, \qquad \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin}}) = (1 + \rho^2)/k, \qquad (6)$$
$$\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{norm}}) = O(1/k^2), \qquad \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{norm}}) = (1 - \rho^2)^2/k + O(1/k^2), \qquad (7)$$
$$\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{MLE}}) = O(1/k^2), \qquad \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{MLE}}) = \frac{(1-\rho^2)^2}{1+\rho^2}\,\frac{1}{k} + O(1/k^2). \qquad (8)$$
While for $\rho = 0$, the (asymptotic) MSEs are the same, we note that the leading terms of the MSEs of $\widehat{\rho}_{\mathrm{norm}}$ and $\widehat{\rho}_{\mathrm{MLE}}$ decay at rate $\Theta((1-\rho)^2)$ as $\rho \to 1$, whereas the MSE of $\widehat{\rho}_{\mathrm{lin}}$ grows with $\rho$. The following table provides the asymptotic MSE ratios of $\widehat{\rho}_{\mathrm{lin}}$ and $\widehat{\rho}_{\mathrm{norm}}$ for selected values of $\rho$.
$\rho$                                                                 0.5    0.6    0.7    0.8    0.9    0.95    0.99
$\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{lin}})/\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{norm}})$    2.2    3.3    5.7    12.6   50     200     5000
In conclusion, if it is possible to pre-compute and store the norms of the data prior to dimensionality reduction, a simple form of normalization can yield important benefits with regard to the recovery of inner products and distances for pairs of points having high cosine similarity. The MLE can provide a further refinement, but the improvement over $\widehat{\rho}_{\mathrm{norm}}$ can be at most by a factor of 2.
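The MSE gap between $\widehat{\rho}_{\mathrm{lin}}$ and $\widehat{\rho}_{\mathrm{norm}}$ at high similarity is easy to reproduce by simulation (a quick sanity check under the bivariate normal model (1); the choice $\rho = 0.9$, $k = 100$ matches the table's regime, and the trial count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, k, trials = 0.9, 100, 2000

# Cholesky factor of the 2x2 correlation matrix in (1).
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

err_lin, err_norm = [], []
for _ in range(trials):
    zz = rng.normal(size=(k, 2)) @ L.T        # k i.i.d. pairs with correlation rho
    z, z2 = zz[:, 0], zz[:, 1]
    r_lin = z @ z2 / k                                            # estimator (2)
    r_norm = z @ z2 / (np.linalg.norm(z) * np.linalg.norm(z2))    # estimator (4)
    err_lin.append((r_lin - rho) ** 2)
    err_norm.append((r_norm - rho) ** 2)

mse_lin, mse_norm = float(np.mean(err_lin)), float(np.mean(err_norm))
print(mse_lin / mse_norm)   # roughly the asymptotic ratio from the table
```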
3 Estimation of cosine similarity based on quantized RPs
The following section contains our main results. After introducing preliminaries regarding quantization, we review previous approaches to the problem, before analyzing estimators following a different
paradigm. We conclude with a comparison and some recommendations about what to use in practice.
Quantization. After obtaining the projected data $\mathcal{Z}$, the next step is scalar quantization. Let $t = (t_1, \ldots, t_{K-1})$ with $0 = t_0 < t_1 < \ldots < t_{K-1} < t_K = +\infty$ be a set of thresholds inducing a partitioning of the positive real line into $K$ intervals $\{[t_{s-1}, t_s), s \in [K]\}$, and let $\mathcal{M} = \{\mu_1, \ldots, \mu_K\}$ be a set of codes with $\mu_s$ representing interval $[t_{s-1}, t_s)$, $s \in [K]$. Given $t$ and $\mathcal{M}$, the scalar quantizer (or quantization map) is defined by
$$Q : \mathbb{R} \to \mathcal{M}^{\pm} := -\mathcal{M} \cup \mathcal{M}, \quad z \mapsto Q(z) = \mathrm{sign}(z) \sum_{s=1}^{K} \mu_s \, I(|z| \in [t_{s-1}, t_s)). \qquad (9)$$
The projected and quantized data result as $\mathcal{Q} = \{q_i\}_{i=1}^n \subset (\mathcal{M}^{\pm})^k$, $q_i = \left( Q(z_{i(l)}) \right)_{l=1}^k$, where $z_{i(l)}$ denotes the $l$-th component of $z_i \in \mathcal{Z}$, $l \in [k]$, $i \in [n]$. The bit depth $b$ of the quantizer is given by $b := 1 + \log_2(K)$. For simplicity, we only consider the case where $b$ is an integer. The case $b = 1$ is well-studied [10, 27] and is hence disregarded in our analysis to keep our exposition compact.
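The map (9) is straightforward to implement; the sketch below (an illustration, with `searchsorted` doing the bin lookup) returns signed codes:

```python
import numpy as np

def quantize(z, t, mu):
    """Scalar quantizer (9): sign(z) * mu_s, where |z| lies in [t_{s-1}, t_s).

    t  -- increasing inner thresholds (t_1, ..., t_{K-1}); t_0 = 0, t_K = inf
    mu -- codes (mu_1, ..., mu_K)
    Note: z == 0 maps to 0 here, since sign(0) == 0.
    """
    z = np.asarray(z, dtype=float)
    s = np.searchsorted(np.asarray(t), np.abs(z), side="right")  # bin in 0..K-1
    return np.sign(z) * np.asarray(mu)[s]

q = quantize([0.3, -0.7, 1.2, -2.0], t=[1.0], mu=[0.5, 1.5])
print(q)   # codes 0.5, -0.5, 1.5, -1.5
```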
Bin-based vs. code-based approaches. Let $q = Q(z)$ and $q' = Q(z')$ be the points resulting from quantization of the generic pair $z, z'$ in the previous section. In this paper, we distinguish between two basic paradigms for estimating the cosine similarity of the underlying pair $x, x'$ from $q, q'$. The first paradigm, which we refer to as bin-based estimation, does not make use of the specific values of the codes $\mathcal{M}^{\pm}$, but only of the intervals ("bins") associated with each code. This is opposite to the second paradigm, referred to as code-based estimation, which only makes use of the values of the codes. As we elaborate below, an advantage of the bin-based approach is that working with intervals
reflects the process of quantization more faithfully and hence can be statistically more accurate; on the
other hand, a code-based approach tends to be more convenient from the point of view computation.
In this paper, we make a case for the code-based approach by showing that the loss in statistical
accuracy can be fairly minor in several regimes of practical interest.
Lloyd-Max (LM) quantizer. With $b$ respectively $K$ being fixed, one needs to choose the thresholds $t$ and the codes $\mathcal{M}$ of the quantizer (the second is crucial only for a code-based approach). In our setting, with $z_{i(l)} \sim N(0,1)$, $i \in [n]$, $l \in [k]$, which is inherited from the distribution of the entries of $A$, a standard choice is LM quantization [15] which minimizes the squared distortion error:
$$(t^*, \mu^*) = \operatorname*{argmin}_{t, \mu} \; \mathbb{E}_{g \sim N(0,1)}\left[ \{ g - Q(g; t, \mu) \}^2 \right]. \qquad (10)$$
Problem (10) can be solved by an iterative scheme that alternates between optimization of $t$ for fixed $\mu$ and vice versa. That scheme can be shown to deliver the global optimum [22]. In the absence of any prior information about the cosine similarities that we would like to recover, (10) appears as a reasonable default whose use for bin-based estimation has been justified in [28]. In the limit of cosine similarity $\rho \to 1$, it may seem more plausible to use (10) with $g$ replaced by its square, and taking the root of the resulting optimal thresholds and codes. However, it turns out that empirically this yields reduced performance more often than improvements, hence we stick to (10) in the sequel.
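The alternating scheme for (10) can be sketched as follows; by symmetry it suffices to run Lloyd's iterations on the half-normal distribution of $|g|$ (a minimal implementation using closed-form half-normal moments; the initialization and iteration count are arbitrary choices):

```python
import math

def lloyd_max_halfnormal(K, iters=200):
    """Thresholds t_1..t_{K-1} and codes mu_1..mu_K minimizing (10) for N(0,1),
    exploiting that the optimal quantizer is symmetric: design on |g|."""
    sq2 = math.sqrt(2.0)
    mu = [3.0 * (s + 0.5) / K for s in range(K)]            # initial codes
    t = []
    for _ in range(iters):
        # optimal thresholds for fixed codes: midpoints of adjacent codes
        t = [(mu[s] + mu[s + 1]) / 2.0 for s in range(K - 1)]
        edges = [0.0] + t + [float("inf")]
        # optimal codes for fixed thresholds: conditional means of |g|
        # (P(|g| < x) = erf(x / sqrt(2)) for g ~ N(0, 1))
        for s in range(K):
            a, b = edges[s], edges[s + 1]
            mass = math.erf(b / sq2) - math.erf(a / sq2)
            num = math.sqrt(2.0 / math.pi) * (
                math.exp(-a * a / 2.0)
                - (0.0 if b == float("inf") else math.exp(-b * b / 2.0)))
            mu[s] = num / mass
    return t, mu

t, mu = lloyd_max_halfnormal(2)   # 2-bit quantizer (K = 2 positive bins)
print(t, mu)  # roughly [0.9816] and [0.4528, 1.510] (known 4-level values)
```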
3.1 Bin-based approaches
MLE. Given a pair $q = (q_{(l)})_{l=1}^k$ and $q' = (q'_{(l)})_{l=1}^k$ of projected and quantized points, maximum likelihood estimation of the underlying cosine similarity $\rho$ is studied in depth in [28]. The associated likelihood function $L(r)$ is based on bivariate normal probabilities of the form $P_r(Z \in [t_{s-1}, t_s), Z' \in [t_{u-1}, t_u))$ and $P_r(Z \in [t_{s-1}, t_s), Z' \in -[t_{u-1}, t_u))$ with $(Z, Z')_r$ as in (1). It is shown in [28] that the MLE with $b \ge 2$ can be more efficient at the bit level than common single-bit quantization [10, 16]; the optimal choice of $b$ increases with $\rho$. While statistically optimal in the given setting, the MLE remains computationally cumbersome even when using the approximation in [28] because it requires cross-tabulation of the empirical frequencies corresponding to the bivariate normal probabilities above. This makes the use of the MLE unattractive particularly in situations in which it is not feasible to materialize all $O(n^2)$ pairwise similarities estimable from $(q_i, q_j)_{i<j}$ so that they would need to be re-evaluated frequently.
Collision-based estimator. The collision-based estimator proposed in [27] is a bin-based approach, as is the MLE. The similarity $\rho$ is estimated as
$$\widehat{\rho}_{\mathrm{col}} = \omega^{-1}\left( \sum_{l=1}^{k} I(q_{(l)} = q'_{(l)}) / k \right),$$
where the map $\omega : [0,1] \to [0,1]$ is defined by $r \mapsto \omega(r) = P_r(Q(Z) = Q(Z'))$, shown to be monotonically increasing in [27]. Compared to the MLE, $\widehat{\rho}_{\mathrm{col}}$ uses less information: it only counts "collisions", i.e., events $\{q_{(l)} = q'_{(l)}\}$. The loss in statistical efficiency is moderate for $b = 2$, in particular for $\rho$ close to 1. However, as $b$ increases that loss becomes more and more substantial; cf. Figure 1. On the positive side, $\widehat{\rho}_{\mathrm{col}}$ is convenient to compute given that the evaluation of the function $\omega^{-1}$ can be approximated by employing a look-up table after tabulating $\omega$ on a fine grid.
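A rough, self-contained sketch of $\widehat{\rho}_{\mathrm{col}}$ for $b = 2$: here $\omega$ is tabulated by Monte Carlo on a grid of $r$ values (the paper's tabulation would use exact bivariate normal probabilities instead; the threshold 0.9816 is the known 2-bit Lloyd-Max value, and the sample sizes are arbitrary):

```python
import numpy as np

def q2bit(z):
    # 2-bit quantizer: only the bin labels matter for counting collisions
    return np.sign(z) * (1 + (np.abs(z) > 0.9816))

rng = np.random.default_rng(3)

# Tabulate omega(r) = P_r(Q(Z) = Q(Z')) on a grid, with common random numbers
n = 200_000
g1, g2 = rng.normal(size=n), rng.normal(size=n)
rs = np.linspace(0.0, 0.999, 60)
omega = np.array([np.mean(q2bit(g1) == q2bit(r * g1 + np.sqrt(1 - r * r) * g2))
                  for r in rs])

# Estimate rho for one quantized pair by inverting omega; since omega is
# monotone, linear interpolation of the tabulated values suffices.
rho, k = 0.7, 5_000
z = rng.normal(size=k)
z2 = rho * z + np.sqrt(1 - rho * rho) * rng.normal(size=k)
frac = np.mean(q2bit(z) == q2bit(z2))
rho_col = float(np.interp(frac, omega, rs))
print(rho_col)   # close to the true value 0.7
```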
[Figure 1 omitted.] Figure 1: (L): Asymptotic MSEs [27] of $\widehat{\rho}_{\mathrm{col}}$ (to be divided by $k$) for $2 \le b \le 4$. (M,R): Asymptotic relative efficiencies $\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{col}})/\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{MLE}})$ for $b \in \{2, 4\}$, where $\widehat{\rho}_{\mathrm{MLE}}$ is the MLE in [28].
[Figure 2 omitted; the values from its middle panel are reproduced below.]

b    bound on $\mathrm{Bias}^2$
2    $1.2 \times 10^{-1}$
3    $7.2 \times 10^{-3}$
4    $4.5 \times 10^{-4}$
5    $2.8 \times 10^{-5}$
6    $1.8 \times 10^{-6}$

Figure 2: (L): $\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{lin}})$ and the bound of Theorem 1. (M): uniform upper bounds on $\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{lin}})$ obtained from Theorem 1 by setting $\rho = 1$. (R): $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin}})$ (to be divided by $k$).

3.2 Code-based approaches
In the code-based approach, we simply ignore the fact that the quantized data actually represent intervals and treat them precisely in the same way as full-precision data. Recovery of cosine similarity is performed by means of the estimators in Section 2 with $z, z'$ replaced by $q, q'$. Perhaps surprisingly, it turns out that depending on $\rho$ the loss of information incurred by this rather crude approach can be small already for bit depths between $b = 3$ and $b = 5$. That loss increases with $\rho$, with a fundamental gap compared to bin-based approaches and to the full precision case in the limit $\rho \to 1$.
Linear estimator. We first consider $\widehat{\rho}_{\mathrm{lin}} = \langle q, q' \rangle / k$. We note that $\widehat{\rho}_{\mathrm{lin}} = \widehat{\rho}_{\mathrm{lin},b}$ depends on $b$; $b = \infty$ corresponds to the estimator $\widehat{\rho}_{\mathrm{lin}} = \widehat{\rho}_{\mathrm{lin},\infty}$ in Section 2 denoted by the same symbol. A crucial difference between the code-based and the bin-based approaches discussed above is that the latter have vanishing asymptotic squared bias of the order $O(k^{-2})$ for any $b$ [27, 28]. This is not the case for code-based approaches whose bias needs to be analyzed carefully. The exact bias of $\widehat{\rho}_{\mathrm{lin}}$ in dependence of $\rho$ and $b$ can be evaluated exactly numerically. Numerical evaluations of bias and variance of estimators discussed in the present section only rely on the computation of coefficients $\lambda_{\alpha,\beta}$ defined by
$$\lambda_{\alpha,\beta} := \mathbb{E}_\rho[Q(Z)^\alpha Q(Z')^\beta] = \sum_{\tau,\tau' \in \{-1,1\}} \sum_{s,u=1}^{K} \tau^\alpha (\tau')^\beta \mu_s^\alpha \mu_u^\beta \, P_\rho\big( Z \in \tau(t_{s-1}, t_s),\; Z' \in \tau'(t_{u-1}, t_u) \big), \qquad (11)$$
where $\alpha, \beta$ are non-negative integers and $(Z, Z')$ are bivariate normal (1) with $r = \rho$. Specifically, we have $\mathbb{E}_\rho[\widehat{\rho}_{\mathrm{lin}}] = \lambda_{1,1}$, $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin}}) = (\lambda_{2,2} - \lambda_{1,1}^2)/k$. In addition to exact numerical evaluation, we provide a bound on the bias of $\widehat{\rho}_{\mathrm{lin}}$ which quantifies explicitly the rate of decay in dependence of $b$.

Theorem 1. We have $\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{lin}}) \le 4\rho^2 D_b^2$, where $D_b = \frac{3^{3/2}\, 2\pi}{12}\, 2^{-2b} \approx 2.72 \times 2^{-2b}$.
As shown in Figure 2 (L), the bound on the squared bias in Theorem 1 constitutes a reasonable proxy of the exact squared bias. The rate of decay is $O(2^{-4b})$. Moreover, it can be verified numerically that the variance in the full precision case upper bounds the variance for finite $b$, i.e., $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin},b}) \le \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin},\infty})$, $\rho \in [0, 1)$. Combining bias and variance, we may conclude that depending on $k$, the MSE of $\widehat{\rho}_{\mathrm{lin}}$ based on coarsely quantized data does not tend to be far from what is achieved with full precision data. The following two examples illustrate this point.
(i) Suppose $k = 100$ and $b = 3$. With full precision, we have $\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{lin},\infty}) = (1+\rho^2)/k \in [.01, .02]$. From Figure 2 (M) and the observation that $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin},3}) \le \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{lin},\infty})$, we find that the MSE can go up by at most $7.2 \times 10^{-3}$, i.e., it can at most double relative to the full precision case.
(ii) Suppose $k = 1000$ and $b = 4$. With the same reasoning as in (i), the MSE under quantization can increase at most by a factor of 1.45 as compared to full precision data.
Figure 3 shows that these numbers still tend to be conservative. In general, the difference of the MSEs for $b = \infty$ on the one hand and $b \in \{3, 4, 5\}$ on the other hand gets more pronounced for large values of the similarity $\rho$ and large values of $k$. This is attributed to the (squared) bias of $\widehat{\rho}_{\mathrm{lin}}$. In particular, it does not pay off to choose $k$ significantly larger than the order of the squared bias.
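The effect in examples (i)-(ii) can be checked directly by quantizing a pair of projected vectors and applying the same linear estimator. The quantizer below uses equiprobable bins of $|g|$ with conditional-mean codes estimated by Monte Carlo, an assumed stand-in for the Lloyd-Max design of (10), close enough to show that 4-bit quantization barely moves $\widehat{\rho}_{\mathrm{lin}}$:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, k, K = 0.5, 2_000, 8        # K positive bins -> b = 1 + log2(K) = 4 bits

# Surrogate quantizer design (an assumption, not the exact Lloyd-Max solution):
# equiprobable bins of |g| with conditional-mean codes from a large sample.
g = np.abs(rng.normal(size=1_000_000))
t = np.quantile(g, np.arange(1, K) / K)        # thresholds t_1 .. t_{K-1}
bins = np.searchsorted(t, g)
mu = np.array([g[bins == s].mean() for s in range(K)])

def Q(z):
    return np.sign(z) * mu[np.searchsorted(t, np.abs(z))]

z = rng.normal(size=k)
z2 = rho * z + np.sqrt(1 - rho * rho) * rng.normal(size=k)

rho_full = float(z @ z2) / k          # linear estimator on full-precision data
rho_quant = float(Q(z) @ Q(z2)) / k   # same estimator on 4-bit quantized data
print(rho_full, rho_quant)            # nearly identical at this bit depth
```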
[Figure 3 omitted.] Figure 3: MSEs of $\widehat{\rho}_{\mathrm{lin}}$ for various $k$ (from 20 to 10000) and $b \in \{3, 4, 5\}$ (dotted). The solid (red) lines indicate the corresponding MSEs for $\widehat{\rho}_{\mathrm{lin}}$ in the full-precision case ($b = \infty$).
Normalized estimator. In the full precision case we have seen that simple normalization of the form $\widehat{\rho}_{\mathrm{norm}} = \langle z, z' \rangle / (\|z\|_2 \|z'\|_2)$ can yield substantial benefits. Interestingly, it turns out that the counterpart $\widehat{\rho}_{\mathrm{norm}} = \langle q, q' \rangle / (\|q\|_2 \|q'\|_2)$ for quantized data is even more valuable as it helps reducing the bias of $\widehat{\rho}_{\mathrm{lin}} = \langle q, q' \rangle / k$. This effect can be seen easily in the limit $\rho \to 1$ in which case $\mathrm{Bias}_\rho(\widehat{\rho}_{\mathrm{norm}}) \to 0$ by construction. In general, bias and variance can be evaluated as follows.
Proposition 1. In terms of the coefficients $\lambda_{\alpha,\beta}$ defined in (11), as $k \to \infty$, we have
$$|\mathrm{Bias}_\rho[\widehat{\rho}_{\mathrm{norm}}]| = \left| \frac{\lambda_{1,1}}{\lambda_{2,0}} - \rho \right| + O(k^{-1}),$$
$$\mathrm{Var}(\widehat{\rho}_{\mathrm{norm}}) = \frac{1}{k}\left\{ \frac{\lambda_{2,2}}{\lambda_{2,0}^2} - \frac{2\lambda_{1,1}\lambda_{3,1}}{\lambda_{2,0}^3} + \frac{\lambda_{1,1}^2(\lambda_{4,0} + \lambda_{2,2})}{2\lambda_{2,0}^4} \right\} + O(k^{-2}).$$
Figure 4 (L,M) graphs the above two expressions. In particular, the plots highlight the reduction in bias compared to $\widehat{\rho}_{\mathrm{lin}}$ and the fact that the variance is decreasing in $\rho$ as for $b = \infty$. While Proposition 1 is asymptotic, we verify a tight agreement in simulations for reasonably small $k$ (cf. supplement).
[Figure 4 omitted.] Figure 4: (L): Asymptotic $\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{norm}})$ relative to $\mathrm{Bias}_\rho^2(\widehat{\rho}_{\mathrm{lin}})$. (M): $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{norm}})$ (asymptotic, to be divided by $k$). (R): MSEs of $\widehat{\rho}_{\mathrm{lin},4}$ vs. the MSEs of $\widehat{\rho}_{\mathrm{col},2}$ using twice the number of RPs (comparison at the bit level). The stars indicate the values of $\rho$ at which the MSEs of the two estimators are equal.
3.3 Coding-based estimation vs. Collision-based estimation
Both schemes are comparable in terms of simplicity, but at the level of statistical performance none of the two dominates the other. The collision-based approach behaves favorably in a high similarity regime as shows a comparison of $\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{col}})$ ($b = 2$) and $\mathrm{MSE}_\rho(\widehat{\rho}_{\mathrm{norm}})$ ($b = 4$) at the bit level (Figure 4 (R)): since $\widehat{\rho}_{\mathrm{col}}$ uses only two bits for each of the $k$ RPs, while $\widehat{\rho}_{\mathrm{norm}}$ uses twice as many bits, we have doubled the number of RPs for $\widehat{\rho}_{\mathrm{col}}$. The values of $\rho$ for which the curves of the two approaches (for fixed $k$) intersect are indicated by stars. As $k$ decreases from $10^4$ to $10^2$, these values increase from about $\rho = 0.55$ to $\rho = 0.95$. In conclusion, $\widehat{\rho}_{\mathrm{col}}$ is preferable in applications in which high similarities prevail, e.g., in duplicate detection. On the other hand, for generic high-dimensional data, one would rather not expect $\rho$ to take high values given that two points drawn uniformly at random from the sphere are close to orthogonal with high probability.
Figure 1 (L) shows that as $b$ is raised, $\widehat{\rho}_{\mathrm{col}}$ requires $\rho$ to be increasingly closer to one to achieve lower MSE. By contrast, increasing $b$ for the coding-based schemes yields improvements essentially for the whole range of $\rho$. An interesting phenomenon occurs in the limit $\rho \to 1$. It turns out that the rate of decay of $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{norm}})$ is considerably slower than the rate of decay of $\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{col}})$.

Theorem 2. For any finite $b$, we have
$$\mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{norm}}) = \Theta((1-\rho)^{1/2}), \qquad \mathrm{Var}_\rho(\widehat{\rho}_{\mathrm{col}}) = \Theta((1-\rho)^{3/2}) \quad \text{as } \rho \to 1.$$

The rate $\Theta((1-\rho)^{3/2})$ is the same as for the MLE [28], which is slower than the rate $\Theta((1-\rho)^2)$ in the full precision case (cf. Section 2). We conjecture that the rate $\Theta((1-\rho)^{1/2})$ is intrinsic to code-based estimation as this rate is also obtained when computing the full precision MLE (3) with quantized data (i.e., $z, z'$ gets replaced by $q, q'$).
3.4 Quantization of norms
Let us recall that according to our basic setup in §1, we have assumed so far that it is possible to
compute the norms ψ_i = ‖x_i‖_2, i ∈ [n], of the original data prior to projection and quantization, and
store them in full precision to approximately recover inner products and squared distances via
⟨x_i, x_j⟩ ≈ ψ_i ψ_j ρ̂_ij,   ‖x_i - x_j‖_2^2 ≈ ψ_i^2 + ψ_j^2 - 2 ψ_i ψ_j ρ̂_ij,
where ρ̂_ij is an estimate of the cosine similarity of x_i and x_j. Depending on the setting, it may be
required to quantize the {ψ_i}_{i=1}^n as well. It turns out that the MSE for estimating distances can be
tightly bounded in terms of the MSE for estimating cosine similarities and max_{1≤i≤n} |ψ̂_i - ψ_i|, where
{ψ̂_i}_{i=1}^n denote the quantized versions of {ψ_i}_{i=1}^n; the precise bound is stated in the supplement.
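As a small sanity check of the recovery step above (a sketch we add here, not code from the paper): with the exact cosine similarity plugged in for ρ̂_ij, the two identities recover the inner product and squared distance exactly.

```python
import numpy as np

def recover(psi_i, psi_j, rho_hat):
    """Recover inner product and squared distance from the stored norms
    psi = ||x||_2 and an estimated cosine similarity rho_hat."""
    inner = psi_i * psi_j * rho_hat
    sq_dist = psi_i ** 2 + psi_j ** 2 - 2.0 * psi_i * psi_j * rho_hat
    return inner, sq_dist

x = np.array([3.0, 4.0, 0.0])
y = np.array([0.0, 4.0, 3.0])
psi_x, psi_y = np.linalg.norm(x), np.linalg.norm(y)
rho = x.dot(y) / (psi_x * psi_y)   # exact cosine, as a stand-in for rho_hat
inner, sq_dist = recover(psi_x, psi_y, rho)
```

In practice ρ̂_ij comes from the quantized projections, and its estimation error propagates to both quantities; the bound in the supplement controls this propagation.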
4 Empirical results: linear classification using quantized RPs
One traditional application of RPs is dimension reduction in linear regression or classification with
high-dimensional predictors [14, 30]. The results of §3.2 suggest that as long as the number of RPs
k is no more than a few thousand, subsequent scalar quantization to four bits is not expected to
have much of a negative effect relative to using full precision data. In this section, we verify this
hypothesis for four high-dimensional data sets from the UCI repository: arcene (d = 10^4), Dexter
(d = 2 × 10^4), farm (d = 5.5 × 10^4) and PEMS (d = 1.4 × 10^5).
Setup. All data points are scaled to unit Euclidean norm before dimension reduction and scalar
quantization based on the Lloyd-Max quantizer (10). The number of RPs k is varied according to
{2^6, 2^7, . . . , 2^12}. For each of these values of k, we consider 20 independent realizations of the
random projection matrix A. Given projected and quantized data {q_1, . . . , q_n}, we estimate the
underlying cosine similarities ρ_ij as ρ̂_ij = ρ̂(q_i, q_j), i, j ∈ [n], where ρ̂(q_i, q_j) is a placeholder
for either the collision-based estimator ρ̂_coll based on b = 2 bits or the normalized estimator ρ̂_norm
for b ∈ {1, 2, 4, ∞} using data {q_i(l), q_j(l)}_{l=1}^k; one-bit quantization (b = 1) is here included as a
reference. The {ρ̂_ij}_{1≤i,j≤n} are then used as a kernel matrix fed into LIBSVM [9] to train a binary
classifier. Prediction on test sets is performed accordingly. LIBSVM is run with 30 different values of
its tuning parameter C ranging from 10^{-3} to 10^4.
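The projection-then-quantize pipeline above can be sketched end to end. The snippet below is our own illustration, not the experiment code: it replaces the fitted Lloyd-Max quantizer with four fixed levels that approximate the 2-bit Lloyd-Max levels for N(0, 1) (±0.4528 and ±1.51 are approximate values we assume here), projects a pair of unit vectors with cosine 0.8, and applies the normalized estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pair of unit vectors in R^d with prescribed cosine similarity 0.8.
d = 1000
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
w = rng.standard_normal(d)
w -= w.dot(x) * x                 # orthogonalize against x
w /= np.linalg.norm(w)
y = 0.8 * x + 0.6 * w             # unit norm, <x, y> = 0.8

# k Gaussian random projections.
k = 4096
A = rng.standard_normal((k, d))
zx, zy = A @ x, A @ y

# Stand-in 2-bit scalar quantizer: snap each coordinate to the nearest
# of four fixed levels (approximating Lloyd-Max for a standard Gaussian).
levels = np.array([-1.51, -0.4528, 0.4528, 1.51])

def quantize(z):
    return levels[np.argmin(np.abs(z[:, None] - levels[None, :]), axis=1)]

qx, qy = quantize(zx), quantize(zy)

# Normalized estimator of the cosine similarity.
rho_norm = qx.dot(qy) / (np.linalg.norm(qx) * np.linalg.norm(qy))
```

With k in the low thousands and b = 2, rho_norm already lands close to the true cosine 0.8, in line with the claim that coarse quantization costs little for moderate ρ.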
Results. A subset of the results is depicted in Figure 5, which is composed of three columns (one for
each type of plot) and four rows (one for each data set). All results are averages over 20 independent
sets of random projections. The plots in the left column show the minimum test errors over all 30
choices of the tuning parameter C under consideration in dependency of the number of RPs k. The
plots in the middle column show the test errors in dependency of C for a selected value of k (the full
set of plots can be found in the supplement). The plots in the right column provide a comparison of
the minimum (w.r.t. C) test errors of ρ̂_coll,2 and ρ̂_norm,4 at the bit level, i.e., with k doubled for ρ̂_coll,2.
In all plots, classification performance improves as b increases. What is more notable though is that
the gap between b = 4 and b = ∞ is indeed minor, as anticipated. Regarding the performance of
ρ̂_coll,2 and ρ̂_norm,4, the latter consistently achieves better performance.
5 Conclusion
In this paper, we have presented theoretical and empirical evidence that it is possible to achieve
additional data compression in the use of random projections by means of coarse scalar quantization.
Figure 5: Results of the classification experiments. Each row corresponds to one data set. (L):
Accuracy on the test set (optimized over C) in dependence of the number of RPs k (log2 scale). (M):
Accuracy on the test set for a selected value of k in dependence of log10(C). (R): Comparison of the
test accuracies when using the estimators ρ̂_norm,4 respectively ρ̂_coll,2 with twice the number of RPs.
The loss of information incurred at this step tends to be mild even with the naive approach in which
quantized data are treated in the same way as their full precision counterparts. An exception only
arises for cosine similarities close to 1 (Theorem 2). We have also shown that the simple form of
normalization employed in the construction of the estimator ρ̂_norm can be extremely beneficial, even
more so for coarsely quantized data because of a crucial bias reduction.
Regarding future work, it is worthwhile to consider the extension to the case in which the random
projections are not Gaussian but arise from one of the various structured Johnson-Lindenstrauss
transforms, e.g., those in [2, 3, 23]. A second direction of interest is to analyze the optimal trade-off
between the number of RPs k and the bit depth b in dependence of the similarity ρ; in the present
work, the choice of b has been driven with the goal of roughly matching the full precision case.
Acknowledgments
The work was partially supported by NSF-Bigdata-1419210, NSF-III-1360971. Ping Li also thanks
Michael Mitzenmacher for helpful discussions.
References
[1] D. Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal
of Computer and System Sciences, 66:671–687, 2003.
[2] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform.
In Proceedings of the Symposium on Theory of Computing (STOC), pages 557–563, 2006.
[3] N. Ailon and E. Liberty. Almost optimal unrestricted fast Johnson-Lindenstrauss transform. ACM
Transactions on Algorithms, 9:21, 2013.
[4] T. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, 2003.
[5] E. Bingham and H. Mannila. Random projection in dimensionality reduction: applications to image and
text data. In Conference on Knowledge Discovery and Data Mining (KDD), pages 245–250, 2001.
[6] P. Boufounos and R. Baraniuk. 1-bit compressive sensing. In Information Science and Systems, 2008.
[7] C. Boutsidis, A. Zouzias, and P. Drineas. Random Projections for k-means Clustering. In Advances in
Neural Information Processing Systems (NIPS), pages 298–306, 2010.
[8] E. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding
strategies? IEEE Transactions on Information Theory, 52:5406–5425, 2006.
[9] C-C. Chang and C-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent
Systems and Technology, 2:27:1–27:27, 2011. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[10] M. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the Symposium
on Theory of Computing (STOC), pages 380–388, 2002.
[11] S. Dasgupta. Learning mixtures of Gaussians. In Symposium on Foundations of Computer Science (FOCS),
pages 634–644, 1999.
[12] S. Dasgupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and
Algorithms, 22:60–65, 2003.
[13] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable
Distributions. In Symposium on Computational Geometry (SCG), pages 253–262, 2004.
[14] D. Fradkin and D. Madigan. Experiments with random projections for machine learning. In Conference on
Knowledge Discovery and Data Mining (KDD), pages 517–522, 2003.
[15] A. Gersho and R. Gray. Vector Quantization and Signal Compression. Springer, 1991.
[16] M. Goemans and D. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability
Problems Using Semidefinite Programming. Journal of the ACM, 42:1115–1145, 1995.
[17] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality.
In Proceedings of the Symposium on Theory of Computing (STOC), pages 604–613, 1998.
[18] J. Matousek. On variants of the Johnson-Lindenstrauss lemma. Random Structures and Algorithms,
33:142–156, 2008.
[19] L. Jacques. A Quantized Johnson-Lindenstrauss Lemma: The Finding of Buffon's needle. IEEE Transactions on Information Theory, 61:5012–5027, 2015.
[20] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary
Mathematics, pages 189–206, 1984.
[21] K. Kenthapadi, A. Korolova, I. Mironov, and N. Mishra. Privacy via the Johnson-Lindenstrauss Transform.
Journal of Privacy and Confidentiality, 5, 2013.
[22] J. Kieffer. Uniqueness of locally optimal quantizer for log-concave density and convex error weighting
function. IEEE Transactions on Information Theory, 29:42–47, 1983.
[23] F. Krahmer and R. Ward. New and improved Johnson-Lindenstrauss embeddings via the Restricted
Isometry Property. SIAM Journal on Mathematical Analysis, 43:1269–1281, 2011.
[24] J. Laska and R. Baraniuk. Regime change: Bit-depth versus measurement-rate in compressive sensing.
IEEE Transactions on Signal Processing, 60:3496–3505, 2012.
[25] M. Li, S. Rane, and P. Boufounos. Quantized embeddings of scale-invariant image features for mobile
augmented reality. In International Workshop on Multimedia Signal Processing (MMSP), pages 1–6, 2012.
[26] P. Li, T. Hastie, and K. Church. Improving Random Projections Using Marginal Information. In Annual
Conference on Learning Theory (COLT), pages 635–649, 2006.
[27] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for Random Projections. In Proceedings of the
International Conference on Machine Learning (ICML), pages 676–678, 2014.
[28] P. Li, M. Mitzenmacher, and M. Slawski. Quantized Random Projections and Non-Linear Estimation of
Cosine Similarity. In Advances in Neural Information Processing Systems (NIPS), pages 2756–2764, 2016.
[29] M. Mahoney. Randomized Algorithms for Matrices and Data. Foundations and Trends in Machine
Learning, 3:123–224, 2011.
[30] O. Maillard and R. Munos. Compressed least-squares regression. In Advances in Neural Information
Processing Systems (NIPS), pages 1213–1221, 2009.
[31] S. Rane and P. Boufounos. Privacy-preserving nearest neighbor methods: Comparing signals without
revealing them. IEEE Signal Processing Magazine, 30:18–28, 2013.
[32] S. Rane, P. Boufounos, and A. Vetro. Quantized embeddings: An efficient and universal nearest neighbor
method for cloud-based image retrieval. In SPIE Optical Engineering and Applications, pages 885609–885609.
International Society for Optics and Photonics, 2013.
[33] S. Vempala. The Random Projection Method. American Mathematical Society, 2005.
Discovering Potential Correlations via
Hypercontractivity
Hyeji Kim1, Weihao Gao1, Sreeram Kannan2, Sewoong Oh1, Pramod Viswanath1
University of Illinois at Urbana-Champaign1 and University of Washington2
{hyejikim,wgao9}@illinois.edu,[email protected],{swoh,pramodv}@illinois.edu
Abstract
Discovering a correlation from one variable to another variable is of fundamental
scientific and practical interest. While existing correlation measures are suitable
for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a
measure of potential correlation to satisfy; (ii) we show that the rate of information
bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms;
(iii) we provide a novel estimator to estimate the hypercontractivity coefficient
from samples; and (iv) we provide numerical experiments demonstrating that this
proposed estimator discovers potential correlations among various indicators of
WHO datasets, is robust in discovering gene interactions from gene expression
time series data, and is statistically more powerful than the estimators for other
correlation measures in binary hypothesis testing of canonical examples of potential
correlations.
1 Introduction
Measuring the strength of an association between two random variables is a fundamental topic
of broad scientific interest. Pearson's correlation coefficient [1] dates from over a century ago
and has been generalized seven decades ago as maximal correlation (mCor) to handle nonlinear
dependencies [2–4]. Novel correlation measures to identify different kinds of associations continue
to be proposed in the literature; these include maximal information coefficient (MIC) [5] and distance
correlation (dCor) [6]. Despite the differences, a common theme of measurement of the empirical
average dependence unites the different dependence measures. Alternatively, these are factual
measures of dependence and their relevance is restricted when we seek a potential dependence of
one random variable on another. For instance, consider a hypothetical city with very few smokers.
A standard measure of correlation on the historical data in this town on smoking and lung cancer
will fail to discover the fact that smoking causes cancer, since the average correlation is very small.
On the other hand, clearly, there is a potential correlation between smoking and lung cancer; indeed
applications of this nature abound in several scenarios in modern data science, including a recent one
on genetic pathway discovery [7].
Discovery of a potential correlation naturally leads one to ask for a measure of potential correlation
that is statistically well-founded and addresses practical needs. Such is the focus of this work, where
our proposed measure of potential correlation is based on a novel interpretation of the Information
Bottleneck (IB) principle [8]. The IB principle has been used to address one of the fundamental tasks
in supervised learning: given samples {X_i, Y_i}_{i=1}^n, how do we find a compact summary of a variable
† Coordinated Science Lab and Department of Electrical and Computer Engineering
‡ Department of Electrical Engineering
§ Coordinated Science Lab and Department of Industrial and Enterprise Systems Engineering
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
X that is most informative in explaining another variable Y . The output of the IB principle is a
compact summary of X that is most relevant to Y and has a wide range of applications [9, 10].
We use this IB principle to create a measure of correlation based on the following intuition: if X is
(potentially) correlated with Y , then a relatively compact summary of X can still be very informative
about Y . In other words, the maximal ratio of how informative a summary can be in explaining Y
to how compact a summary is with respect to X is, conceptually speaking, an indicator of potential
correlation from X to Y . Quantifying the compactness by I(U ; X) and the information by I(U ; Y )
we consider the rate of information bottleneck as a measure of potential correlation:
s(X; Y) := sup_{U - X - Y} I(U; Y) / I(U; X),   (1)
where U - X - Y forms a Markov chain and the supremum is over all summaries U of X. This
intuition is made precise in Section 2, where we formally define a natural notion of potential
correlation (Axiom 6), and show that the rate of information bottleneck s(X; Y ) captures this
potential correlation (Theorem 1) while other standard measures of correlation fail (Theorem 2).
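For discrete variables, the supremum in (1) can be lower-bounded by brute force, which makes the definition concrete. The grid search below is our own illustration, not the paper's estimator (which targets continuous variables via an alternate KL-ratio characterization and importance sampling); the binary-summary restriction and the grid resolution are our own choices.

```python
import numpy as np

def mutual_info(pab):
    """Mutual information (in nats) of a joint pmf given as a 2-D array."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float(np.sum(pab[mask] * np.log(pab[mask] / (pa @ pb)[mask])))

def s_lower_bound(pxy, grid=np.linspace(0.02, 0.98, 25)):
    """Grid-search lower bound on s(X;Y) for binary X: sweep binary
    summaries U with P(U=1|X=0)=a, P(U=1|X=1)=b and keep the best
    ratio I(U;Y)/I(U;X)."""
    px = pxy.sum(axis=1)
    best = 0.0
    for a in grid:
        for b in grid:
            pu_given_x = np.array([[1 - a, 1 - b], [a, b]])  # rows index u
            pux = pu_given_x * px[None, :]   # joint pmf of (U, X)
            puy = pu_given_x @ pxy           # U - X - Y Markov chain
            iux = mutual_info(pux)
            if iux > 1e-12:
                best = max(best, mutual_info(puy) / iux)
    return best

dsbs = np.array([[0.45, 0.05], [0.05, 0.45]])  # binary symmetric, flip prob 0.1
indep = np.outer([0.5, 0.5], [0.3, 0.7])
v_corr = s_lower_bound(dsbs)   # approaches (1 - 2*0.1)^2 = 0.64 from below
v_ind = s_lower_bound(indep)   # 0: a summary of X carries no information about Y
```

For the doubly symmetric binary source the supremum equals (1 - 2ε)^2; the grid search gets close but stays below it, since the optimum is approached only in the limit of nearly independent summaries U.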
This ratio has only recently been identified as the hypercontractivity coefficient [11]. Hypercontractivity has a distinguished and central role in a large number of technical arenas including quantum
physics [12, 13], theoretical computer science [14, 15], mathematics [16–18] and probability theory
[19, 20]. In this paper, we provide a novel interpretation to the hypercontractivity coefficient as
a measure of potential correlation by demonstrating that it satisfies a natural set of axioms such a
measure is expected to obey.
For practical use in discovering correlations, the standard correlation coefficients are equipped
with corresponding natural sample-based estimators. However, for hypercontractivity coefficient,
estimating it from samples is widely acknowledged to be challenging, especially for continuous
random variables [21–23]. There is no existing algorithm to estimate the hypercontractivity coefficient
in general [21], and there is no existing algorithm for solving IB from samples either [22, 23]. We
provide a novel estimator of the hypercontractivity coefficient, the first of its kind, by bringing
together the recent theoretical discoveries in [11, 24] of an alternate definition of the hypercontractivity
coefficient as a ratio of Kullback-Leibler divergences defined in (5), and recent advances in joint
optimization (the max step in Equation 1) and estimating information measures from samples using
importance sampling [25].
Our main contributions are the following:
• We postulate a set of natural axioms that a measure of potential correlation from X to Y
should satisfy (Section 2).
• We show that √s(X; Y), our proposed measure of potential correlation, satisfies all the
axioms we postulate. In comparison, we prove that existing standard measures of correlation
not only fail to satisfy the proposed axioms, but also fail to capture canonical potential
correlations captured by √s(X; Y) (Section 2). Another natural candidate is mutual
information, but it is not clear how to interpret the value of mutual information as it is
unnormalized, unlike all other measures of correlation which are between zero and one.
• Computation of the hypercontractivity coefficient from samples is known to be a challenging
open problem. We introduce a novel estimator to compute hypercontractivity coefficient
from i.i.d. samples in a statistically consistent manner for continuous random variables,
using ideas from importance sampling and kernel density estimation (Section 3).
• In a series of synthetic experiments, we show empirically that our estimator for the hypercontractivity coefficient is statistically more powerful in discovering a potential correlation
than existing correlation estimators; a larger power means a larger successful detection rate
for a fixed false alarm rate (Section 4.1).
• We show applications of our estimator of hypercontractivity coefficient in two important
datasets: In Section 4.2, we demonstrate that it discovers hidden potential correlations among
various national indicators in WHO datasets, including how aid is potentially correlated
with the income growth. In Section 4.3, we consider the following gene pathway recovery
problem: we are given samples of four gene expressions time series. Assuming we know
that gene A causes B, that B causes C, and that C causes D, the problem is to discover that
these causations occur in the sequential order: A to B, and then B to C, and then C to D.
We show empirically that the estimator of the hypercontractivity coefficient recovers this
order accurately from a vastly smaller number of samples compared to other state-of-the art
causal influence estimators.
2 Axiomatic approach to measure potential correlations
We propose a set of axioms that a measure of potential correlation should satisfy and propose a new
measure of correlation that satisfies all the proposed axioms.
Axioms for potential correlation. We postulate that a measure of potential correlation ρ*: 𝒳 × 𝒴 → [0, 1] between two random variables X ∈ 𝒳 and Y ∈ 𝒴 should satisfy:
1. ρ*(X, Y) is defined for any pair of non-constant random variables X and Y.
2. 0 ≤ ρ*(X, Y) ≤ 1.
3. ρ*(X, Y) = 0 iff X and Y are statistically independent.
4. For bijective Borel-measurable functions f, g: R → R, ρ*(X, Y) = ρ*(f(X), g(Y)).
5. If (X, Y) ∼ N(μ, Σ), then ρ*(X, Y) = |ρ|, where ρ is the Pearson correlation coefficient.
6. ρ*(X, Y) = 1 if there exists a subset 𝒳_r ⊆ 𝒳 such that for a pair of continuous random
variables (X, Y) ∈ 𝒳_r × 𝒴, Y = f(X) for a Borel-measurable and non-constant continuous
function f.
Figure 1: A measure of potential correlation should capture the rare correlation in X ∈ [0, 1] in these
examples, which satisfy Axiom 6 for a linear and a quadratic function, respectively.
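The situation in Figure 1 is easy to reproduce numerically. The construction below is our own toy example in the spirit of the figure, not the paper's data: the relation is deterministic on a rare region and pure noise elsewhere, so the Pearson correlation stays small even though Axiom 6 asks a potential-correlation measure to assign the maximal value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)
# Rare deterministic region: Y = X whenever X < 0.05; independent noise otherwise.
y = np.where(x < 0.05, x, rng.uniform(0.0, 1.0, size=n))
pearson = np.corrcoef(x, y)[0, 1]   # small despite the deterministic region
```

On this sample the Pearson correlation stays well below 0.2, illustrating why an average-dependence measure misses a potential correlation that is fully deterministic on a rare regime.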
Axioms 1-5 are identical to a subset of the celebrated axioms of Rényi in [4], which ensure that
the measure is properly normalized and invariant under bijective transformations, and recovers the
Pearson correlation for jointly Gaussian random variables. Rényi's original axioms for a measure of
correlation in [4] included Axioms 1-5 and also that the measure ρ* of correlation should satisfy
6′. ρ*(X, Y) = 1 if for Borel-measurable functions f or g, Y = f(X) or X = g(Y).
7′. ρ*(X; Y) = ρ*(Y; X).
The Pearson correlation violates a subset (3, 4, and 6′) of Rényi's axioms. Together with recent
empirical successes in multimodal deep learning (e.g. [26–28]), Rényi's axiomatic approach has been
a major justification of Hirschfeld-Gebelein-Rényi (HGR) maximum correlation coefficient defined as
mCor(X, Y) := sup_{f,g} E[f(X)g(Y)], which satisfies all Rényi's axioms [2]. Here, the supremum
is over all measurable functions with E[f(X)] = E[g(Y)] = 0 and E[f^2(X)] = E[g^2(Y)] = 1.
However, maximum correlation is not the only measure satisfying all of Rényi's axioms, as we show
in the following.
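For finite alphabets, the HGR maximal correlation defined above admits a classical linear-algebraic computation (a standard fact, not stated in this excerpt): it equals the second-largest singular value of the matrix Q with entries Q[x, y] = p(x, y)/√(p_X(x) p_Y(y)). A minimal sketch:

```python
import numpy as np

def mcor(pxy):
    """HGR maximal correlation of a finite joint pmf: the second-largest
    singular value of Q[x, y] = p(x, y) / sqrt(p_X(x) * p_Y(y))."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    q = pxy / np.sqrt(np.outer(px, py))
    return float(np.linalg.svd(q, compute_uv=False)[1])

# Binary symmetric pair with flip probability 0.1: mCor = 1 - 2*0.1 = 0.8.
pxy = np.array([[0.45, 0.05], [0.05, 0.45]])
```

An independent pair gives a rank-one Q, hence mCor = 0, consistent with Axiom 3.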
Proposition 1. For any function F: [0, 1] × [0, 1] → [0, 1] satisfying F(x, y) = F(y, x), F(x, x) =
x, and F(x, y) = 0 only if xy = 0, the symmetrized F(√s(X; Y), √s(Y; X)) satisfies all Rényi's
axioms.
This follows from the fact that the hypercontractivity coefficient √s(X; Y) satisfies all but the
symmetry in Axiom 7 (Theorem 1), and it follows that a symmetrized version satisfies all axioms,
e.g. (1/2)(√s(X; Y) + √s(Y; X)) and (s(X; Y)s(Y; X))^{1/4}. A formal proof is provided in
Appendix A.1.
From the original Rényi's axioms, for a potential correlation measure, we remove Axiom 7′ that ensures
symmetry, as directionality is fundamental in measuring the potential correlation from X to Y. We
further replace Axiom 6′ by Axiom 6, as a variable X has a full potential to be correlated with Y
if there exists a domain 𝒳_r such that X and Y are deterministically dependent and non-degenerate
(i.e. not a constant function), as illustrated in Figure 1 for a linear function and a quadratic function.
The hypercontractivity coefficient satisfies all axioms. We propose the hypercontractivity coefficient s(X; Y), first introduced in [19], as the measure of potential correlation satisfying all Axioms
1-6. Intuitively, s(X; Y) measures how much potential correlation X has with Y. For example,
if X and Y are independent, then s(X; Y) = 0 as X has no correlation with Y (Axiom 3). By
the data processing inequality, it follows that it is a measure between zero and one (Axiom 2) and also
invariant under bijective transformations (Axiom 4). For jointly Gaussian variables X and Y with
the Pearson correlation ρ, we can show that s(X; Y) = s(Y; X) = ρ^2. Hence, the square-root of
s(X; Y) satisfies Axiom 5. In fact, √s(X; Y) satisfies all desired axioms for potential correlation,
and we make this precise in the following theorem, whose proof is provided in Appendix A.2.
Theorem 1. The hypercontractivity coefficient √s(X; Y) satisfies Axioms 1-6.
In particular, the hypercontractivity coefficient satisfies Axiom 6 for potential correlation, unlike
other measures of correlation (see Theorem 2 for examples). If there is a potential for X in a possibly
rare regime in X to be fully correlated with Y such that Y = f (X), then the hypercontractivity
coefficient is maximum: s(X; Y ) = 1.
However, just as HGR correlation is not the only one satisfying Rényi's original axioms, the hypercontractivity coefficient is not the only one satisfying our axioms. There is a family of measures
known as the hypercontractivity ribbon that includes the hypercontractivity coefficient as a special case,
all of which satisfy the axioms. However, a few properties of the hypercontractivity coefficient make
it more attractive for practical use; it can be efficiently estimated from samples (see Section 3) and
is a natural extension of the popular HGR maximal correlation coefficient. Axiom 5 is restricted to
univariate X and Y, and it can be naturally extended to multivariate variables where √s(X; Y) is a
multivariate measure that satisfies all the axioms. For the discussion of hypercontractivity ribbon,
connection between hypercontractivity coefficient and HGR maximal correlation, and extension of
axioms to multivariate variables, see the journal version [29].
Besides standard correlation measures, another measure widely used to quantify the strength of
dependence is mutual information. We can show that mutual information satisfies Axiom 6 if
we replace the value 1 by ∞. However, there are two key problems: (a) Practically, mutual information is
unnormalized, i.e., I(X; Y) ∈ [0, ∞). Hence, it provides no absolute indication of the strength of the
dependence. (b) Mathematically, we are looking for a quantity that tensorizes, i.e., doesn't change
when there are many i.i.d. copies of the same pair of random variables. The hypercontractivity coefficient
tensorizes, i.e.,

s(X1, ..., Xn; Y1, ..., Yn) = s(X1; Y1), for i.i.d. (Xi, Yi), i = 1, ..., n.

On the other hand, mutual information is additive, i.e.,

I(X1, ..., Xn; Y1, ..., Yn) = n I(X1; Y1), for i.i.d. (Xi, Yi), i = 1, ..., n.
Tensorizing quantities capture the strongest relationship among independent copies while additive
quantities capture the sum. For instance, mutual information could be large because a small amount
of information accumulates over many of the independent components of X and Y (when X and
Y are high dimensional) while tensorizing quantities would rule out this scenario, where there is
no strong dependence. When the components are not independent, hypercontractivity indeed pools
information from different components to find the strongest direction of dependence, which is a
desirable property.
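The contrast between tensorization and additivity can be made concrete for jointly Gaussian pairs, using the closed forms s(X; Y) = ρ² (stated above) and I(X; Y) = −(1/2) log(1 − ρ²), a standard Gaussian identity. A small sketch (function names are ours, assuming these closed forms):

```python
import math

def gaussian_mi(rho):
    """I(X; Y) in nats for jointly Gaussian (X, Y) with Pearson correlation rho."""
    return -0.5 * math.log(1.0 - rho ** 2)

def gaussian_s(rho):
    """Hypercontractivity coefficient s(X; Y) = rho^2 for jointly Gaussian (X, Y)."""
    return rho ** 2

rho, n = 0.8, 10
mi_of_n_copies = n * gaussian_mi(rho)  # additivity: grows linearly with n copies
s_of_n_copies = gaussian_s(rho)        # tensorization: unchanged by the number of copies
```

Even a modest per-coordinate correlation thus inflates the total mutual information linearly in the dimension, while the hypercontractivity coefficient stays fixed, which is exactly the distinction drawn above.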
One natural way to normalize mutual information is by the log of the cardinality of the input/output
alphabets [30]. One can interpret the popular correlation measure MIC as a similar effort to normalize
mutual information; MIC is one of our baselines.
Standard correlation coefficients violate the Axioms. We next analyze existing measures of
correlation under the scenario with potential correlation (Axiom 6), where we find that none of the
existing correlation measures satisfy Axiom 6. Suppose X and Y are independent (i.e. no correlation)
in a subset Xd of the domain X, and allow X and Y to be arbitrarily correlated in the rest Xr of
the domain, such that X = Xd ∪ Xr. We further assume that the independent part is dominant and
the correlated part is rare; let α := P(X ∈ Xr) and we consider the scenario when α is small. A
good measure of potential correlation is expected to capture the correlation in Xr even if it is rare
(i.e., α is small). To make this task more challenging, we assume that the conditional distribution of
Y | {X ∈ Xr} is the same as Y | {X ∉ Xr}. Figure 1 (of this section) illustrates sampled points for
two examples from such a scenario and more examples are in Figure 5 in Appendix B. Our main result
is the analysis of HGR maximal correlation (mCor) [2], distance correlation (dCor) [6], and the maximal
information coefficient (MIC) [5], which shows that these measures vanish with α even if the
dependence in the rare regime is very high. Suppose Y | (X ∈ Xr) = f(X); then all three correlation
coefficients vanish as α gets small. This in particular violates Axiom 6. The reason is that
standard correlation coefficients measure the average correlation whereas the hypercontractivity
coefficient measures the potential correlation. The experimental comparisons on the power of these
measures confirm our analytical predictions in Figure 2. The formal statement is below and the proof
is provided in Appendix A.3.
Theorem 2. Consider a pair of continuous random variables (X, Y) ∈ X × Y. Suppose X is
partitioned as Xr ∪ Xd = X such that P_{Y|X}(S | X ∈ Xr) = P_{Y|X}(S | X ∈ Xd) for all S ⊆ Y, and Y
is independent of X for X ∈ Xd. Let α = P{X ∈ Xr}. The HGR maximal correlation coefficient is

mCor(X, Y) = √α · mCor(Xr, Y),   (2)

the distance correlation coefficient is

dCor(X, Y) = α · dCor(Xr, Y),   (3)

and the maximal information coefficient is upper bounded by

MIC(X, Y) ≤ α · MIC(Xr, Y),   (4)

where Xr is the random variable X conditioned on the rare domain X ∈ Xr.
3 Estimator of the hypercontractivity coefficient from samples

In this section, we present an algorithm¹ to compute the hypercontractivity coefficient s(X; Y) from
i.i.d. samples {(Xi, Yi)}_{i=1}^n. The computation of the hypercontractivity coefficient from samples is
known to be challenging for continuous random variables [22, 23], and to the best of our knowledge,
there is no known efficient algorithm to compute the hypercontractivity coefficient from samples.
Our estimator is the first efficient algorithm to compute the hypercontractivity coefficient, based on
the following equivalent definition of the hypercontractivity coefficient, shown recently in [11]:

s(X; Y) = sup_{r_x ≠ p_x} D(r_y || p_y) / D(r_x || p_x).   (5)
There are two main challenges for computing s(X; Y). The first challenge is: given a marginal
distribution r_x and samples from p_xy, how do we estimate the KL divergences D(r_y || p_y) and
D(r_x || p_x)? The second challenge is the optimization over the infinite-dimensional simplex. We
need to combine estimation and optimization together in order to compute s(X; Y). Our approach
is to combine ideas from traditional kernel density estimates and from importance sampling. Let
w_i = r_x(X_i)/p_x(X_i) be the likelihood ratio evaluated at sample i. We propose that the estimation and
optimization be solved jointly as follows:

Estimation: To estimate the KL divergence D(r_x || p_x), notice that

D(r_x || p_x) = E_{X ~ p_x}[ (r_x(X)/p_x(X)) log(r_x(X)/p_x(X)) ].

Using the empirical average to replace the expectation over p_x, we propose

D̂(r_x || p_x) = (1/n) Σ_{i=1}^n (r_x(X_i)/p_x(X_i)) log(r_x(X_i)/p_x(X_i)) = (1/n) Σ_{i=1}^n w_i log w_i.
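Given the likelihood ratios w_i, this plug-in estimate is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def kl_rx_px(w):
    """Plug-in estimate of D(r_x || p_x) from likelihood ratios
    w_i = r_x(X_i) / p_x(X_i) evaluated at samples X_i ~ p_x:
    (1/n) * sum_i w_i * log(w_i)."""
    w = np.asarray(w, dtype=float)
    return float(np.mean(w * np.log(w)))
```

For w identically 1 (r_x = p_x) the estimate is zero, and by Jensen's inequality it is non-negative whenever the weights average to one.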
¹ Code is available at https://github.com/wgao9/hypercontractivity
For D(r_y || p_y), we follow a similar idea, but the challenge is in computing v_j = r_y(Y_j)/p_y(Y_j).
To do this, notice that r_xy = r_x p_{y|x}, so

r_y(Y_j) = E_{X ~ r_x}[ p_{y|x}(Y_j | X) ] = E_{X ~ p_x}[ p_{y|x}(Y_j | X) · r_x(X)/p_x(X) ].

Replacing the expectation by the empirical average again, we get the following estimator of v_j:

v̂_j = (1/n) Σ_{i=1}^n (p_{y|x}(Y_j | X_i)/p_y(Y_j)) · (r_x(X_i)/p_x(X_i)) = Σ_{i=1}^n A_{ij} w_i,
where A_{ij} := p_xy(X_i, Y_j) / (n · p_x(X_i) p_y(Y_j)).

We can write this expression in matrix form as v̂ = Aᵀw. We use a kernel density estimator
from [31] to estimate the matrix A, but our approach is compatible with any density estimator of
choice.
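A minimal sketch of how A (and hence v̂ = Aᵀw) might be formed, assuming a plain product Gaussian KDE as a stand-in for the estimator of [31]; the function name and bandwidth are illustrative, and the kernel normalization constants cancel in the ratio:

```python
import numpy as np

def density_ratio_matrix(x, y, bw=0.3):
    """Estimate A[i, j] = p_xy(X_i, Y_j) / (n * p_x(X_i) * p_y(Y_j)) with
    unnormalized Gaussian kernels, so that v_hat = A.T @ w estimates
    v_j = r_y(Y_j) / p_y(Y_j)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Pairwise kernel evaluations; the missing 1/(bw*sqrt(2*pi)) factors
    # cancel between p_xy and p_x * p_y.
    kx = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw) ** 2)  # kx[i, l] = K(X_i, X_l)
    ky = np.exp(-0.5 * ((y[:, None] - y[None, :]) / bw) ** 2)  # ky[j, l] = K(Y_j, Y_l)
    px = kx.mean(axis=1)        # KDE of p_x at each X_i (up to a common constant)
    py = ky.mean(axis=1)        # KDE of p_y at each Y_j (up to a common constant)
    pxy = (kx @ ky.T) / n       # KDE of p_xy at each pair (X_i, Y_j)
    return pxy / (n * px[:, None] * py[None, :])
```

With w = 1 (i.e., r_x = p_x), the resulting v̂ = Aᵀ1 is approximately the all-ones vector, matching r_y = p_y up to estimation error.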
Optimization: Given the estimators of the KL divergences, we are able to convert the problem
of computing s(X; Y) into an optimization problem over the vector w. Here a constraint of
(1/n) Σ_{i=1}^n w_i = 1 is needed to satisfy E_{p_x}[r_x/p_x] = 1. To improve numerical stability, we
use log s(X; Y) as the objective function. Then the optimization problem has the following form:

max_w  log( wᵀ A log(Aᵀw) ) − log( wᵀ log w )
subject to  (1/n) Σ_{i=1}^n w_i = 1,  w_i ≥ 0 for all i,

where wᵀ log w = Σ_{i=1}^n w_i log w_i for short. Although this problem is not convex, we apply gradient
descent to maximize the objective. In practice, we initialize w_i = 1 + N(0, σ²) for σ² = 0.01.
Hence, the initial r_x is perturbed mildly from p_x. Although we are not guaranteed to achieve the
global maximum, we consistently observe in extensive numerical experiments that we have 50%-60%
probability of achieving the same maximum value, which we believe to be the global maximum. A
theoretical analysis of the landscape of local and global optima and their regions of attraction with
respect to gradient descent is an interesting and challenging open question, outside the scope of this
paper. A theoretical understanding of the performance of gradient descent on the optimization step
above (where the number of samples is fixed) is technically very challenging and is left to future
work.
4 Experimental results
We present experimental results on synthetic and real datasets showing that the hypercontractivity
coefficient (a) is more powerful in detecting potential correlation compared to existing measures; (b)
discovers hidden potential correlations among various national indicators in WHO datasets; and (c)
is more robust in discovering pathways of gene interactions from gene expression time series data.
4.1 Synthetic data: power test on potential correlation
As our estimator (and the measure itself) involves a maximization, it is possible that we are sensitive
to outliers and may capture spurious noise. A formal statistical approach to test the robustness as
well as accuracy is to run power tests: testing for the power of the estimator in binary hypothesis
tests. Via a series of experiments we show that the hypercontractivity coefficient and our estimator
are capturing the true potential correlation.
We compare the power of the hypercontractivity coefficient and other correlation coefficients in the
binary hypothesis testing scenario of Theorem 2. As shown in Figure 5 in Appendix B, we generate
pairs of datasets: one where X and Y are independent and one where there is a potential correlation
as per our scenario. We experiment with eight types of functional associations, following the examples
from [5, 32, 33]. For the correlated datasets, out of n samples {(x_i, y_i)}_{i=1}^n, αn rare but correlated
samples fall in X ∈ [0, 1] and (1 − α)n dominant but independent samples fall in X ∈ [1, 1.1].
The rare but correlated samples are generated as x_i ~ Unif[0, 1], y_i ~ f(x_i) + N(0, σ²) for
i ∈ [1 : αn]. The dominant samples are generated as x_i ~ Unif[1, 1.1], y_i ~ f(Unif[0, 1]) + N(0, σ²)
for i ∈ [αn + 1, n]. A formal comparison is done via testing their powers: comparing the false
negative rate at a fixed false positive rate of, say, 5%. We show empirically that for linear, quadratic,
sine with period 1/2, and the step function, the hypercontractivity coefficient is more powerful as
compared to other measures. For a given setting, a larger power means a larger successful detection
rate for a fixed false alarm rate. Figure 2 shows the power of correlation estimators as a function of
the additive noise level σ², for α = 0.05 and n = 320. The hypercontractivity coefficient is more
powerful than other correlation estimators for most functions. The power of all the estimators are
very small for sine (period 1/8) and circle functions. This is not surprising given that it is very hard to
discern the correlated and independent cases even visually, as shown in Figure 5. We give extensive
experimental results in the journal version [29].
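The sampling scheme above can be sketched directly; parameter and function names are ours:

```python
import numpy as np

def potential_correlation_sample(f, n=320, alpha=0.05, noise_var=0.1, seed=0):
    """Generate the mixture described above: alpha*n rare-but-correlated samples
    with x ~ Unif[0, 1] and y = f(x) + N(0, noise_var), plus (1 - alpha)*n
    dominant independent samples with x ~ Unif[1, 1.1] and y ~ f(Unif[0, 1]) + noise."""
    rng = np.random.default_rng(seed)
    k = int(alpha * n)                    # number of rare, correlated samples
    sd = np.sqrt(noise_var)
    x_rare = rng.uniform(0.0, 1.0, k)
    y_rare = f(x_rare) + rng.normal(0.0, sd, k)
    x_dom = rng.uniform(1.0, 1.1, n - k)
    y_dom = f(rng.uniform(0.0, 1.0, n - k)) + rng.normal(0.0, sd, n - k)
    return np.concatenate([x_rare, x_dom]), np.concatenate([y_rare, y_dom])
```

By construction the marginal of y is the same in both regimes, so the correlation is visible only inside the rare domain X ∈ [0, 1].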
Figure 2: Power vs. noise level for α = 0.05, n = 320. Each panel (linear, quadratic, cubic, sine with period 1/2, sine with period 1/8, x^(1/4), circle, and step function) compares Cor, dCor, MIC, mCor, and HC over noise levels σ² from 0 to 3.
4.2 Real data: correlation between indicators of WHO datasets
We compute the hypercontractivity coefficient, MIC, and Pearson correlation of 1600 pairs of
indicators for 202 countries in the World Health Organization (WHO) dataset [5]. Figure 3 illustrates
that the hypercontractivity coefficient discovers hidden potential correlation (e.g. in (E) and (F)),
whereas other measures fail. Scatter plots of Pearson correlation vs. the hypercontractivity coefficient
and MIC vs. the hypercontractivity coefficient for all pairs are presented in Figure 3 (A) and (D). The
samples for pairs of indicators corresponding to B,C,E,F in Figure 3 (A) and (D) are shown in Figure
3 (B),(C),(E),(F), respectively. In (B), it is reasonable to assume that the number of bad teeth per
child is uncorrelated with the democracy score. The hypercontractivity coefficient, MIC, and Pearson
correlation are all small, as expected. In (C), the correlation between CO2 emissions and energy use
is clearly visible, and all three correlation estimates are close to one.
However, only the hypercontractivity coefficient discovers the hidden potential correlation in (E) and
(F). In (E), the data is a mixture of two types of countries: one with a small amount of aid received (less
than $5 × 10⁸), and the other with a large amount of aid received (larger than $5 × 10⁸). Dominantly
many countries (104 out of 146) belong to the first type (small aid), and for those countries, the
amount of aid received and the income growth are independent. For the remaining countries with
larger aid received, although those are rare, there is a clear correlation between the amount of aid
received and the income growth. Similarly in (F), there are two types of countries: one with small
arms exports (less than $2 × 10⁸) and the other with large arms exports (larger than $2 × 10⁸).
Dominantly many countries (71 out of 82) belong to the first type, for which the amount of arms
exports and the health expenditure are independent. For the remaining countries that belong to the
second type, on the other hand, there is a visible correlation between the arms exports and the health
expenditure. This is expected as for those countries that export arms the GDP is positively correlated
with both arms exports and health expenditure, whereas for those countries that do not have an arms
industry, these two will be independent. We give extensive numerical analyses of the WHO dataset in
the journal version [29].

Figure 3: (A) and (D): Scatter plots of Pearson correlation vs. hypercontractivity and of MIC vs.
hypercontractivity over all indicator pairs. (B): Correlations are small (Bad_teeth_per_child vs.
Democracy_score). (C): Correlations are large (CO2_emissions vs. Energy_use). (E) and (F): Only
the hypercontractivity coefficient discovers potential correlation (Aid_received_total vs.
Income_growth; Arms_exports vs. Health_expenditure_total).
4.3 Gene pathway recovery from single cell data
We replicate the genetic pathway detection experiment from [7], and show that hypercontractivity
correctly discovers the genetic pathways from a smaller number of samples. A genetic pathway is
a series of genes interacting with each other as a chain. Consider the following setup where four
genes, whose expression values in a single cell are modeled by random processes Xt, Yt, Zt and Wt
respectively, interact with each other following a pathway Xt → Yt → Zt → Wt; it is
biologically known that Xt causes Yt with a negligible delay, and later at time t′, Yt′ causes Zt′, and
so on. Our goal is to recover this known gene pathway from sampled data points. For a sequence of
time points {t_i}_{i=0}^m, we observe n_i i.i.d. samples {X_{t_i}^(j), Y_{t_i}^(j), Z_{t_i}^(j), W_{t_i}^(j)}_{j=1}^{n_i} generated from the
random process P(X_{t_i}, Y_{t_i}, Z_{t_i}, W_{t_i}). We use the real data obtained by the single-cell mass flow
cytometry technique [7].
Given these samples from the time series, the goal of [7] is to recover the direction of the interaction along the known pathway using correlation measures as follows, where they proposed
a new measure called DREMI. The DREMI correlation measure τ is evaluated on each pair on
the pathway, τ(X_{t_i}, Y_{t_i}), τ(Y_{t_i}, Z_{t_i}) and τ(Z_{t_i}, W_{t_i}), at each time point t_i. It is declared that
a genetic pathway is correctly recovered if the peak of correlation follows the expected trend:
arg max_{t_i} τ(X_{t_i}, Y_{t_i}) ≤ arg max_{t_i} τ(Y_{t_i}, Z_{t_i}) ≤ arg max_{t_i} τ(Z_{t_i}, W_{t_i}). In [25], the same experiment has been done with τ evaluated by UMI and CMI estimators. In this paper, we evaluate τ using
our proposed estimator of hypercontractivity.
We subsample the raw data from [7] to evaluate the ability to find the trend from smaller samples. Precisely, given a resampling rate γ ∈ (0, 1], we randomly select a subset of indices
S_i ⊆ [n_i] with card(S_i) = ⌈γ n_i⌉, compute τ(X_{t_i}, Y_{t_i}), τ(Y_{t_i}, Z_{t_i}) and τ(Z_{t_i}, W_{t_i}) from subsamples {X_{t_i}^(j), Y_{t_i}^(j), Z_{t_i}^(j), W_{t_i}^(j)}_{j ∈ S_i}, and determine whether we can recover the trend successfully,
i.e., whether arg max_{t_i} τ(X_{t_i}, Y_{t_i}) ≤ arg max_{t_i} τ(Y_{t_i}, Z_{t_i}) ≤ arg max_{t_i} τ(Z_{t_i}, W_{t_i}). We repeat
the experiment several times with independent subsamples and compute the probability of successfully recovering the trend. Figure 4 illustrates that when the entire dataset is available, all methods
are able to recover the trend correctly. When fewer samples are available, hypercontractivity
improves upon the other competing measures in recovering the hidden chronological order of interactions
along the pathway. For completeness, we run datasets for both regular T-cells (shown in the left figure) and
T-cells exposed to an antigen (shown in the right figure), for which we expect distinct biological trends.
The hypercontractivity method can capture the trend for both datasets correctly and sample-efficiently.
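The subsample-and-check procedure can be sketched generically for any bivariate measure τ; the names below are illustrative, and the actual experiment plugs in the hypercontractivity estimator of Section 3:

```python
import numpy as np

def trend_recovered(tau, data_by_time, rate, rng):
    """One subsampling trial for the pathway X -> Y -> Z -> W.
    `tau` is any bivariate correlation measure; `data_by_time` is a list of
    (X, Y, Z, W) sample-array tuples, one per time point. Returns True if the
    correlation peaks appear in pathway order:
    argmax_t tau(X, Y) <= argmax_t tau(Y, Z) <= argmax_t tau(Z, W)."""
    peaks = []
    for a, b in [(0, 1), (1, 2), (2, 3)]:
        scores = []
        for cols in data_by_time:
            ni = len(cols[0])
            size = max(1, int(np.ceil(rate * ni)))
            idx = rng.choice(ni, size=size, replace=False)  # same indices for both genes
            scores.append(tau(cols[a][idx], cols[b][idx]))
        peaks.append(int(np.argmax(scores)))
    return peaks[0] <= peaks[1] <= peaks[2]
```

Repeating this trial with independent subsamples and averaging the boolean outcomes gives the success probabilities plotted in Figure 4.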
Figure 4: Accuracy vs. subsampling rate, comparing HyperContractivity, CMI, UMI, and DREMI.
The hypercontractivity method has a higher probability of recovering the trend when the data size is
smaller compared to the other methods. Left: regular T-cells. Right: T-cells exposed to an antigen [7].
Acknowledgments
This work was partially supported by NSF grants CNS-1527754, CNS-1718270, CCF-1553452,
CCF-1617745, CCF-1651236, CCF-1705007, and a Google Faculty Research Award.
References
[1] K. Pearson, "Note on regression and inheritance in the case of two parents," Proceedings of the
Royal Society of London, vol. 58, pp. 240–242, 1895.
[2] H. Hirschfeld, "A connection between correlation and contingency," Mathematical Proceedings
of the Cambridge Philosophical Society, vol. 31, no. 4, pp. 520–524, 1935.
[3] H. Gebelein, "Das statistische Problem der Korrelation als Variations- und Eigenwertproblem und
sein Zusammenhang mit der Ausgleichsrechnung," ZAMM - Zeitschrift für Angewandte
Mathematik und Mechanik, vol. 21, no. 6, pp. 364–379, 1941.
[4] A. Rényi, "On measures of dependence," Acta Mathematica Hungarica, vol. 10, no. 3-4, pp.
441–451, 1959.
[5] D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S.
Lander, M. Mitzenmacher, and P. C. Sabeti, "Detecting novel associations in large data sets,"
Science, vol. 334, no. 6062, pp. 1518–1524, 2011.
[6] G. J. Székely, M. L. Rizzo, and N. K. Bakirov, "Measuring and testing dependence by correlation
of distances," Ann. Statist., vol. 35, no. 6, pp. 2769–2794, 2007.
[7] S. Krishnaswamy, M. H. Spitzer, M. Mingueneau, S. C. Bendall, O. Litvin, E. Stone, D. Pe'er,
and G. P. Nolan, "Conditional density-based analysis of T cell signaling in single-cell data,"
Science, 2014.
[8] N. Tishby, F. C. Pereira, and W. Bialek, "The information bottleneck method," in Proc. 37th
Ann. Allerton Conf. Comm. Control Comput., 1999, pp. 368–377.
[9] I. S. Dhillon, S. Mallela, and R. Kumar, "A divisive information-theoretic feature clustering
algorithm for text classification," Journal of Machine Learning Research (JMLR), vol. 3, 2003.
[10] R. Bekkerman, R. El-Yaniv, N. Tishby, and Y. Winter, "Distributional word clusters vs. words
for text categorization," J. Mach. Learn. Res., vol. 3, pp. 1183–1208, 2003.
[11] V. Anantharam, A. A. Gohari, S. Kamath, and C. Nair, "On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover," CoRR, vol. abs/1304.6133,
2013.
[12] E. Davies, L. Gross, and B. Simon, "Hypercontractivity: A bibliographic review," Ideas and
Methods in Quantum and Statistical Physics (Oslo, 1988), pp. 370–389, 1992.
[13] E. Nelson, "Construction of quantum fields from Markoff fields," Journal of Functional Analysis,
vol. 12, no. 1, pp. 97–112, 1973.
[14] J. Kahn, G. Kalai, and N. Linial, "The influence of variables on Boolean functions," in Proceedings of the 29th Annual Symposium on Foundations of Computer Science, ser. SFCS '88.
IEEE Computer Society, 1988, pp. 68–80.
[15] R. O'Donnell, Analysis of Boolean Functions. Cambridge University Press, 2014.
[16] A. Bonami, "Étude des coefficients de Fourier des fonctions de L^p(G)," Annales de l'Institut
Fourier, vol. 20, no. 2, pp. 335–402, 1970.
[17] W. Beckner, "Inequalities in Fourier analysis," Annals of Mathematics, pp. 159–182, 1975.
[18] L. Gross, "Hypercontractivity and logarithmic Sobolev inequalities for the Clifford-Dirichlet
form," Duke Math. J., vol. 42, no. 3, pp. 383–396, 1975.
[19] R. Ahlswede and P. Gács, "Spreading of sets in product spaces and hypercontraction of the
Markov operator," Ann. Probab., vol. 4, no. 6, pp. 925–939, 1976.
[20] E. Mossel, K. Oleszkiewicz, and A. Sen, "On reverse hypercontractivity," Geometric and
Functional Analysis, vol. 23, no. 3, pp. 1062–1097, 2013.
[21] C. Nair and S. Kamath, Personal communication, 2016.
[22] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, "Deep variational information bottleneck,"
ICLR, 2017.
[23] A. Achille and S. Soatto, "Information dropout: Learning optimal representations through noisy
computation," arXiv preprint arXiv:1611.01353, 2016.
[24] C. Nair, "An extremal inequality related to hypercontractivity of Gaussian random variables," in
Information Theory and Applications Workshop, 2014.
[25] W. Gao, S. Kannan, S. Oh, and P. Viswanath, "Conditional dependence via Shannon capacity:
Axioms, estimators and applications," in Proceedings of The 33rd International Conference on
Machine Learning, 2016, pp. 2780–2789.
[26] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, "Multimodal deep learning," in
Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp.
689–696.
[27] N. Srivastava and R. R. Salakhutdinov, "Multimodal learning with deep Boltzmann machines,"
in Advances in Neural Information Processing Systems, 2012, pp. 2222–2230.
[28] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, "Deep canonical correlation analysis," in
International Conference on Machine Learning, 2013, pp. 1247–1255.
[29] H. Kim, W. Gao, S. Kannan, S. Oh, and P. Viswanath, "Discovering potential correlations via
hypercontractivity," in preparation.
[30] C. Bell, "Mutual information and maximal correlation as measures of dependence," The Annals
of Mathematical Statistics, vol. 33, no. 2, pp. 587–595, 1962.
[31] T. Michaeli, W. Wang, and K. Livescu, "Nonparametric canonical correlation analysis," in
Proceedings of the 33rd International Conference on Machine Learning, ser. ICML '16, 2016,
pp. 1967–1976.
[32] N. Simon and R. Tibshirani, "Comment on 'Detecting Novel Associations In Large Data Sets'
by Reshef et al., Science Dec 16, 2011," arXiv e-prints, Jan. 2014.
[33] M. Gorfine, R. Heller, and Y. Heller, "Comment on detecting novel associations in large data
sets," unpublished (available at http://emotion.technion.ac.il/~gorfinm/filesscience6.pdf on 11
Nov. 2012), 2012.
[34] G. Chechik, A. Globerson, N. Tishby, and Y. Weiss, "Information bottleneck for Gaussian
variables," J. Mach. Learn. Res., vol. 6, pp. 165–188, Dec. 2005.
dirichlet:1 especially:1 nyi:11 society:3 objective:2 question:1 quantity:4 print:2 dependence:13 traditional:1 bialek:1 gradient:3 iclr:1 distance:4 card:1 capacity:1 nelson:1 topic:1 seven:1 evaluate:2 reason:1 kannan:2 assuming:1 code:1 modeled:1 relationship:1 index:1 ratio:4 setup:1 potentially:2 statement:1 kamath:2 negative:1 zt:2 boltzmann:1 upper:1 datasets:10 urbana:1 markov:2 tensorizing:2 descent:3 extended:1 looking:1 precise:2 communication:1 y1:4 interacting:1 cytometry:1 introduced:1 smoking:3 pair:9 kl:3 extensive:3 connection:2 philosophical:1 unpublished:1 nip:1 address:2 able:2 below:1 regime:2 challenge:4 including:3 max:1 royal:1 power:13 suitable:1 natural:8 indicator:7 arm:7 improve:1 github:1 mossel:1 arora:1 health:4 hungarica:1 text:2 review:1 literature:1 discovery:3 understanding:1 inheritance:1 probab:1 geometric:1 heller:2 beside:1 fully:1 expect:2 interesting:1 foundation:1 contingency:1 teeth:1 consistent:1 mcvean:1 sewoong:1 principle:4 uncorrelated:1 cancer:3 compatible:1 summary:6 yt0:1 repeat:1 supported:1 copy:2 dominantly:2 formal:4 allow:1 explaining:2 wide:1 absolute:1 xn:2 world:1 quantum:3 doesn:1 made:1 historical:1 founded:1 income:3 nov:1 compact:4 kullback:1 michaeli:1 gene:12 supremum:2 confirm:1 global:3 sz:1 sfcs:1 statistische:1 xi:19 alternatively:1 continuous:6 decade:1 khosla:1 nature:1 learn:2 robust:2 ca:1 correlated:11 angewandte:1 symmetry:2 interact:1 ngiam:1 hc:8 domain:4 vj:2 da:1 main:3 noise:11 alarm:2 subsample:1 vbj:1 gao1:1 unites:1 child:1 x1:4 positively:1 borel:3 cubic:1 aid:7 theme:1 pereira:1 deterministically:1 comput:1 candidate:1 pe:1 ib:5 jmlr:1 theorem:8 bad:1 xt:3 showing:1 er:1 normalizing:1 exists:2 workshop:1 false:4 sequential:1 corr:1 importance:3 litvin:1 illustrates:3 conditioned:1 gap:1 smoker:1 mildly:1 supf:1 wgao9:2 logarithmic:1 univariate:1 gao:2 partially:1 zamm:1 maxw:1 srivastava:1 satisfies:15 nair:3 conditional:3 goal:2 quantifying:1 ann:3 replace:3 yti:11 change:1 
directionality:1 included:1 infinite:1 hard:1 wt:4 called:1 experimental:4 divisive:1 shannon:1 formally:1 select:1 relevance:1 preparation:1 anantharam:1 ex:3 |
6,683 | 7,045 | Doubly Stochastic Variational Inference
for Deep Gaussian Processes
Hugh Salimbeni
Imperial College London and PROWLER.io
[email protected]
Marc Peter Deisenroth
Imperial College London and PROWLER.io
[email protected]
Abstract
Gaussian processes (GPs) are a good choice for function approximation as they are
flexible, robust to overfitting, and provide well-calibrated predictive uncertainty.
Deep Gaussian processes (DGPs) are multi-layer generalizations of GPs, but
inference in these models has proved challenging. Existing approaches to inference
in DGP models assume approximate posteriors that force independence between the
layers, and do not work well in practice. We present a doubly stochastic variational
inference algorithm that does not force independence between layers. With our
method of inference we demonstrate that a DGP model can be used effectively
on data ranging in size from hundreds to a billion points. We provide strong
empirical evidence that our inference scheme for DGPs works well in practice in
both classification and regression.
1 Introduction
Gaussian processes (GPs) achieve state-of-the-art performance in a range of applications including
robotics (Ko and Fox, 2008; Deisenroth and Rasmussen, 2011), geostatistics (Diggle and Ribeiro,
2007), numerics (Briol et al., 2015), active sensing (Guestrin et al., 2005) and optimization (Snoek
et al., 2012). A Gaussian process is defined by its mean and covariance function. In some situations
prior knowledge can be readily incorporated into these functions. Examples include periodicities
in climate modelling (Rasmussen and Williams, 2006), change-points in time series data (Garnett
et al., 2009) and simulator priors for robotics (Cutler and How, 2015). In other settings, GPs are
used successfully as black-box function approximators. There are compelling reasons to use GPs,
even when little is known about the data: a GP grows in complexity to suit the data; a GP is robust
to overfitting while providing reasonable error bars on predictions; a GP can model a rich class of
functions with few hyperparameters.
Single-layer GP models are limited by the expressiveness of the kernel/covariance function. To some
extent kernels can be learned from data, but inference over a large and richly parameterized space
of kernels is expensive, and approximate methods may be at risk of overfitting. Optimization of
the marginal likelihood with respect to hyperparameters approximates Bayesian inference only if
the number of hyperparameters is small (Mackay, 1999). Attempts to use, for example, a highly
parameterized neural network as a kernel function (Calandra et al., 2016; Wilson et al., 2016) incur the
downsides of deep learning, such as the need for application-specific architectures and regularization
techniques. Kernels can be combined through sums and products (Duvenaud et al., 2013) to create
more expressive compositional kernels, but this approach is limited to simple base kernels, and their
optimization is expensive.
A Deep Gaussian Process (DGP) is a hierarchical composition of GPs that can overcome the
limitations of standard (single-layer) GPs while retaining the advantages. DGPs are richer models
than standard GPs, just as deep networks are richer than generalized linear models. In contrast to
models with highly parameterized kernels, DGPs learn a representation hierarchy non-parametrically
with very few hyperparmeters to optimize.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Unlike their single-layer counterparts, DGPs have proved difficult to train. The mean-field variational
approaches used in previous work (Damianou and Lawrence, 2013; Mattos et al., 2016; Dai et al.,
2016) make strong independence and Gaussianity assumptions. The true posterior is likely to
exhibit high correlations between layers, but mean-field variational approaches are known to severely
underestimate the variance in these situations (Turner and Sahani, 2011).
In this paper, we present a variational algorithm for inference in DGP models that does not force
independence or Gaussianity between the layers. In common with many state-of-the-art GP approximation schemes we start from a sparse inducing point variational framework (Matthews et al., 2016)
to achieve computational tractability within each layer, but we do not force independence between
the layers. Instead, we use the exact model conditioned on the inducing points as a variational
posterior. This posterior has the same structure as the full model, and in particular it maintains the
correlations between layers. Since we preserve the non-linearity of the full model in our variational
posterior we lose analytic tractability. We overcome this difficulty by sampling from the variational
posterior, introducing the first source of stochasticity. This is computationally straightforward due to
an important property of the sparse variational posterior marginals: the marginals conditioned on the
layer below depend only on the corresponding inputs. It follows that samples from the marginals
at the top layer can be obtained without computing the full covariance within the layers. We are
primarily interested in large data applications, so we further subsample the data in minibatches. This
second source of stochasticity allows us to scale to arbitrarily large data.
We demonstrate through extensive experiments that our approach works well in practice. We provide
results on benchmark regression and classification data problems, and also demonstrate the first
DGP application to a dataset with a billion points. Our experiments confirm that DGP models are
never worse than single-layer GPs, and in many cases significantly better. Crucially, we show that
additional layers do not incur overfitting, even with small data.
2 Background
In this section, we present necessary background on single-layer Gaussian processes and sparse
variational inference, followed by the definition of the deep Gaussian process model. Throughout we
emphasize a particular property of sparse approximations: the sparse variational posterior is itself a
Gaussian process, so the marginals depend only on the corresponding inputs.
2.1 Single-layer Gaussian Processes
We consider the task of inferring a stochastic function f : ℝ^D → ℝ, given a likelihood p(y|f) and a set of N observations y = (y_1, …, y_N)^⊤ at design locations X = (x_1, …, x_N)^⊤. We place a GP prior on the function f that models all function values as jointly Gaussian, with a covariance function k : ℝ^D × ℝ^D → ℝ and a mean function m : ℝ^D → ℝ. We further define an additional set of M inducing locations Z = (z_1, …, z_M)^⊤. We use the notation f = f(X) and u = f(Z) for the function values at the design and inducing points, respectively. We define also [m(X)]_i = m(x_i) and [k(X, Z)]_ij = k(x_i, z_j). By the definition of a GP, the joint density p(f, u) is a Gaussian whose mean is given by the mean function evaluated at every input (X, Z)^⊤, and the corresponding covariance is given by the covariance function evaluated at every pair of inputs. The joint density of y, f and u is
p(y, f, u) = p(f | u; X, Z) p(u; Z) ∏_{i=1}^N p(y_i | f_i),   (1)

where the first two factors form the GP prior and the product over data points is the likelihood.
In (1) we factorized the joint GP prior p(f, u; X, Z)¹ into the prior p(u) = N(u | m(Z), k(Z, Z)) and the conditional p(f | u; X, Z) = N(f | μ, Σ), where for i, j = 1, …, N

[μ]_i = m(x_i) + α(x_i)^⊤ (u − m(Z)),   (2)
[Σ]_ij = k(x_i, x_j) − α(x_i)^⊤ k(Z, Z) α(x_j),   (3)
¹ Throughout this paper we use the semi-colon notation to clarify the input locations of the corresponding
function values, which will become important later when we discuss multi-layer GP models. For example,
p(f |u; X, Z) indicates that the input locations for f and u are X and Z, respectively.
with α(x_i) = k(Z, Z)^{−1} k(Z, x_i). Note that the conditional mean μ and covariance Σ defined via (2) and (3), respectively, take the form of mean and covariance functions of the inputs x_i. Inference in the model (1) is possible in closed form when the likelihood p(y|f) is Gaussian, but the computation scales cubically with N.
We are interested in large datasets with non-Gaussian likelihoods. Therefore, we seek a variational
posterior to overcome both these difficulties simultaneously. Variational inference seeks an approximate posterior q(f , u) by minimizing the Kullback-Leibler divergence KL[q||p] between the
variational posterior q and the true posterior p. Equivalently, we maximize the lower bound on the
marginal likelihood (evidence)
L = E_{q(f, u)} [ log ( p(y, f, u) / q(f, u) ) ],   (4)

where p(y, f, u) is given in (1). We follow Hensman et al. (2013) and choose a variational posterior

q(f, u) = p(f | u; X, Z) q(u),   (5)
where q(u) = N(u | m, S). Since both terms in the variational posterior are Gaussian, we can analytically marginalize u, which yields

q(f | m, S; X, Z) = ∫ p(f | u; X, Z) q(u) du = N(f | μ̃, Σ̃).   (6)

Similar to (2) and (3), the expressions for μ̃ and Σ̃ can be written as mean and covariance functions of the inputs. To emphasize this point we define

μ_{m,Z}(x_i) = m(x_i) + α(x_i)^⊤ (m − m(Z)),   (7)
Σ_{S,Z}(x_i, x_j) = k(x_i, x_j) − α(x_i)^⊤ (k(Z, Z) − S) α(x_j).   (8)
With these functions we define [μ̃]_i = μ_{m,Z}(x_i) and [Σ̃]_ij = Σ_{S,Z}(x_i, x_j). We have written the mean and covariance in this way to make the following observation clear.

Remark 1. The f_i marginals of the variational posterior (6) depend only on the corresponding inputs x_i. Therefore, we can write the ith marginal of q(f | m, S; X, Z) as

q(f_i | m, S; X, Z) = q(f_i | m, S; x_i, Z) = N(f_i | μ_{m,Z}(x_i), Σ_{S,Z}(x_i, x_i)).   (9)
Using our variational posterior (5) the lower bound (4) simplifies considerably since (a) the conditionals p(f |u; X, Z) inside the logarithm cancel and (b) the likelihood expectation requires only the
variational marginals. We obtain
L = ∑_{i=1}^N E_{q(f_i | m, S; x_i, Z)}[log p(y_i | f_i)] − KL[q(u) || p(u)].   (10)
The final (univariate) expectation of the log-likelihood can be computed analytically in some cases,
with quadrature (Hensman et al., 2015) or through Monte Carlo sampling (Bonilla et al., 2016; Gal
et al., 2015). Since the bound is a sum over the data, an unbiased estimator can be obtained through
minibatch subsampling. This permits inference on large datasets. In this work we refer to a GP with
this method of inference as a sparse GP (SGP).
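As an illustration of the quadrature option, the univariate expectation in (10) can be approximated with Gauss-Hermite nodes. The Gaussian likelihood below is an assumption chosen only so the result can be checked against its closed form; non-Gaussian likelihoods are handled the same way.

```python
import numpy as np

def expected_log_lik(y, mu, s2, noise=0.1, order=20):
    # Approximates E_{N(f | mu, s2)}[log p(y | f)] with Gauss-Hermite quadrature.
    x, w = np.polynomial.hermite_e.hermegauss(order)  # probabilists' Hermite rule
    f = mu + np.sqrt(s2) * x                          # map nodes to N(mu, s2)
    log_lik = -0.5 * np.log(2 * np.pi * noise) - 0.5 * (y - f) ** 2 / noise
    return (w * log_lik).sum() / np.sqrt(2 * np.pi)   # weights integrate against e^{-x^2/2}

# For a Gaussian likelihood the expectation has the closed form
# -0.5 log(2 pi noise) - ((y - mu)^2 + s2) / (2 noise), useful as a check.
y, mu, s2, noise = 0.3, 0.1, 0.5, 0.1
closed = -0.5 * np.log(2 * np.pi * noise) - ((y - mu) ** 2 + s2) / (2 * noise)
approx = expected_log_lik(y, mu, s2, noise)
```

Because the log-likelihood is quadratic in f here, the quadrature is exact up to floating-point error.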
The variational parameters (Z, m and S) are found by maximizing the lower bound (10). This
maximization is guaranteed to converge since L is a lower bound to the marginal likelihood p(y|X).
We can also learn model parameters (hyperparameters of the kernel or likelihood) through the
maximization of this bound, though we should exercise caution as this introduces bias because the
bound is not uniformly tight for all settings of hyperparameters (Turner and Sahani, 2011).
So far we have considered scalar outputs y_i ∈ ℝ. In the case of D-dimensional outputs y_i ∈ ℝ^D we define Y as the matrix with ith row containing the ith observation y_i. Similarly, we define F and U. If each output is an independent GP we have the GP prior ∏_{d=1}^D p(F_d | U_d; X, Z) p(U_d; Z), which we abbreviate as p(F | U; X, Z) p(U; Z) to lighten the notation.
2.2 Deep Gaussian Processes
A DGP (Damianou and Lawrence, 2013) defines a prior recursively on vector-valued stochastic functions F^1, …, F^L. The prior on each function F^l is an independent GP in each dimension, with input locations given by the noisy corruptions of the function values at the next layer: the outputs of the GPs at layer l are F^l_d, and the corresponding inputs are F^{l−1}. The noise between layers is assumed i.i.d. Gaussian. Most presentations of DGPs (see, e.g., Damianou and Lawrence, 2013; Bui et al., 2016) explicitly parameterize the noisy corruptions separately from the outputs of each GP. Our method of inference does not require us to parameterize these variables separately. For notational convenience, we therefore absorb the noise into the kernel k_noisy(x_i, x_j) = k(x_i, x_j) + σ_l² δ_ij, where δ_ij is the Kronecker delta and σ_l² is the noise variance between layers. We use D^l for the dimension of the outputs at layer l. As with the single-layer case, we have inducing locations Z^{l−1} at each layer and inducing function values U^l for each dimension.
An instantiation of the process has the joint density
p(Y, {F^l, U^l}_{l=1}^L) = ∏_{i=1}^N p(y_i | f_i^L) ∏_{l=1}^L p(F^l | U^l; F^{l−1}, Z^{l−1}) p(U^l; Z^{l−1}),   (11)

where the first product is the likelihood and the second is the DGP prior.
where we define F^0 = X. Inference in this model is intractable, so approximations must be used.
The original DGP presentation (Damianou and Lawrence, 2013) uses a variational posterior that
maintains the exact model conditioned on U^l, but further forces the inputs to each layer to be independent from the outputs of the previous layer. The noisy corruptions are parameterized separately,
and the variational distribution over these variables is a fully factorized Gaussian. This approach
requires 2N(D^1 + ⋯ + D^{L−1}) variational parameters but admits a tractable lower bound on the
log marginal likelihood if the kernel is of a particular form. A further problem of this bound is that
the density over the outputs is simply a single layer GP with independent Gaussian inputs. Since the
posterior loses all the correlations between layers it cannot express the complexity of the full model
and so is likely to underestimate the variance. In practice, we found that optimizing the objective
in Damianou and Lawrence (2013) results in layers being 'turned off' (the signal-to-noise ratio tends
to zero). In contrast, our posterior retains the full conditional structure of the true model. We sacrifice
analytical tractability, but due to the sparse posterior within each layer we can sample the bound using
univariate Gaussians.
3 Doubly Stochastic Variational Inference
In this section, we propose a novel variational posterior and demonstrate a method to obtain unbiased
samples from the resulting lower bound. The difficulty with inferring the DGP model is that there
are complex correlations both within and between layers. Our approach is straightforward: we use
sparse variational inference to simplify the correlations within layers, but we maintain the correlations
between layers. The resulting variational lower bound cannot be evaluated analytically, but we can
draw unbiased samples efficiently using univariate Gaussians. We optimize our bound stochastically.
We propose a posterior with three properties. Firstly, the posterior maintains the exact model, conditioned on U^l. Secondly, we assume that the posterior distribution of {U^l}_{l=1}^L is factorized between
layers (and dimension, but we suppress this from the notation). Therefore, our posterior takes the
simple factorized form
q({F^l, U^l}_{l=1}^L) = ∏_{l=1}^L p(F^l | U^l; F^{l−1}, Z^{l−1}) q(U^l).   (12)
Thirdly, and to complete the specification of the posterior, we take q(U^l) to be a Gaussian with mean m^l and variance S^l. A similar posterior was used in Hensman and Lawrence (2014) and Dai et al.
(2016), but each of these works contained additional terms for the noisy corruptions at each layer.
As in the single layer SGP, we can marginalize the inducing variables from each layer analytically.
After this marginalization we obtain the following distribution, which is fully coupled within and between
layers:
q({F^l}_{l=1}^L) = ∏_{l=1}^L q(F^l | m^l, S^l; F^{l−1}, Z^{l−1}) = ∏_{l=1}^L N(F^l | μ̃^l, Σ̃^l).   (13)
Here, q(F^l | m^l, S^l; F^{l−1}, Z^{l−1}) is as in (6). Specifically, it is a Gaussian with mean μ̃^l and variance Σ̃^l, where [μ̃^l]_i = μ_{m^l, Z^{l−1}}(f_i^{l−1}) and [Σ̃^l]_ij = Σ_{S^l, Z^{l−1}}(f_i^{l−1}, f_j^{l−1}) (recall that f_i^l is the ith row of F^l). Since (12) is a product of terms that each take the form of the SGP variational posterior (5), we
have again the property that within each layer the marginals depend on only the corresponding inputs.
In particular, f_i^L depends only on f_i^{L−1}, which in turn depends only on f_i^{L−2}, and so on. Therefore,
we have the following property:
Remark 2. The ith marginal of the final layer of the variational DGP posterior (12) depends only
on the ith marginals of all the other layers. That is,
q(f_i^L) = ∫ ∏_{l=1}^{L−1} q(f_i^l | m^l, S^l; f_i^{l−1}, Z^{l−1}) df_i^l.   (14)
The consequence of this property is that taking a sample from q(f_i^L) is straightforward, and furthermore we can perform the sampling using only univariate unit Gaussians using the 're-parameterization trick' (Rezende et al., 2014; Kingma et al., 2015). Specifically, we first sample ε_i^l ∼ N(0, I_{D^l}) and then recursively draw the sampled variables f̂_i^l ∼ q(f_i^l | m^l, S^l; f̂_i^{l−1}, Z^{l−1}) for l = 1, …, L − 1 as

f̂_i^l = μ_{m^l, Z^{l−1}}(f̂_i^{l−1}) + ε_i^l ⊙ sqrt( Σ_{S^l, Z^{l−1}}(f̂_i^{l−1}, f̂_i^{l−1}) ),   (15)

where the terms in (15) are D^l-dimensional and the square root is element-wise. For the first layer we define f̂_i^0 := x_i.
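The recursion in (15) can be sketched as follows. The layer marginals below are toy stand-ins (a fixed linear mean and constant variance, both assumptions), not the paper's GP marginals; the point is only the reparameterized layer-by-layer sampling.

```python
import numpy as np

def sample_layer(f_prev, layer_marginal, rng):
    # One reparameterized draw, eq. (15): f = mu(f_prev) + eps * sqrt(var(f_prev)).
    mu, var = layer_marginal(f_prev)
    eps = rng.standard_normal(mu.shape)
    return mu + eps * np.sqrt(var)

def sample_through_dgp(x, layer_marginals, rng):
    # Propagate one input through all layers, sampling at each one (Remark 2:
    # the final-layer marginal only needs the corresponding sample per layer).
    f = x
    for layer_marginal in layer_marginals:
        f = sample_layer(f, layer_marginal, rng)
    return f

# Toy stand-in marginals (assumption): mean 0.5 * input, variance 0.01.
layers = [lambda f: (0.5 * f, 0.01 * np.ones_like(f)) for _ in range(3)]
rng = np.random.default_rng(0)
samples = np.stack([sample_through_dgp(np.ones(2), layers, rng) for _ in range(2000)])
# With three such layers, the sample mean per dimension should approach 0.5 ** 3 = 0.125.
```

Only univariate Gaussian draws are needed, exactly as the text describes.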
Efficient computation of the evidence lower bound The evidence lower bound of the DGP is
L_DGP = E_{q({F^l, U^l}_{l=1}^L)} [ log ( p(Y, {F^l, U^l}_{l=1}^L) / q({F^l, U^l}_{l=1}^L) ) ].   (16)

Using (11) and (12) for the corresponding expressions in (16), we obtain after some re-arranging

L_DGP = ∑_{i=1}^N E_{q(f_i^L)}[log p(y_i | f_i^L)] − ∑_{l=1}^L KL[q(U^l) || p(U^l; Z^{l−1})],   (17)
where we exploited the exact marginalization of the inducing variables (13) and the property of the
marginals of the final layer (14). A detailed derivation is provided in the supplementary material.
This bound has complexity O(N M² (D^1 + ⋯ + D^L)) to evaluate.
We evaluate the bound (17) approximately using two sources of stochasticity. Firstly, we approximate
the expectation with a Monte Carlo sample from the variational posterior (14), which we compute
according to (15). Since we have parameterized this sampling procedure in terms of isotropic
Gaussians, we can compute unbiased gradients of the bound (17). Secondly, since the bound
factorizes over the data we achieve scalability through sub-sampling the data. Both stochastic
approximations are unbiased.
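A minimal sketch of this doubly stochastic estimate of (17): a minibatch for the sum over data (rescaled by N/batch) and a single Monte Carlo sample of f_i^L per point. The names `log_lik`, `sample_final_layer` and `kl_total` are hypothetical stand-ins for the likelihood term, the sampler of eq. (15) and the summed KL terms.

```python
import numpy as np

def elbo_estimate(X, y, log_lik, sample_final_layer, kl_total, rng, batch=32):
    # Unbiased estimate of eq. (17): subsample a minibatch and use one Monte
    # Carlo sample from q(f_i^L) per point; both sources of noise are unbiased.
    N = len(X)
    idx = rng.choice(N, size=batch, replace=False)
    f_L = sample_final_layer(X[idx], rng)
    data_term = (N / batch) * log_lik(y[idx], f_L).sum()
    return data_term - kl_total

# Toy stand-ins (assumptions) to exercise the estimator.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(3.0 * X[:, 0])
toy_sampler = lambda Xb, r: Xb[:, 0] + 0.01 * r.standard_normal(len(Xb))
toy_log_lik = lambda yb, fb: -0.5 * (yb - fb) ** 2
estimate = elbo_estimate(X, y, toy_log_lik, toy_sampler, kl_total=1.0, rng=rng)
```

Averaged over many minibatches, the estimate matches the full-batch value, which is what makes stochastic optimization of the bound sound.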
Predictions   To predict we sample from the variational posterior, changing the input locations to the test location x_*. We denote the function values at the test location as f_*^l. To obtain the density over f_*^L we use the Gaussian mixture

q(f_*^L) ≈ (1/S) ∑_{s=1}^S q(f_*^L | m^L, S^L; f_*^{(s),L−1}, Z^{L−1}),   (18)

where we draw S samples f_*^{(s),L−1} using (15), but replacing the inputs x_i with the test location x_*.
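The predictive moments of the equal-weight mixture (18) follow from the law of total variance; a small sketch (the component means and variances below are illustrative values, not outputs of a trained model):

```python
import numpy as np

def mixture_moments(means, variances):
    # Equal-weight Gaussian mixture as in (18): the predictive mean is the average
    # of the S component means; the variance adds the spread of those means.
    means, variances = np.asarray(means), np.asarray(variances)
    mean = means.mean(axis=0)
    var = variances.mean(axis=0) + (means ** 2).mean(axis=0) - mean ** 2
    return mean, var

# Two components N(0, 1) and N(2, 1): mixture mean 1, variance 1 + 1 = 2.
m, v = mixture_moments([0.0, 2.0], [1.0, 1.0])
```

In practice one can also just keep the S samples and report empirical quantiles, but the moments are often enough for reporting RMSE and calibrated error bars.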
Further Model Details While GPs are often used with a zero mean function, we consider such a
choice inappropriate for the inner layers of a DGP. Using a zero mean function causes difficulties with
the DGP prior as each GP mapping is highly non-injective. This effect was analyzed in Duvenaud
et al. (2014) where the authors suggest adding the original input X to each layer. Instead, we consider
an alternative approach and include a linear mean function m(X) = XW for all the inner layers.
If the input and output dimension are the same we use the identity matrix for W, otherwise we
compute the SVD of the data and use the top D^l left eigenvectors sorted by singular value (i.e. the PCA mapping). With these choices it is effective to initialize all inducing mean values m^l = 0. This choice of mean function is partly inspired by the 'skip layer' approach of the ResNet (He et al., 2016)
architecture.
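One way to read this construction of W (identity when dimensions match, otherwise a PCA projection) is sketched below. The exact convention is an assumption: here the projection uses the leading singular vectors of the data matrix, with columns sorted by singular value.

```python
import numpy as np

def mean_function_weights(X, d_out):
    # W for the linear mean function m(X) = X W of an inner layer: identity if
    # the input and output dimensions agree, otherwise project onto the leading
    # principal directions of the data (the PCA mapping).
    d_in = X.shape[1]
    if d_in == d_out:
        return np.eye(d_in)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d_out].T  # shape (d_in, d_out), orthonormal columns

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 5))
W = mean_function_weights(X, 2)
```

The orthonormal columns make X W a distance-preserving projection of the data onto the reduced inner-layer dimension.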
[Figure 1 panels: eight UCI regression datasets — boston (N=506, D=13), concrete (N=1030, D=8), energy (N=768, D=8), kin8nm (N=8192, D=8), naval (N=11934, D=26), power (N=9568, D=4), protein (N=45730, D=9) and wine_red (N=1599, D=22) — comparing Linear, SGP, SGP 500, AEPDGP 2 (DGP with approx EP), DGP 2–5 (this work) and PBP (Bayesian NN).]
Figure 1: Regression test log-likelihood results on benchmark datasets. Higher (to the right) is better.
The sparse GP with the same number of inducing points is highlighted as a baseline.
4 Results
We evaluate our inference method on a number of benchmark regression and classification datasets.
We stress that we are interested in models that can operate in both the small and large data regimes,
with little or no hand tuning. All our experiments were run with exactly the same hyperparameters
and initializations. See the supplementary material for details. We use min(30, D0 ) for all the inner
layers of our DGP models, where D0 is the input dimension, and the RBF kernel for all layers.
Regression Benchmarks We compare our approach to other state-of-the-art methods on 8 standard
small to medium-sized UCI benchmark datasets. Following common practice (e.g. Hernández-Lobato
and Adams, 2015) we use 20-fold cross validation with a 10% randomly selected held out test set
and scale the inputs and outputs to zero mean and unit standard deviation within the training set
(we restore the output scaling for evaluation). While we could use any kernel, we choose the RBF
kernel with a lengthscale for each dimension for direct comparison with Bui et al. (2016). The test
log-likelihood results are shown in Fig. 1. We compare our models of 2, 3, 4 and 5 layers (DGP
2–5), each with 100 inducing points, with (stochastically optimized) sparse GPs (Hensman et al., 2013) with 100 and 500 inducing points (SGP, SGP 500). We compare also to a two-layer
Bayesian neural network with ReLu activations, 50 hidden units (100 for protein and year), with
inference by probabilistic backpropagation (Hernández-Lobato and Adams, 2015) (PBP). The results are taken from Hernández-Lobato and Adams (2015) and were found to be the most effective of
several other methods for inferring Bayesian neural networks. We compare also with a DGP model
with approximate expectation propagation (EP) for inference (Bui et al., 2016). Using the authors' code² we ran a DGP model with 1 hidden layer using approximate expectation propagation (Bui et al., 2016) (AEPDGP 2). We used the input dimension for the hidden layer for a fair comparison with our models³. We found the time requirements to train a 3-layer model with this inference prohibitive.
Plots for test RMSE and further results tables can be found in the supplementary material.
On five of the eight datasets, the deepest DGP model is the best. On 'wine', 'naval' and 'boston' our DGP recovers the single-layer GP, which is not surprising: 'boston' is very small, 'wine' is
² https://github.com/thangbui/deepGP_approxEP
³ We note however that in Bui et al. (2016) the inner layers were 2D, so the results we obtained are not directly comparable to those reported in Bui et al. (2016).
near-linear (note the proximity of the linear model and the scale) and 'naval' is characterized by extremely high test likelihoods (the RMSE on this dataset is less than 0.001 for all SGP and DGP models), i.e. it is a very 'easy' dataset for a GP. The Bayesian network is not better than the sparse GP
for any dataset and significantly worse for six. The Approximate EP inference for the DGP models
is also not competitive with the sparse GP for many of the datasets, but this may be because the
initializations were designed for lower dimensional hidden layers than we used.
Our results on these small and medium sized datasets confirm that overfitting is not observed with the
DGP model, and that the DGP is never worse and often better than the single layer GP. We note in
particular that on the 'power', 'protein' and 'kin8nm' datasets all the DGP models outperform the
SGP with five times the number of inducing points.
Rectangles Benchmark   We use the Rectangle-Images dataset⁴, which is specifically designed to
distinguish deep and shallow architectures. The dataset consists of 12,000 training and 50,000 testing
examples of size 28 × 28, where each image consists of a (non-square) rectangular image against
a different background image. The task is to determine which of the height and width is greatest.
We run 2, 3 and 4 layer DGP models, and observe increasing performance with each layer. Table 1
contains the results. Note that the 500 inducing point single-layer GP is significantly less effective
than any of the deep models. Our 4-layer model achieves 77.9% classification accuracy, exceeding
the best result of 77.5% reported in Larochelle et al. (2007) with a three-layer deep belief network.
We also exceed the best result of 76.4% reported in Krauth et al. (2016) using a sparse GP with an
Arcsine kernel, a leave-one-out objective, and 1000 inducing points.
Table 1: Results on Rectangles-Images dataset (N = 12000, D = 784)

                 Single layer GP      Ours                        Larochelle [2007]    Krauth [2016]
                 SGP      SGP 500     DGP 2    DGP 3    DGP 4     DBN-3    SVM         SGP 1000
Accuracy (%)     76.1     76.4        77.3     77.8     77.9      77.5     76.96       76.4
Likelihood       −0.493   −0.485      −0.475   −0.460   −0.460    –        –           −0.478
Large-Scale Regression To demonstrate our method on a large scale regression problem we use
the UCI 'year' dataset and the 'airline' dataset, which has been commonly used by the large-scale GP community. For the 'airline' dataset we take the first 700K points for training and the next 100K for testing. We use a random 10% split for the 'year' dataset. Results are shown in Table 2, with the
log-likelihood reported in the supplementary material. In both datasets we see that the DGP models
perform better with increased depth, significantly improving in both log likelihood and RMSE over
the single-layer model, even with 500 inducing points.
Table 2: Regression test RMSE results for large datasets
           N        D    SGP     SGP 500   DGP 2   DGP 3   DGP 4   DGP 5
year       463810   90   10.67   9.89      9.58    8.98    8.93    8.87
airline    700K     8    25.6    25.1      24.6    24.3    24.2    24.1
taxi       1B       9    337.5   330.7     281.4   270.4   268.0   266.4
MNIST Multiclass Classification We apply the DGP with 2 and 3 layers to the MNIST multiclass
classification problem. We use the robust-max multiclass likelihood (Hernández-Lobato et al., 2011)
and use full unprocessed data with the standard training/test split of 60K/10K. The single-layer GP
with 100 inducing points achieves a test accuracy of 97.48% and this is increased to 98.06% and
98.11% with two and three layer DGPs, respectively. The 500 inducing point single layer model
achieved 97.9% in our implementation, though a slightly higher result for this model has previously
been reported of 98.1% (Hensman et al., 2013) and 98.4% (Krauth et al., 2016) for the same model
with 1000 inducing points. We attribute this difference to different hyperparameter initialization and
training schedules, and stress that we use exactly the same initialization and learning schedule for all
our models. The only other DGP result in the literature on this dataset is 94.24% (Wang et al., 2016)
for a two layer model with a two dimensional latent space.
[4] http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/RectanglesData
Large-Scale Classification We use the HIGGS (N = 11M, D = 28) and SUSY (N = 5.5M,
D = 18) datasets for large-scale binary classification. These datasets have been constructed from
Monte Carlo physics simulations to detect the presence of the Higgs boson and super-symmetry (Baldi
et al., 2014). We take a 10% random sample for testing and use the rest for training. We use the AUC
metric for comparison with Baldi et al. (2014). Our DGP models are the highest performing on the
SUSY dataset (AUC of 0.877 for all the DGP models) compared to shallow neural networks (NN,
0.875), deep neural networks (DNN, 0.876) and boosted decision trees (BDT, 0.863). On the HIGGS
dataset we see a steady improvement with additional layers (0.830, 0.837, 0.841 and 0.846 for DGP
2–5 respectively). On this dataset the DGP models exceed the performance of BDT (0.810) and NN
(0.816) and both single layer GP models SGP (0.785) and SGP 500 (0.794). The best performing
model on this dataset is a 5 layer DNN (0.885). Full results are reported in the supplementary
material.
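For reference, the AUC used above can be computed from the rank statistic of the positive-class scores; a minimal self-contained sketch (this is an illustration, not the evaluation code used in the paper):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example is scored above a random negative
    (ties in the scores are broken arbitrarily in this sketch)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    n_pos, n_neg = int(labels.sum()), int((~labels).sum())
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)  # 1-based ranks
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)
```

This rank formulation is equivalent to counting, over all positive/negative pairs, the fraction where the positive is scored higher.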
Massive-Scale Regression To demonstrate the efficacy of our model on massive data we use the
New York city yellow taxi trip dataset of 1.21 billion journeys [5]. Following Peng et al. (2017) we use
9 features: time of day; day of the week; day of the month; month; pick-up latitude and longitude;
drop-off latitude and longitude; travel distance. The target is to predict the journey time. We
randomly select 1B (10^9) examples for training and use 1M examples for testing, and we scale both
inputs and outputs to zero mean and unit standard deviation in the training data. We discard journeys
that are less than 10 s or greater than 5 h, or that start/end outside the New York region, which we
estimate to have squared distance less than 5° from the center of New York. The test RMSE results
are the bottom row of Table 2 and test log likelihoods are in the supplementary material. We note the
significant jump in performance from the single layer models to the DGP. As with all the large-scale
experiments, we see a consistent improvement with extra layers, but on this dataset the improvement
is particularly striking (DGP 5 achieves a 21% reduction in RMSE compared to SGP).

Table 3: Typical computation time in seconds for a single gradient step

           CPU    GPU
SGP        0.14   0.018
SGP 500    1.71   0.11
DGP 2      0.36   0.030
DGP 3      0.49   0.045
DGP 4      0.65   0.056
DGP 5      0.87   0.069
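The duration filtering and standardisation steps described above can be sketched as follows (a hypothetical illustration with made-up arrays; the geographic filter is omitted):

```python
import numpy as np

def preprocess(features, journey_time_s):
    """Keep journeys lasting between 10 s and 5 h (the region filter used
    in the paper is omitted here), then scale features and target to zero
    mean and unit standard deviation using the retained training data."""
    features = np.asarray(features, dtype=float)
    journey_time_s = np.asarray(journey_time_s, dtype=float)
    keep = (journey_time_s >= 10.0) & (journey_time_s <= 5 * 3600.0)
    x, y = features[keep], journey_time_s[keep]
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    y = (y - y.mean()) / y.std()
    return x, y
```

In a streaming 1B-point setting the mean/std would be estimated once on the training split and reused for the test split.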
5 Related Work
The first example of the outputs of a GP used as the inputs to another GP can be found in Lawrence
and Moore (2007). MAP approximation was used for inference. The seminal work of Titsias
and Lawrence (2010) demonstrated how sparse variational inference could be used to propagate
Gaussian inputs through a GP with a Gaussian likelihood. This approach was extended in Damianou
et al. (2011) to perform approximate inference in the model of Lawrence and Moore (2007), and
shortly afterwards in a similar model (Lázaro-Gredilla, 2012), which also included a linear mean
function. The key idea of both these approaches is the factorization of the variational posterior
between layers. A more general model (flexible in depth and dimensions of hidden layers) introduced
the term "DGP" (Damianou and Lawrence, 2013) and used a posterior that also factorized between layers. These approaches require a
linearly increasing number of variational parameters in the number of data. For high-dimensional
observations, it is possible to amortize the cost of this optimization with an auxiliary model. This
approach is pursued in Dai et al. (2016), and with a recurrent architecture in Mattos et al. (2016).
Another approach to inference in the exact model was presented in Hensman and Lawrence (2014),
where a sparse approximation was used within layers for the GP outputs, similar to Damianou and
Lawrence (2013), but with a projected distribution over the inputs to the next layer. The particular
form of the variational distribution was chosen to admit a tractable bound, but imposes a constraint
on the flexibility.
An alternative approach is to modify the DGP prior directly and perform inference in a parametric
model. This is achieved in Bui et al. (2016) with an inducing point approximation within each
layer, and in Cutajar et al. (2017) with an approximation to the spectral density of the kernel. Both
approaches then apply additional approximations to achieve tractable inference. In Bui et al. (2016),
an approximation to expectation propagation is used, with additional Gaussian approximations to the
log partition function to propagate uncertainty through the non-linear GP mapping. In Cutajar et al.
(2017) a fully factorized variational approximation is used for the spectral components. Both these
[5] http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml
approaches require specific kernels: in Bui et al. (2016) the kernel must have analytic expectations
under a Gaussian, and in Cutajar et al. (2017) the kernel must have an analytic spectral density.
Vafa (2016) also uses the same initial approximation as Bui et al. (2016) but applies MAP inference
for the inducing points, such that the uncertainty propagated through the layers only represents the
quality of the approximation. In the limit of infinitely many inducing points this approach recovers a
deterministic radial basis function network. A particle method is used in Wang et al. (2016), again
employing an online version of the sparse approximation used by Bui et al. (2016) within each layer.
Similarly to our approach, in Wang et al. (2016) samples are taken through the conditional model,
but differently from us they then use a point estimate for the latent variables. It is not clear how this
approach propagates uncertainty through the layers, since the GPs at each layer have point-estimate
inputs and outputs.
A pathology with the DGP with zero mean function for the inner layers was identified in Duvenaud
et al. (2014). In Duvenaud et al. (2014) a suggestion was made to concatenate the original inputs at
each layer. This approach is followed in Dai et al. (2016) and Cutajar et al. (2017). The linear mean
function was originally used by Lázaro-Gredilla (2012), though in the special case of a two-layer DGP
with a 1D hidden layer. To the best of our knowledge there has been no previous attempt to use a
linear mean function for all inner layers.
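A hedged sketch of such a linear mean function for inner layers: when a layer's input and output dimensions differ, a fixed matrix W gives the identity-like mean m(x) = xW. The construction of W from the top right-singular vectors of the inputs is an assumption made for this sketch; the exact choice in any given implementation may differ:

```python
import numpy as np

def linear_mean(X, d_out):
    """Fixed linear mean function m(x) = x @ W for an inner layer whose output
    dimension d_out differs from the input dimension: identity when the
    dimensions agree, a projection onto the top principal directions of X when
    narrowing, and zero-padding when widening (assumed construction)."""
    d_in = X.shape[1]
    if d_in == d_out:
        W = np.eye(d_in)
    elif d_in > d_out:
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        W = vt[:d_out].T  # top right-singular vectors of the inputs
    else:
        W = np.concatenate([np.eye(d_in), np.zeros((d_in, d_out - d_in))], axis=1)
    return lambda x: x @ W
```

Keeping W fixed (rather than learned) preserves the interpretation of the GP at each layer as modelling the residual around a simple deterministic map.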
6 Discussion
Our experiments show that on a wide range of tasks the DGP model with our doubly stochastic
inference is both effective and scalable. Crucially, we observe that on the small datasets the DGP
does not overfit, while on the large datasets additional layers generally increase performance and
never deteriorate it. In particular, we note that the largest gain with increasing layers is achieved
on the largest dataset (the taxi dataset, with 1B points). We note also that on all the large scale
experiments the SGP 500 model is outperformed by all the DGP models. Therefore, for the
same computational budget increasing the number of layers can be significantly more effective than
increasing the accuracy of approximate inference in the single-layer model. Other than the additional
computation time, which is fairly modest (see Table 3), we do not see downsides to using a DGP over
a single-layer GP, but substantial advantages.
While we have considered simple kernels and black-box applications, any domain-specific kernel
could be used in any layer. This is in contrast to other methods (Damianou and Lawrence, 2013; Bui
et al., 2016; Cutajar et al., 2017) that require specific kernels and intricate implementations. Our
implementation is simple (< 200 lines), publicly available [6], and is integrated with GPflow (Matthews
et al., 2017), an open-source GP framework built on top of Tensorflow (Abadi et al., 2015).
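The "doubly stochastic" scheme referred to above combines minibatch subsampling with sampling through the layers; a schematic numpy sketch of the layer-wise sampling, with hypothetical (mean_fn, var_fn) pairs standing in for the sparse variational GP layers of the real implementation:

```python
import numpy as np

def sample_through_layers(x, layers, rng):
    """One stochastic forward pass through the model: each layer contributes
    f_l = mu_l(f_{l-1}) + sqrt(var_l(f_{l-1})) * eps with eps ~ N(0, 1)
    (the reparameterization trick), so gradients of a Monte Carlo objective
    can flow through the sampling."""
    f = np.asarray(x, dtype=float)
    for mean_fn, var_fn in layers:
        mu, var = mean_fn(f), var_fn(f)
        f = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
    return f
```

With all variances set to zero the pass is deterministic and reduces to composing the layer means.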
7 Conclusion
We have presented a new method for inference in Deep Gaussian Process (DGP) models. With our
inference we have shown that the DGP can be used on a range of regression and classification tasks
with no hand-tuning. Our results show that in practice the DGP always exceeds or matches the
performance of a single layer GP. Further, we have shown that the DGP often exceeds the single
layer significantly, even when the quality of the approximation to the single layer is improved. Our
approach is highly scalable and benefits from GPU acceleration.
The most significant limitation of our approach is dealing with high-dimensional inner layers. We
used a linear mean function for the high-dimensional datasets but left this mean function fixed, since
optimizing its parameters would go against our non-parametric paradigm. It would be possible to treat
this mapping probabilistically, following the work of Titsias and Lázaro-Gredilla (2013).
Acknowledgments
We have greatly appreciated valuable discussions with James Hensman and Steindor Saemundsson
in the preparation of this work. We thank Vincent Dutordoir and anonymous reviewers for helpful
feedback on the manuscript. We are grateful for a Microsoft Azure Scholarship and support through
a Google Faculty Research Award to Marc Deisenroth.
[6] https://github.com/ICL-SML/Doubly-Stochastic-DGP
References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. Corrado, A. Davis, J. Dean,
M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, L. Kaiser, M. Kudlur,
J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, J. Shlens, B. Steiner, I. Sutskever, P. Tucker,
V. Vanhoucke, V. Vasudevan, O. Vinyals, P. Warden, M. Wicke, Y. Yu, and X. Zheng. TensorFlow:
Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint:1603.04467,
2015.
P. Baldi, P. Sadowski, and D. Whiteson. Searching for Exotic Particles in High-Energy Physics with
Deep Learning. Nature Communications, 2014.
E. V. Bonilla, K. Krauth, and A. Dezfouli. Generic Inference in Latent Gaussian Process Models.
arXiv preprint:1609.00577, 2016.
F.-X. Briol, C. J. Oates, M. Girolami, M. A. Osborne, and D. Sejdinovic. Probabilistic Integration: A
Role for Statisticians in Numerical Analysis? arXiv preprint:1512.00933, 2015.
T. D. Bui, D. Hernández-Lobato, Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Deep Gaussian
Processes for Regression using Approximate Expectation Propagation. International Conference
on Machine Learning, 2016.
R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian Processes for
Regression. IEEE International Joint Conference on Neural Networks, 2016.
K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random Feature Expansions for Deep
Gaussian Processes. International Conference on Machine Learning, 2017.
M. Cutler and J. P. How. Efficient Reinforcement Learning for Robots using Informative Simulated
Priors. IEEE International Conference on Robotics and Automation, 2015.
Z. Dai, A. Damianou, J. González, and N. Lawrence. Variational Auto-encoded Deep Gaussian
Processes. International Conference on Learning Representations, 2016.
A. C. Damianou and N. D. Lawrence. Deep Gaussian Processes. International Conference on
Artificial Intelligence and Statistics, 2013.
A. C. Damianou, M. K. Titsias, and N. D. Lawrence. Variational Gaussian Process Dynamical
Systems. Advances in Neural Information Processing Systems, 2011.
M. P. Deisenroth and C. E. Rasmussen. PILCO: A Model-Based and Data-Efficient Approach to
Policy Search. International Conference on Machine Learning, 2011.
P. J. Diggle and P. J. Ribeiro. Model-based Geostatistics. Springer, 2007.
D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure Discovery in
Nonparametric Regression through Compositional Kernel Search. International Conference on
Machine Learning, 2013.
D. Duvenaud, O. Rippel, R. P. Adams, and Z. Ghahramani. Avoiding Pathologies in Very Deep
Networks. Artificial Intelligence and Statistics, 2014.
Y. Gal, Y. Chen, and Z. Ghahramani. Latent Gaussian Processes for Distribution Estimation of
Multivariate Categorical Data. International Conference on Machine Learning, 2015.
R. Garnett, M. Osborne, and S. Roberts. Sequential Bayesian Prediction in the Presence of Changepoints. International Conference on Machine Learning, 2009.
C. Guestrin, A. Krause, and A. P. Singh. Near-optimal Sensor Placements in Gaussian Processes.
International Conference on Machine Learning, 2005.
K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. IEEE
Conference on Computer Vision and Pattern Recognition, 2016.
J. Hensman and N. D. Lawrence. Nested Variational Compression in Deep Gaussian Processes. arXiv
preprint:1412.1370, 2014.
J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian Processes for Big Data. Uncertainty in Artificial
Intelligence, 2013.
J. Hensman, A. Matthews, M. Filippone, and Z. Ghahramani. MCMC for Variationally Sparse
Gaussian Processes. Advances in Neural Information Processing Systems, 2015.
D. Hernández-Lobato, J. M. Hernández-Lobato, and P. Dupont. Robust Multi-class Gaussian Process
Classification. Advances in Neural Information Processing Systems, 2011.
J. M. Hernández-Lobato and R. Adams. Probabilistic Backpropagation for Scalable Learning of
Bayesian Neural Networks. International Conference on Machine Learning, 2015.
D. P. Kingma, T. Salimans, and M. Welling. Variational Dropout and the Local Reparameterization
Trick. Advances in Neural Information Processing Systems, 2015.
J. Ko and D. Fox. GP-BayesFilters: Bayesian Filtering using Gaussian Process Prediction and
Observation Models. IEEE Intelligent Robots and Systems, 2008.
K. Krauth, E. V. Bonilla, K. Cutajar, and M. Filippone. AutoGP: Exploring the Capabilities and
Limitations of Gaussian Process Models. arXiv preprint:1610.05392, 2016.
H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An Empirical Evaluation of Deep
Architectures on Problems with Many Factors of Variation. International Conference on Machine
Learning, 2007.
N. D. Lawrence and A. J. Moore. Hierarchical Gaussian Process Latent Variable Models. International
Conference on Machine Learning, 2007.
M. Lázaro-Gredilla. Bayesian Warped Gaussian Processes. Advances in Neural Information Processing Systems, 2012.
D. J. C. Mackay. Comparison of Approximate Methods for Handling Hyperparameters. Neural
computation, 1999.
A. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine
Learning Research, 2017.
A. G. d. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani. On Sparse Variational Methods
and The Kullback-Leibler Divergence Between Stochastic Processes. Artificial Intelligence and
Statistics, 2016.
C. L. C. Mattos, Z. Dai, A. Damianou, J. Forth, G. A. Barreto, and N. D. Lawrence. Recurrent
Gaussian Processes. International Conference on Learning Representations, 2016.
H. Peng, S. Zhe, and Y. Qi. Asynchronous Distributed Variational Gaussian Processes. arXiv
preprint:1704.06735, 2017.
C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate Inference
in Deep Generative Models. International Conference on Machine Learning, 2014.
J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning
Algorithms. Advances in Neural Information Processing Systems, 2012.
M. K. Titsias and N. D. Lawrence. Bayesian Gaussian Process Latent Variable Model. International
Conference on Artificial Intelligence and Statistics, 2010.
M. K. Titsias and M. Lázaro-Gredilla. Variational Inference for Mahalanobis Distance Metrics in
Gaussian Process Regression. Advances in Neural Information Processing Systems, 2013.
R. Turner and M. Sahani. Two Problems with Variational Expectation Maximisation for Time-Series
Models. Bayesian Time Series Models, 2011.
K. Vafa. Training Deep Gaussian Processes with Sampling. Advances in Approximate Bayesian
Inference Workshop, Neural Information Processing Systems, 2016.
Y. Wang, M. Brubaker, B. Chaib-Draa, and R. Urtasun. Sequential Inference for Deep Gaussian
Process. Artificial Intelligence and Statistics, 2016.
A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Deep Kernel Learning. Artificial Intelligence
and Statistics, 2016.
Ranking Data with Continuous Labels
through Oriented Recursive Partitions

Stephan Clémençon
Mastane Achab
LTCI, Télécom ParisTech, Université Paris-Saclay
75013 Paris, France
[email protected]
Abstract
We formulate a supervised learning problem, referred to as continuous ranking,
where a continuous real-valued label Y is assigned to an observable r.v. X taking
its values in a feature space 𝒳 and the goal is to order all possible observations
x in 𝒳 by means of a scoring function s : 𝒳 → R so that s(X) and Y tend to
increase or decrease together with highest probability. This problem generalizes
bi/multi-partite ranking to a certain extent and the task of finding optimal scoring
functions s(x) can be naturally cast as optimization of a dedicated functional criterion, called the IROC curve here, or as maximization of the Kendall τ related to
the pair (s(X), Y). From the theoretical side, we describe the optimal elements of
this problem and provide statistical guarantees for empirical Kendall τ maximization under appropriate conditions for the class of scoring function candidates. We
also propose a recursive statistical learning algorithm tailored to empirical IROC
curve optimization and producing a piecewise constant scoring function that is
fully described by an oriented binary tree. Preliminary numerical experiments
highlight the difference in nature between regression and continuous ranking and
provide strong empirical evidence of the performance of empirical optimizers of
the criteria proposed.
1 Introduction
The predictive learning problem considered in this paper can be easily stated in an informal fashion,
as follows. Given a collection of objects of arbitrary cardinality, N ≥ 1 say, respectively described
by characteristics x1, . . . , xN in a feature space 𝒳, the goal is to learn how to order them by
increasing order of magnitude of a certain unknown continuous variable y. To fix ideas, the attribute
y can represent the ?size? of the object and be difficult to measure, as for the physical measurement of
microscopic bodies in chemistry and biology or the cash flow of companies in quantitative finance
and the features x may then correspond to indirect measurements. The most convenient way to
define a preorder on a feature space 𝒳 is to transport the natural order on the real line onto it by
means of a (measurable) scoring function s : 𝒳 → R: an object with characteristics x is then said to
be "larger" ("strictly larger", respectively) than an object described by x′ according to the scoring rule
s when s(x′) ≤ s(x) (when s(x′) < s(x)). Statistical learning boils down here to building a scoring
function s(x), based on a training data set Dn = {(X1 , Y1 ), . . . , (Xn , Yn )} of objects for which
the values of all variables (direct and indirect measurements) have been jointly observed, such that
s(X) and Y tend to increase or decrease together with highest probability or, in other words, such
that the ordering of new objects induced by s(x) matches that defined by their true measures as well
as possible. This problem, which shall be referred to as continuous ranking throughout the article, can
be viewed as an extension of bipartite ranking, where the output variable Y is assumed to be binary
and the objective can be naturally formulated as a functional M -estimation problem by means of the
concept of ROC curve, see [7]. Refer also to [4], [11], [1] for approaches based on the optimization
of summary performance measures such as the AUC criterion in the binary context. Generalization
to the situation where the random label is ordinal and may take a finite number K ≥ 3 of values
is referred to as multipartite ranking and has been recently investigated in [16] (see also e.g. [14]),
where distributional conditions guaranteeing that ROC surface and the VUS criterion can be used
to determine optimal scoring functions are exhibited in particular.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
It is the major purpose of this paper to formulate the continuous ranking problem in a quantitative
manner and explore the connection between the latter and bi/multi-partite ranking. Intuitively, optimal scoring rules would be also optimal for any bipartite subproblem defined by thresholding the
continuous variable Y with cut-off t > 0, separating the observations X such that Y < t from
those such that Y > t. Viewing this way continuous ranking as a continuum of nested bipartite
ranking problems, we provide here sufficient conditions for the existence of such (optimal) scoring
rules and we introduce a concept of integrated ROC curve (IROC curve in abbreviated form) that
may serve as a natural performance measure for continuous ranking, as well as the related notion of
integrated AUC criterion, a summary scalar criterion, akin to Kendall tau. Generalization properties
of empirical Kendall tau maximizers are discussed in the Supplementary Material. The paper also
introduces a novel recursive algorithm that solves a discretized version of the empirical integrated
ROC curve optimization problem, producing a scoring function that can be computed by means of
a hierarchical combination of binary classification rules. Numerical experiments providing strong
empirical evidence of the relevance of the approach promoted in this paper are also presented.
The paper is structured as follows. The probabilistic framework we consider is described and key
concepts of bi/multi-partite ranking are briefly recalled in section 2. Conditions under which optimal
solutions of the problem of ranking data with continuous labels exist are next investigated in section
3, while section 4 introduces a dedicated quantitative (functional) performance measure, the IROC
curve. The algorithmic approach we propose in order to learn scoring functions with nearly optimal
IROC curves is presented at length in section 5. Numerical results are displayed in section 6. Some
technical proofs are deferred to the Supplementary Material.
2 Notation and Preliminaries
Throughout the paper, the indicator function of any event E is denoted by I{E}. The pseudo-inverse
of any cdf F(t) on ℝ is denoted by F^{-1}(u) = inf{s ∈ ℝ : F(s) ≥ u}, while U([0,1]) denotes the
uniform distribution on the unit interval [0,1].
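For concreteness, the pseudo-inverse of an empirical cdf can be computed as follows (an illustrative sketch, not part of the paper; the function name is ours):

```python
import math

def empirical_cdf_pseudo_inverse(sample, u):
    """Pseudo-inverse F^{-1}(u) = inf{s : F(s) >= u} of the empirical cdf
    of `sample`; for u in (0, 1] this is the ceil(u*n)-th smallest point."""
    xs = sorted(sample)
    n = len(xs)
    k = max(math.ceil(u * n), 1)  # convention: return the minimum for u = 0
    return xs[k - 1]

print(empirical_cdf_pseudo_inverse([3.0, 1.0, 2.0, 4.0], 0.5))   # 2.0
print(empirical_cdf_pseudo_inverse([3.0, 1.0, 2.0, 4.0], 0.75))  # 3.0
```

Note that, as in the definition above, the infimum convention makes the quantile right-continuous in u over the jumps of the empirical cdf.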
2.1 The probabilistic framework
Given a continuous real-valued r.v. Y representing an attribute of an object, its "size" say, and a
random vector X taking its values in a (typically high-dimensional euclidean) feature space X
modelling other observable characteristics of the object (e.g. "indirect measurements" of the size
of the object), hopefully useful for predicting Y, the statistical learning problem considered here
is to learn, from n ≥ 1 independent training observations D_n = {(X_1, Y_1), ..., (X_n, Y_n)}
drawn as the pair (X, Y), a measurable mapping s : X → ℝ, that shall be referred to as a
scoring function throughout the paper, so that the variables s(X) and Y tend to increase or decrease together: ideally, the larger the score s(X), the higher the size Y. For simplicity, we assume throughout the article that X = ℝ^d with d ≥ 1 and that the support of Y's distribution
is compact, equal to [0,1] say. For any q ≥ 1, we denote by λ_q the Lebesgue measure on ℝ^q
equipped with its Borelian σ-algebra, and suppose that the joint distribution F_{X,Y}(dxdy) of the
pair (X, Y) has a density f_{X,Y}(x, y) w.r.t. the tensor product measure λ_d ⊗ λ_1. We also introduce the marginal distributions F_Y(dy) = f_Y(y)λ_1(dy) and F_X(dx) = f_X(x)λ_d(dx), where
f_Y(y) = ∫_{x∈X} f_{X,Y}(x, y)λ_d(dx) and f_X(x) = ∫_{y∈[0,1]} f_{X,Y}(x, y)λ_1(dy), as well as the conditional densities f_{X|Y=y}(x) = f_{X,Y}(x, y)/f_Y(y) and f_{Y|X=x}(y) = f_{X,Y}(x, y)/f_X(x). Observe
incidentally that the probabilistic framework of the continuous ranking problem is quite similar to
that of distribution-free regression. However, as shall be seen in the subsequent analysis, even if
the regression function m(x) = E[Y | X = x] can be optimal under appropriate conditions, just
like for regression, measuring ranking performance involves criteria that are of different nature than
the expected least square error and plug-in rules may not be relevant for the goal pursued here, as
depicted by Fig. 2 in the Supplementary Material.
Scoring functions. The set of all scoring functions is denoted by S here. Any scoring function
s ∈ S defines a total preorder on the space X: ∀(x, x') ∈ X², x ⪯_s x' ⇔ s(x) ≤ s(x'). We also
set x ≺_s x' when s(x) < s(x') and x =_s x' when s(x) = s(x') for (x, x') ∈ X².
2.2 Bi/multi-partite ranking
Suppose that Z is a binary label, taking its values in {−1, +1} say, assigned to the r.v. X. In bipartite
ranking, the goal is to pick s in S so that the larger s(X), the greater the probability that Z is equal
to +1 ideally. In other words, the objective is to learn s(x) such that the r.v. s(X) given Z = +1
is as stochastically larger¹ as possible than the r.v. s(X) given Z = −1: the difference between
Ḡ_s(t) = P{s(X) ≥ t | Z = +1} and H̄_s(t) = P{s(X) ≥ t | Z = −1} should thus be maximal
for all t ∈ ℝ. This can be naturally quantified by means of the notion of ROC curve of a candidate
s ∈ S, i.e. the parametrized curve t ∈ ℝ ↦ (H̄_s(t), Ḡ_s(t)), which can be viewed as the graph
of a mapping ROC_s : α ∈ (0,1) ↦ ROC_s(α), connecting possible discontinuity points by linear
segments (so that ROC_s(α) = Ḡ_s ∘ H_s^{-1}(1 − α) when H_s has no flat part in H_s^{-1}(1 − α),
where H_s = 1 − H̄_s). A basic Neyman-Pearson theory argument shows that the optimal elements
s*(x) related to this natural (functional) bipartite ranking criterion (i.e. scoring functions whose
ROC curve dominates any other ROC curve everywhere on (0,1)) are transforms (T ∘ η)(x) of
the posterior probability η(x) = P{Z = +1 | X = x}, where T : SUPP(η(X)) → ℝ is any
strictly increasing borelian mapping. Optimization of the curve in sup norm has been considered in
[7] or in [8] for instance. However, given its functional nature, in practice the ROC curve of any
s ∈ S is often summarized by the area under it, a performance measure that can be interpreted in a
probabilistic manner, as the theoretical rate of concording pairs:

AUC(s) = P{s(X) < s(X') | Z = −1, Z' = +1} + (1/2) P{s(X) = s(X') | Z = −1, Z' = +1},   (1)

where (X', Z') denotes an independent copy of (X, Z). A variety of algorithms aiming at maximizing the AUC criterion or surrogate pairwise criteria have been proposed and studied in the
literature, among which [11], [15] or [3], whereas generalization properties of empirical AUC maximizers have been studied in [5], [1] and [12]. An analysis of the relationship between the AUC and
the error rate is given in [9].
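As an illustration, the empirical counterpart of criterion (1) can be computed by averaging over all (negative, positive) pairs of scored examples (a minimal sketch with our own naming; ties count for one half):

```python
def empirical_auc(scores, labels):
    """Empirical AUC of formula (1): rate of concording pairs among
    (negative, positive) pairs, with tied scores counted for one half."""
    neg = [s for s, z in zip(scores, labels) if z == -1]
    pos = [s for s, z in zip(scores, labels) if z == +1]
    total = 0.0
    for sn in neg:
        for sp in pos:
            if sn < sp:
                total += 1.0
            elif sn == sp:
                total += 0.5
    return total / (len(neg) * len(pos))

# A scorer that ranks every positive above every negative reaches AUC = 1.
print(empirical_auc([0.1, 0.4, 0.35, 0.8], [-1, +1, -1, +1]))  # 1.0
```

This quadratic-time pairwise average is exactly the Wilcoxon-Mann-Whitney form of the AUC; faster O(n log n) rank-based implementations exist but are not needed here.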
Extension to the situation where the label Y takes at least three ordinal values (i.e. multipartite
ranking) has also been investigated, see e.g. [14] or [6]. In [16], it is shown that, in contrast to the
bipartite setup, the existence of optimal solutions cannot be guaranteed in general, and conditions on
(X, Y)'s distribution ensuring that optimal solutions do exist and that extensions of bipartite ranking
criteria such as the ROC manifold and the volume under it can be used for learning optimal scoring
rules have been exhibited. An analogous analysis in the context of continuous ranking is carried out
in the next section.
3 Optimal elements in ranking data with continuous labels
In this section, a natural definition of the set of optimal elements for continuous ranking is first
proposed. Existence and characterization of such optimal scoring functions are next discussed.
3.1 Optimal scoring rules for continuous ranking
Considering a threshold value y ∈ [0,1], a considerably weakened (and discretized) version of the
problem stated informally above would consist in finding s so that the r.v. s(X) given Y > y is
as stochastically larger than s(X) given Y < y as possible. This subproblem coincides with the
bipartite ranking problem related to the pair (X, Z_y), where Z_y = 2 I{Y > y} − 1. As briefly
recalled in subsection 2.2, the optimal set S*_y is composed of the scoring functions that induce the
same ordering as

η_y(X) = P{Y > y | X} = 1 − (1 − p_y)/(1 − p_y + p_y Φ_y(X)),

where p_y = 1 − F_Y(y) = P{Y > y} and Φ_y(X) = (dF_{X|Y>y}/dF_{X|Y<y})(X).

¹ Given two real-valued r.v.'s U and U', recall that U is said to be stochastically larger than U' when
P{U ≥ t} ≥ P{U' ≥ t} for all t ∈ ℝ.
A continuum of bipartite ranking problems. The rationale behind the definition of the set S* of
optimal scoring rules for continuous ranking is that any element s* should score observations x in
the same order as η_y (or equivalently as Φ_y).

Definition 1. (OPTIMAL SCORING RULE) An optimal scoring rule for the continuous ranking problem related to the random pair (X, Y) is any element s* that fulfills: ∀y ∈ (0,1),

∀(x, x') ∈ X², η_y(x) < η_y(x') ⇒ s*(x) < s*(x').   (2)

In other words, the set of optimal rules is defined as S* = ⋂_{y∈(0,1)} S*_y.

It is noteworthy that, although the definition above is natural, the set S* can be empty in absence of
any distributional assumption, as shown by the following example.
Example 1. As a counter-example, consider the distributions F_{X,Y} such that F_Y = U([0,1]) and
F_{X|Y=y} = N(|2y − 1|, (2y − 1)²). Observe that (X, 1 − Y) =_d (X, Y), so that Φ_{1−t} = Φ_t^{-1} for all
t ∈ (0,1), and there exists t ≠ 0 s.t. Φ_t is not constant. Hence, there exists no s* in S such that (2)
holds true for all t ∈ (0,1).
Remark 1. (INVARIANCE) We point out that the class S* of optimal elements for continuous ranking thus defined is invariant by strictly increasing transforms of the "size" variable Y (in particular,
a change of unit has no impact on the definition of S*): for any borelian and strictly increasing
mapping H : (0,1) → (0,1), any scoring function s*(x) that is optimal for the continuous ranking
problem related to the pair (X, Y) is still optimal for that related to (X, H(Y)) (since, under these
hypotheses, for any y ∈ (0,1): Y > y ⇔ H(Y) > H(y)).
3.2 Existence and characterization of optimal scoring rules
We now investigate conditions guaranteeing the existence of optimal scoring functions for the continuous ranking problem.
Proposition 1. The following assertions are equivalent.
1. For all 0 < y < y' < 1, for all (x, x') ∈ X²: η_y(x) < η_y(x') ⇒ η_{y'}(x) ≤ η_{y'}(x').

2. There exists an optimal scoring rule s* (i.e. S* ≠ ∅).

3. The regression function m(x) = E[Y | X = x] is an optimal scoring rule.

4. The collection of probability distributions F_{X|Y=y}(dx) = f_{X|Y=y}(x)λ_d(dx), y ∈ (0,1),
satisfies the monotone likelihood ratio property: there exist s* ∈ S and, for all 0 < y <
y' < 1, an increasing function φ_{y,y'} : ℝ → ℝ_+ such that: ∀x ∈ ℝ^d,

f_{X|Y=y'}(x) / f_{X|Y=y}(x) = φ_{y,y'}(s*(x)).
Refer to the Appendix section for the technical proof. Truth be told, assessing that Assertion
1 holds is a very challenging statistical task. However, through important examples, we now describe (not
uncommon) situations where the conditions stated in Proposition 1 are fulfilled.
Example 2. We give a few important examples of probabilistic models fulfilling the properties listed
in Proposition 1.
• Regression model. Suppose that Y = m(X) + ε, where m : X → ℝ is a borelian function and
ε is a centered r.v. independent from X. One may easily check that m ∈ S*.

• Exponential families. Suppose that f_{X|Y=y}(x) = exp(κ(y)T(x) − ψ(y)) f(x) for all x ∈ ℝ^d,
where f : ℝ^d → ℝ_+ is borelian, κ : [0,1] → ℝ is a borelian strictly increasing function and
T : ℝ^d → ℝ is a borelian mapping such that ψ(y) = log ∫_{x∈ℝ^d} exp(κ(y)T(x)) f(x) dx < +∞.
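For the exponential family example, the likelihood ratio f_{X|Y=y'}(x)/f_{X|Y=y}(x) = exp((κ(y') − κ(y))T(x) − (ψ(y') − ψ(y))) is increasing in T(x) whenever y < y', since κ is increasing, which is precisely the monotone likelihood ratio property of Proposition 1. A quick numeric check, with the illustrative choices κ(y) = y, T(x) = x and f the standard normal density (our own toy instantiation, not from the paper):

```python
import math

# Gaussian-shift exponential family: kappa(y) = y, T(x) = x, f standard normal;
# psi(y) = y^2 / 2 is then the normalizing constant, and f_{X|Y=y} = N(y, 1).
kappa = lambda y: y
T = lambda x: x
psi = lambda y: y ** 2 / 2.0

def cond_density(x, y):
    base = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return math.exp(kappa(y) * T(x) - psi(y)) * base

def ratio(x, y_lo, y_hi):
    """Likelihood ratio f_{X|Y=y_hi}(x) / f_{X|Y=y_lo}(x)."""
    return cond_density(x, y_hi) / cond_density(x, y_lo)

# The ratio exp(0.6 x - 0.3) is increasing in x: monotone likelihood ratio.
vals = [ratio(x, 0.2, 0.8) for x in (-1.0, 0.0, 1.0, 2.0)]
print(all(a < b for a, b in zip(vals, vals[1:])))  # True
```

Here s*(x) = T(x) = x works as the optimal scoring rule of Assertion 4.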
We point out that, although the regression function m(x) is an optimal scoring function when
S* ≠ ∅, the continuous ranking problem does not coincide with distribution-free regression (notice
incidentally that, in this case, any strictly increasing transform of m(x) belongs to S* as well). As
depicted by Fig. 2, the least-squares criterion is not relevant to evaluate continuous ranking performance, and naive plug-in strategies should be avoided; see Remark 3 below. Dedicated performance
criteria are proposed in the next section.
4 Performance measures for continuous ranking
We now investigate quantitative criteria for assessing the performance in the continuous ranking
problem, which practical machine-learning algorithms may rely on. We place ourselves in the situation where the set S* is not empty, see Proposition 1 above.
A functional performance measure. It follows from the view developed in the previous section
that, for any (s, s*) ∈ S × S* and for all y ∈ (0,1), we have:

∀α ∈ (0,1), ROC_{s,y}(α) ≤ ROC_{s*,y}(α) = ROC*_y(α),   (3)

denoting by ROC_{s,y} the ROC curve of any s ∈ S related to the bipartite ranking subproblem
(X, Z_y), and by ROC*_y the corresponding optimal ROC curve, i.e. the ROC curve of strictly increasing transforms of η_y(x). Based on this observation, it is natural to design a dedicated performance
measure by aggregating these "sub-criteria". Integrating over y w.r.t. a σ-finite measure μ with support equal to [0,1], this leads to the following definition: IROC_{μ,s}(α) = ∫ ROC_{s,y}(α) μ(dy). The
functional criterion thus defined inherits properties from the ROC_{s,y}'s (e.g. monotonicity, concavity). In addition, the curve IROC_{μ,s*} with s* ∈ S* dominates everywhere on (0,1) any other curve
IROC_{μ,s} for s ∈ S. However, except in pathologic situations (e.g. when s(x) is constant), the curve
IROC_{μ,s} is not invariant when replacing Y's distribution by that of a strictly increasing transform
H(Y). In order to guarantee that this desirable property is fulfilled (see Remark 1), one should
integrate w.r.t. Y's distribution (which boils down to replacing Y by the uniformly distributed r.v.
F_Y(Y)).
Definition 2. (INTEGRATED ROC/AUC CRITERIA) The integrated ROC curve of any scoring rule
s ∈ S is defined as: ∀α ∈ (0,1),

IROC_s(α) = ∫_{y=0}^{1} ROC_{s,y}(α) F_Y(dy) = E[ROC_{s,Y}(α)].   (4)

The integrated AUC criterion is defined as the area under the integrated ROC curve: ∀s ∈ S,

IAUC(s) = ∫_{α=0}^{1} IROC_s(α) dα.   (5)
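In practice, statistical versions of (4)-(5) are obtained by plugging in empirical counterparts; since the empirical F_Y puts mass 1/n on each observed label, the empirical IAUC boils down to an average of per-threshold bipartite AUCs. A rough sketch under our own simplifications (thresholds that do not split the sample non-trivially are skipped):

```python
def bipartite_auc(scores, labels):
    """Empirical AUC for binary labels in {-1, +1}, ties counted for one half."""
    neg = [s for s, z in zip(scores, labels) if z == -1]
    pos = [s for s, z in zip(scores, labels) if z == +1]
    if not neg or not pos:
        return None  # threshold does not define a bipartite subproblem
    tot = sum(1.0 if sn < sp else 0.5 if sn == sp else 0.0
              for sn in neg for sp in pos)
    return tot / (len(neg) * len(pos))

def empirical_iauc(scores, ys):
    """Average AUC_y over thresholds y drawn from the empirical F_Y,
    i.e. over the observed labels that split the sample non-trivially."""
    aucs = []
    for y in ys:
        labels = [+1 if yi > y else -1 for yi in ys]
        auc = bipartite_auc(scores, labels)
        if auc is not None:
            aucs.append(auc)
    return sum(aucs) / len(aucs)

ys = [0.1, 0.2, 0.3, 0.4]
print(empirical_iauc([1.0, 2.0, 3.0, 4.0], ys))  # perfectly concordant scorer -> 1.0
```

The same loop, retaining the whole curve ROC_{s,y} instead of its area, yields an empirical IROC curve.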
The following result reveals the relevance of the functional/summary criteria defined above for the
continuous ranking problem. Additional properties of IROC curves are listed in the Supplementary
Material.
Theorem 1. Let s* ∈ S. The following assertions are equivalent.

1. The assertions of Proposition 1 are fulfilled and s* is an optimal scoring function in the
sense given by Definition 1.

2. For all α ∈ (0,1), IROC_{s*}(α) = E[ROC*_Y(α)].

3. We have IAUC(s*) = E[AUC*_Y], where AUC*_y = ∫_{α=0}^{1} ROC*_y(α) dα for all y ∈ (0,1).

If S* ≠ ∅, then we have: ∀s ∈ S,

IROC_s(α) ≤ IROC*(α) := E[ROC*_Y(α)], for any α ∈ (0,1),

IAUC(s) ≤ IAUC* := E[AUC*_Y].

In addition, for any borelian and strictly increasing mapping H : (0,1) → (0,1), replacing Y by
H(Y) leaves the curves IROC_s, s ∈ S, unchanged.
Equipped with the notion defined above, a scoring rule s_1 is said to be more accurate than another one s_2 if IROC_{s_2}(α) ≤ IROC_{s_1}(α) for all α ∈ (0,1). The IROC curve criterion thus
provides a partial preorder on S. Observe also that, by virtue of Fubini's theorem, we have
IAUC(s) = ∫ AUC_y(s) F_Y(dy) for all s ∈ S, denoting by AUC_y(s) the AUC of s related to
the bipartite ranking subproblem (X, Z_y). Just like the AUC for bipartite ranking, the scalar IAUC
criterion defines a full preorder on S for continuous ranking. Based on a training dataset D_n of independent copies of (X, Y), statistical versions of the IROC/IAUC criteria can be straightforwardly
computed by replacing the distributions F_Y, F_{X|Y>t} and F_{X|Y<t} by their empirical counterparts in
(3)-(5), see the Supplementary Material for further details. The lemma below provides a probabilistic interpretation of the IAUC criterion.
Lemma 1. Let (X', Y') be a copy of the random pair (X, Y) and Y'' a copy of the r.v. Y. Suppose
that (X, Y), (X', Y') and Y'' are defined on the same probability space and are independent. For
all s ∈ S, we have:

IAUC(s) = P{s(X) < s(X') | Y < Y'' < Y'} + (1/2) P{s(X) = s(X') | Y < Y'' < Y'}.   (6)

This result shows in particular that a natural statistical estimate of IAUC(s) based on D_n involves
U-statistics of degree 3. Its proof is given in the Supplementary Material for completeness.
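Lemma 1 suggests estimating IAUC(s) directly by a degree-3 U-statistic: average, over all triples whose labels satisfy Y < Y'' < Y', the indicator that the two extreme observations are scored concordantly. An illustrative implementation (names are ours):

```python
from itertools import permutations

def iauc_ustat(scores, ys):
    """Degree-3 U-statistic estimate of IAUC(s), formula (6): conditional
    frequency of s(X) < s(X') given Y < Y'' < Y', ties counted for one half."""
    num, den = 0.0, 0
    for i, k, j in permutations(range(len(ys)), 3):
        # (i, j) plays the role of (X, Y), (X', Y'); k that of the pivot Y''
        if ys[i] < ys[k] < ys[j]:
            den += 1
            if scores[i] < scores[j]:
                num += 1.0
            elif scores[i] == scores[j]:
                num += 0.5
    return num / den

ys = [0.1, 0.2, 0.3, 0.4]
print(iauc_ustat([1.0, 2.0, 3.0, 4.0], ys))  # concordant scorer -> 1.0
```

The O(n³) triple loop is only for exposition; in practice one would subsample triples or reuse the per-threshold decomposition above.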
The Kendall τ statistic. The quantity (6) is akin to another popular way to measure, in a summary
fashion, the tendency to define the same ordering on the statistical population:

d_τ(s) := P{(s(X) − s(X'))·(Y − Y') > 0} + (1/2) P{s(X) = s(X')}   (7)
       = P{s(X) < s(X') | Y < Y'} + (1/2) P{X =_s X'},

where (X', Y') denotes an independent copy of (X, Y), observing that P{Y < Y'} = 1/2. The
empirical counterpart of (7) based on the sample D_n, given by

d̂_n(s) = (2/(n(n−1))) Σ_{i<j} I{(s(X_i) − s(X_j))·(Y_i − Y_j) > 0} + (1/(n(n−1))) Σ_{i<j} I{s(X_i) = s(X_j)},   (8)
is known as the Kendall τ statistic and is widely used in the context of statistical hypothesis testing.
The quantity (7) shall thus be referred to as the (theoretical or true) Kendall τ. Notice that d_τ(s) is
invariant by strictly increasing transformation of s(x) and thus describes properties of the order it
defines. The following result reveals that the class S*, when non empty, is the set of maximizers of
the theoretical Kendall τ. Refer to the Supplementary Material for the technical proof.
Proposition 2. Suppose that S* ≠ ∅. For any (s, s*) ∈ S × S*, we have: d_τ(s) ≤ d_τ(s*).
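For illustration, the statistic (8) can be computed in O(n²) time as follows (a direct transcription with our own naming):

```python
def kendall_tau_hat(scores, ys):
    """Empirical Kendall tau of formula (8): frequency of concordant pairs,
    with pairs of tied scores counted for one half."""
    n = len(scores)
    conc, ties = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (scores[i] - scores[j]) * (ys[i] - ys[j])
            if prod > 0:
                conc += 1
            elif scores[i] == scores[j]:
                ties += 1
    return (2.0 * conc + ties) / (n * (n - 1))

print(kendall_tau_hat([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # 1.0
print(kendall_tau_hat([1.0, 1.0], [10.0, 20.0]))             # 0.5
```

Note this version takes values in [0, 1] rather than the [-1, 1] range of the classical correlation-style Kendall τ; the two are related by an affine transform.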
Equipped with these criteria, the objective expressed above in an informal manner can now be formulated in a quantitative manner as a (possibly functional) M-estimation problem. In practice,
the goal pursued is to find a reasonable approximation of a solution to the optimization problem
max_{s∈S} d_τ(s) (respectively max_{s∈S} IAUC(s)), where the supremum is taken over the set of all
scoring functions s : X → ℝ. Of course, these criteria are unknown in general, just like (X, Y)'s
probability distribution, and the empirical risk minimization (ERM in abbreviated form) paradigm
(see [10]) invites for maximizing the statistical version (8) over a class S_0 ⊂ S of controlled complexity when considering the criterion d_τ(s) for instance. The generalization capacity of empirical
maximizers of the Kendall τ can be straightforwardly established using results in [5]. More details
are given in the Supplementary Material.
Before describing a practical algorithm for recursive maximization of the IROC curve, a few remarks are in order.
Remark 2. (ON KENDALL τ AND AUC) We point out that, in the bipartite ranking problem (i.e.
when the output variable Z takes its values in {−1, +1}, see subsection 2.2) as well, the AUC
criterion can be expressed as a function of the Kendall τ related to the pair (s(X), Z) when the r.v.
s(X) is continuous. Indeed, we have in this case 2p(1 − p)AUC(s) = d_τ(s), where p = P{Z = +1}
and d_τ(s) = P{(s(X) − s(X'))·(Z − Z') > 0}, denoting by (X', Z') an independent copy of
(X, Z).
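The identity of Remark 2 can be checked numerically when all quantities are estimated over the same set of ordered pairs (an illustrative sketch of our own, assuming distinct scores so that the tie term vanishes):

```python
def check_kendall_auc_identity(scores, zs):
    """Check 2 p(1-p) AUC(s) = d_tau(s) (Remark 2), with every quantity
    estimated over the same ordered pairs (i, j), i != j."""
    n = len(scores)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    # d_tau: frequency of pairs that s and Z rank the same way
    d_tau = sum((scores[i] - scores[j]) * (zs[i] - zs[j]) > 0
                for i, j in pairs) / len(pairs)
    # pair-based estimate of P{Z = -1, Z' = +1} (i.e. p(1-p) at population level)
    p_mix = sum(zs[i] == -1 and zs[j] == +1 for i, j in pairs) / len(pairs)
    # AUC: concordance frequency among (negative, positive) ordered pairs
    auc = sum(scores[i] < scores[j] and zs[i] == -1 and zs[j] == +1
              for i, j in pairs) / (len(pairs) * p_mix)
    return d_tau, 2.0 * p_mix * auc

lhs, rhs = check_kendall_auc_identity([0.1, 0.2, 0.3, 0.4], [-1, -1, +1, +1])
print(abs(lhs - rhs) < 1e-12)  # True: the two sides coincide
```

The factor 2 accounts for the two orders in which a mixed-label pair can be drawn.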
Remark 3. (CONNECTION TO DISTRIBUTION-FREE REGRESSION) Consider the nonparametric
regression model Y = m(X) + ε, where ε is a centered r.v. independent from X. In this case, it is
well known that the regression function m(X) = E[Y | X] is the (unique) solution of the expected
least squares minimization. However, although m ∈ S*, the least squares criterion is far from
appropriate to evaluate ranking performance, as depicted by Fig. 2. Observe additionally that, in
contrast to the criteria introduced above, increasing transformations of the output variable Y may
have a strong impact on the least squares minimizer: except for linear transforms, E[H(Y) | X] is
not an increasing transform of m(X).
Remark 4. (ON DISCRETIZATION) Bi/multi-partite algorithms are not directly applicable to the
continuous ranking problem. Indeed a discretization of the interval [0,1] would first be required, but
this would raise a difficult question outside our scope: how to choose this discretization based on
the training data? We believe that this approach is less efficient than ours, which relies on problem-specific criteria, namely IROC and IAUC.
Figure 1: A scoring function described by an oriented binary subtree T. For any element x ∈ X, one
may compute the quantity s_T(x) very fast in a top-down fashion by means of the heap structure:
starting from the initial value 2^J at the root node, at each internal node C_{j,k}, the score remains
unchanged if x moves down to the left sibling, whereas one subtracts 2^{J−(j+1)} from it if x moves
down to the right.
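The top-down computation described in the caption can be sketched as follows, for a point identified by its root-to-leaf path (0 = left, 1 = right); the result agrees with formula (9), which gives 2^J(1 − k/2^j) at terminal leaf C_{j,k} (an illustrative sketch, names are ours):

```python
def tree_score(path_bits, J):
    """Score of a point whose root-to-leaf path in the oriented tree is
    path_bits (0 = left, 1 = right): start from 2^J and subtract
    2^(J-(j+1)) at every right move made at depth j."""
    score = 2 ** J
    for j, bit in enumerate(path_bits):
        if bit == 1:
            score -= 2 ** (J - (j + 1))
    return score

J = 2
# The four leaves of a complete depth-2 tree, enumerated left to right:
print([tree_score(bits, J) for bits in ([0, 0], [0, 1], [1, 0], [1, 1])])
# [4, 3, 2, 1] -- scores decrease from the leftmost to the rightmost leaf
```

This is why the left-to-right orientation alone determines the preorder: the numeric values only encode the leaf ranks.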
5 Continuous Ranking through Oriented Recursive Partitioning
It is the purpose of this section to introduce the algorithm CRANK, a specific tree-structured learning
algorithm for continuous ranking.
5.1 Ranking trees and Oriented Recursive Partitions
Decision trees undeniably figure among the most popular techniques, in supervised and unsupervised settings, refer to [2] or [13] for instance. This is essentially due to the visual model summary
they provide, in the form of a binary tree graphic that permits to describe predictions by means of
a hierarchical combination of elementary rules of the type "X^{(j)} ≤ κ" or "X^{(j)} > κ", comparing
the value taken by a (quantitative) component of the input vector X (the split variable) to a certain
threshold (the split value). In contrast to local learning problems such as classification or regression,
predictive rules for a global problem such as ranking cannot be described by a (tree-structured) partition of the feature space: cells (corresponding to the terminal leaves of the binary decision tree)
must be ordered so as to define a scoring function. This leads to the definition of ranking trees
as binary trees equipped with a "left-to-right" orientation, defining a tree-structured collection of
anomaly scoring functions, as depicted by Fig. 1. Binary ranking trees have been used in the context of
bipartite ranking in [7] or in [3], and in [16] in the context of multipartite ranking. The root node
of a ranking tree T_J of depth J ≥ 0 represents the whole feature space X: C_{0,0} = X, while each
internal node (j, k) with j < J and k ∈ {0, ..., 2^j − 1} corresponds to a subset C_{j,k} ⊂ X, whose
left and right siblings respectively correspond to disjoint subsets C_{j+1,2k} and C_{j+1,2k+1} such that
C_{j,k} = C_{j+1,2k} ∪ C_{j+1,2k+1}. Equipped with the left-to-right orientation, any subtree T ⊂ T_J defines
a preorder on X, elements lying in the same terminal cell of T being equally ranked. The scoring
function related to the oriented tree T can be written as:
s_T(x) = Σ_{C_{j,k}: terminal leaf of T} 2^J (1 − k/2^j) · I{x ∈ C_{j,k}}.   (9)

5.2 The CRANK algorithm
Based on Proposition 2, as mentioned in the Supplementary Material, one can try to build from
the training dataset D_n a ranking tree by recursive empirical Kendall τ maximization. We propose
below an alternative tree-structured recursive algorithm, relying on a (dyadic) discretization of the
"size" variable Y. At each iteration, the local sample (i.e. the data lying in the cell described by the
current node) is split into two halves (the highest/smallest halves, depending on Y) and the algorithm
calls a binary classification algorithm A to learn how to divide the node into right/left children. The
theoretical analysis of this algorithm and its connection with approximation of IROC* are difficult
questions that will be addressed in future work. Indeed we found out that the IROC cannot be
represented as a parametric curve contrary to the ROC, which renders proofs much more difficult
than in the bipartite case.
THE CRANK ALGORITHM

1. Input. Training data D_n, depth J ≥ 1, binary classification algorithm A.

2. Initialization. Set C_{0,0} = X.

3. Iterations. For j = 0, ..., J − 1 and k = 0, ..., 2^j − 1:

(a) Compute a median y_{j,k} of the dataset {Y_i : X_i ∈ C_{j,k}} and assign the binary label
Z_i = 2 I{Y_i > y_{j,k}} − 1 to any data point i lying in C_{j,k}, i.e. such that X_i ∈ C_{j,k}.

(b) Solve the binary classification problem related to the input space C_{j,k} and the training set
{(X_i, Z_i) : 1 ≤ i ≤ n, X_i ∈ C_{j,k}}, producing a classifier g_{j,k} : C_{j,k} → {−1, +1}.

(c) Set C_{j+1,2k} = {x ∈ C_{j,k} : g_{j,k}(x) = +1} = C_{j,k} \ C_{j+1,2k+1}.

4. Output. Ranking tree T_J = {C_{j,k} : 0 ≤ j ≤ J, 0 ≤ k < 2^j}.
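A compact transcription of the pseudo-code for scalar inputs, with the black-box learner A passed in as a function (an illustrative sketch under our own simplifications; any binary classifier could be plugged in as `fit_classifier`):

```python
import statistics

def crank(xs, ys, depth, fit_classifier):
    """Sketch of CRANK: recursively split each cell at the local median of Y,
    using a user-supplied binary classifier A = fit_classifier(points, labels)
    returning a function x -> {-1, +1}. Returns a scoring function whose
    values encode the left-to-right leaf ranks, as in Eq. (9)."""
    def build(indices, j):
        if j == depth or len(indices) < 2:
            return ('leaf', None, None)
        y_med = statistics.median(ys[i] for i in indices)
        labels = [+1 if ys[i] > y_med else -1 for i in indices]
        g = fit_classifier([xs[i] for i in indices], labels)
        left = [i for i in indices if g(xs[i]) == +1]   # predicted higher-Y side
        right = [i for i in indices if g(xs[i]) == -1]
        return ('node', g, (build(left, j + 1), build(right, j + 1)))

    root = build(list(range(len(xs))), 0)

    def score(x, node=None, lo=0.0, hi=1.0):
        kind, g, children = node if node is not None else root
        if kind == 'leaf':
            return hi  # any value identifying the leaf's left-to-right rank
        mid = (lo + hi) / 2.0
        if g(x) == +1:
            return score(x, children[0], mid, hi)   # left sibling = high scores
        return score(x, children[1], lo, mid)
    return score

# Toy use with a trivial decision-stump learner on scalar inputs:
def stump(points, labels):
    # pick the data-point threshold with the fewest classification errors
    best = min(((sum((p > t) != (l > 0) for p, l in zip(points, labels)), t)
                for t in sorted(points)), key=lambda c: c[0])[1]
    return lambda x: +1 if x > best else -1

s = crank([0.1, 0.2, 0.6, 0.9], [0.1, 0.2, 0.6, 0.9], depth=2, fit_classifier=stump)
print(s(0.9) > s(0.1))  # True: larger inputs get larger scores
```

Steps (a)-(c) of the pseudo-code correspond to the median computation, the call to `fit_classifier`, and the left/right index split inside `build`.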
Of course, the depth J should be chosen such that 2^J ≤ n. One may also consider continuing to
split the nodes until the number of data points within a cell has reached a minimum specified in
advance. In addition, it is well known that recursive partitioning methods fragment the data and the
instability of splits increases with the depth. For this reason, a ranking subtree must be selected. The
growing procedure above should be classically followed by a pruning stage, where children of a same
parent are progressively merged until the root T_0 is reached, and a subtree among the sequence T_0 ⊂
... ⊂ T_J with nearly maximal IAUC should be chosen using cross-validation. Issues related to the
implementation of the CRANK algorithm and variants (e.g. exploiting randomization/aggregation)
will be investigated in a forthcoming paper.
6 Numerical Experiments
In order to illustrate the idea conveyed by Fig. 2 that the least squares criterion is not appropriate for
the continuous ranking problem, we compared CRANK with CART on a toy example. Recall that
the latter is a regression decision tree algorithm which minimizes the MSE (Mean Squared Error).
We also ran an alternative version of CRANK which maximizes the empirical Kendall τ instead
of the empirical IAUC: this method is referred to as KENDALL from now on. The experimental
setting is composed of a unidimensional feature space X = [0,1] (for visualization reasons) and a
simple regression model without any noise: Y = m(X). Intuitively, a least squares strategy can
miss slight oscillations of the regression function, which are critical in ranking when they occur in
high probability regions, as they affect the order over the feature space. The results are presented
in Table 1. See the Supplementary Material for further details.
           IAUC    Kendall τ    MSE
CRANK      0.95    0.92         0.10
KENDALL    0.94    0.93         0.10
CART       0.61    0.58         7.4 × 10⁻⁴

Table 1: IAUC, Kendall τ and MSE empirical measures
7 Conclusion
This paper considers the problem of learning how to order objects by increasing "size", modeled as a
continuous r.v. Y, based on indirect measurements X. We provided a rigorous mathematical formulation of this problem, which finds many applications (e.g. quality control, chemistry) and is referred
to as continuous ranking. In particular, necessary and sufficient conditions on (X, Y)'s distribution
for the existence of optimal solutions are exhibited, and appropriate criteria have been proposed for
evaluating the performance of scoring rules in these situations. In contrast to distribution-free regression, where the goal is to recover the local values taken by the regression function, continuous
ranking aims at reproducing the preorder it defines on the feature space as accurately as possible.
The numerical results obtained via the algorithmic approaches we proposed for optimizing the criteria aforementioned highlight the difference in nature between these two statistical learning tasks.
Acknowledgments
This work was supported by the industrial chair Machine Learning for Big Data from Télécom
ParisTech and by a public grant (Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH).
References
[1] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the
area under the ROC curve. J. Mach. Learn. Res., 6:393-425, 2005.

[2] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wadsworth and Brooks, 1984.

[3] G. Clémençon, M. Depecker, and N. Vayatis. Ranking Forests. J. Mach. Learn. Res., 14:39-73,
2013.

[4] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In Proceedings of COLT 2005, volume 3559, pages 1-15. Springer, 2005.

[5] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36:844-874, 2008.

[6] S. Clémençon and S. Robbiano. The TreeRank Tournament algorithm for multipartite ranking.
Journal of Nonparametric Statistics, 25(1):107-126, 2014.

[7] S. Clémençon and N. Vayatis. Tree-based ranking methods. IEEE Transactions on Information
Theory, 55(9):4316-4336, 2009.

[8] S. Clémençon and N. Vayatis. The RankOver algorithm: overlaid classification rules for optimal ranking. Constructive Approximation, 32:619-648, 2010.

[9] Corinna Cortes and Mehryar Mohri. AUC optimization vs. error rate minimization. In Advances
in Neural Information Processing Systems, pages 313-320, 2004.

[10] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer,
1996.

[11] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for
combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.

[12] Aditya Krishna Menon and Robert C. Williamson. Bipartite ranking: a risk-theoretic perspective. Journal of Machine Learning Research, 17(195):1-102, 2016.

[13] J. R. Quinlan. Induction of Decision Trees. Machine Learning, 1(1):1-81, 1986.

[14] S. Rajaram and S. Agarwal. Generalization bounds for k-partite ranking. In NIPS 2005 Workshop on Learn to Rank, 2005.

[15] A. Rakotomamonjy. Optimizing Area Under ROC Curve with SVMs. In Proceedings of the
First Workshop on ROC Analysis in AI, 2004.

[16] S. Robbiano, S. Clémençon, and N. Vayatis. Ranking data with ordinal labels: optimality and
pairwise aggregation. Machine Learning, 91(1):67-104, 2013.
Scalable Model Selection for Belief Networks
Zhao Song†, Yusuke Muraoka‡, Ryohei Fujimaki‡, Lawrence Carin†
†Department of ECE, Duke University, Durham, NC 27708, USA
{zhao.song, lcarin}@duke.edu
‡NEC Data Science Research Laboratories, Cupertino, CA 95014, USA
{ymuraoka, rfujimaki}@nec-labs.com
Abstract
We propose a scalable algorithm for model selection in sigmoid belief networks
(SBNs), based on the factorized asymptotic Bayesian (FAB) framework. We derive
the corresponding generalized factorized information criterion (gFIC) for the SBN,
which is proven to be statistically consistent with the marginal log-likelihood. To
capture the dependencies within hidden variables in SBNs, a recognition network
is employed to model the variational distribution. The resulting algorithm, which
we call FABIA, can simultaneously execute both model selection and inference
by maximizing the lower bound of gFIC. On both synthetic and real data, our
experiments suggest that FABIA, when compared to state-of-the-art algorithms for
learning SBNs, (i) produces a more concise model, thus enabling faster testing; (ii)
improves predictive performance; (iii) accelerates convergence; and (iv) prevents
overfitting.
1 Introduction
The past decade has witnessed a dramatic increase in popularity of deep learning [20], stemming from
its state-of-the-art performance across many domains, including computer vision [19], reinforcement
learning [27], and speech recognition [15]. However, one important issue in deep learning is that
its performance is largely determined by the underlying model: a larger and deeper network tends
to possess more representational power, but at the cost of being more prone to overfitting [32],
and increased computation. The latter issue presents a challenge for deployment to devices with
constrained resources [2]. Inevitably, an appropriate model-selection method is required to achieve
good performance. Model selection is here the task of selecting the number of layers and the number
of nodes in each layer.
Despite the rapid advancement in performance of deep models, little work has been done to address
the problem of model selection. As a basic approach, cross-validation selects a model according
to a validation score. However, this is not scalable, as its complexity is exponential in the number of layers of the network: $O(J_{\max}^{L_{\max}})$, where $J_{\max}$ and $L_{\max}$ represent the maximum allowed number of nodes in each layer and the maximum number of layers, respectively. In Alvarez and Salzmann
[2], a constrained optimization approach was proposed to infer the number of nodes in convolutional
neural networks (CNNs); the key idea is to incorporate a sparse group Lasso penalty term to shrink
all edges flowing into a node. Based on the shrinkage mechanism of the truncated gamma-negative
binomial process, Zhou et al. [36] showed that the number of nodes in Poisson gamma belief networks
(PGBNs) can be learned. Furthermore, we empirically observe that the shrinkage priors employed
in Gan et al. [11], Henao et al. [14], Song et al. [31] can potentially perform model selection in
certain tasks, even though this was not explicitly discussed in those works. One common problem for
these approaches, however, is that the hyperparameters need to be tuned in order to achieve good
performance, which may be time-consuming for some applications involving deep networks.
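The exponential blow-up that makes exhaustive cross-validation impractical is easy to see by counting candidate architectures (the sizes below are toy values for illustration only):

```python
from itertools import product

# Exhaustive architecture search grows as O(J_max ** L_max): every depth L
# contributes J_max ** L candidate layer-width assignments.
J_max, L_max = 4, 3
candidates = [arch for L in range(1, L_max + 1)
              for arch in product(range(1, J_max + 1), repeat=L)]
print(len(candidates))  # 4 + 16 + 64 = 84
```

Even these tiny bounds already yield 84 candidate models to train and validate; realistic widths and depths make the enumeration hopeless.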
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The factorized asymptotic Bayesian (FAB) approach has recently been shown to be a scalable model-selection framework for latent variable models. Originally proposed for mixture models [9], it was
later extended to the hidden Markov model (HMM) [8], latent feature model (LFM) [12], and
relational model [22]. By maximizing the approximate marginal log-likelihood, FAB introduces an
$\ell_0$ regularization term on latent variables, which can automatically estimate the model structure by
eliminating irrelevant latent features through an expectation maximization [7] (EM)-like alternating
optimization, with low computational cost.
We develop here a scalable model-selection algorithm within the FAB framework to infer the size of SBNs [28], a popular component of deep models, e.g., deep belief networks (DBN) [16] and deep Poisson factor analysis (DPFA) [10]; we assume here that the depth of the SBN is fixed. Since the mean-field assumption used in FAB does not hold in SBNs, we employ a recognition network [18, 29, 25, 26] to represent the variational distribution. As our method combines the advantages of FAB inference and auto-encoding variational Bayesian (VB) frameworks, we term it FABIA. To handle large datasets, we also derive a scalable version of FABIA with mini-batches. As opposed to previous works, which predefine the SBN size [28, 30, 25, 5, 11, 6, 31, 26], FABIA determines it automatically.

Figure 1: Requirement for removal of nodes in (Left) SBN and (Right) FNN (dashed circles denote nodes that can be removed). Note that a node in the SBN can be removed only if all of its connected edges shrink. For an FNN, shrinkage of all incoming edges eliminates a node.
It should be noted that model selection in SBNs is more challenging than in CNNs and feedforward neural networks (FNNs). As shown in Figure 1, simply imposing a sparsity prior or a group-sparsity prior, as employed in CNNs [2] and SBNs [11, 14, 31], does not necessarily shrink a node in an SBN, since such approaches cannot guarantee to shrink all edges connected to a node.
FABIA possesses the following distinguishing features: (i) a theoretical guarantee that its objective
function, the generalized factorized information criterion (gFIC), is statistically consistent with
the model's marginal log-likelihood; and (ii) prevention of overfitting in large networks when the
amount of training data is not sufficiently large, thanks to an intrinsic shrinkage mechanism. We
also detail that FABIA has important connections with previous work on model regularization,
such as Dropout [32], Dropconnect [35], shrinkage priors [11, 36, 14, 31], and automatic relevance
determination (ARD) [34].
2 Background
An SBN is a directed graphical model for which the distribution of each layer is determined by the preceding layer via the sigmoid function, defined as $\sigma(x) \triangleq 1/[1 + \exp(-x)]$. Let $h^{(l)}$ denote the $l$th hidden layer with $J_l$ units, and $v$ represent the visible layer with $M$ units. The generative model of the SBN, with $L$ hidden layers, is represented as

$$p(h^{(L)} \mid b) = \prod_{i=1}^{J_L} [\sigma(b_i)]^{h_i^{(L)}} [\sigma(-b_i)]^{1 - h_i^{(L)}}, \qquad p(h^{(l)} \mid h^{(l+1)}) = \prod_{i=1}^{J_l} [\sigma(\psi_i^{(l)})]^{h_i^{(l)}} [\sigma(-\psi_i^{(l)})]^{1 - h_i^{(l)}},$$

where $\psi_i^{(l)} = W_{i\cdot}^{(l)} h^{(l+1)} + c_i^{(l)}$ for $l = 1, \ldots, L-1$, and $b$ corresponds to prior parameters; the notation $W_{i\cdot}$ means the $i$th row of a matrix. For the link function of the visible layer, i.e., $p(v \mid h^{(1)})$, we use the
sigmoid function for binary data and the multinomial function for count data, as in Mnih and Gregor
[25], Carlson et al. [6].
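The generative process above amounts to ancestral sampling: draw the top layer from its prior, then each layer below from a Bernoulli whose logits come from the layer above. A minimal NumPy sketch (layer sizes and parameters below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_sbn(b_top, weights, biases):
    """Ancestral sampling through an SBN: the top layer comes from the prior,
    and each lower layer (finally v) is conditioned on the layer above."""
    h = rng.binomial(1, sigmoid(b_top))          # h^(L) ~ Ber(sigma(b))
    for W, c in zip(weights, biases):            # layers L-1, ..., 1, then v
        h = rng.binomial(1, sigmoid(W @ h + c))  # Ber(sigma(W h + c))
    return h

# Toy sizes (illustrative only): J_2 = 5 -> J_1 = 10 -> M = 30 visible units.
b_top = rng.normal(size=5)
weights = [rng.normal(size=(10, 5)), rng.normal(size=(30, 10))]
biases = [rng.normal(size=10), rng.normal(size=30)]
v = sample_sbn(b_top, weights, biases)
print(v.shape)  # (30,)
```

This binary-sigmoid link would be swapped for a multinomial link at the visible layer when modeling count data.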
One difficulty of learning SBNs is the evaluation of the expectation with respect to the posterior
distribution of hidden variables [31]. In Mnih and Gregor [25], a recognition network under the variational auto-encoding (VAE) framework [18] was proposed to approximate this intractable expectation.
Compared with the Gibbs sampler employed in Gan et al. [11], Carlson et al. [6], Song et al. [31], the
recognition network enables fast sampling of hidden variables in blocks. The variational parameters in
the recognition network can be learned via stochastic gradient descent (SGD), as shown in the neural
variational inference and learning (NVIL) algorithm [25], for which multiple variance reduction
techniques have been proposed to obtain better gradient estimates. Note that all previous work on
learning SBNs assumes that a model with a fixed number of nodes in each layer has been provided.
To select a model for an SBN, we follow the FAB framework [9], which infers the structure of a
latent variable model by Bayesian inference. Let $\theta = \{W, b, c\}$ denote the model parameters and $M$ be the model; the goal in the FAB framework is to obtain the following maximum-likelihood (ML) estimate:

$$\hat{M}_{\mathrm{ML}} = \arg\max_M \sum_{n=1}^N \ln p(v_n \mid M) = \arg\max_M \sum_{n=1}^N \ln \sum_{h_n} \int p(v_n, h_n \mid \theta)\, p(\theta \mid M)\, d\theta \qquad (1)$$
As a key feature of the FAB framework, the $\ell_0$ penalty term on $h_n$ induced by approximating (1) can remove irrelevant latent variables from the model (the "shrinkage mechanism"). In practice, we can start from a large model and gradually reduce its size through this shrinkage mechanism until convergence.
Although a larger model has more representational capacity, a smaller model with similar predictive
performance is preferred in practice, given a computational budget. A smaller model also enables
faster testing, a desirable property in many machine learning tasks. Furthermore, a smaller model
implies more robustness to overfitting, a common danger in deeper and larger models with insufficient
training data.
Since the integration in (1) is in general intractable, Laplace's method [23] is employed in FAB
inference for approximation. Consequently, gFIC can be derived as a surrogate function of the
marginal log-likelihood. By maximizing the variational lower bound of gFIC, one obtains estimates
of both parameters and the underlying model size. Note that while FAB inference uses the mean-field
approximation for the variational distribution [9, 8, 22, 21], the same does not hold for SBNs, due to
the correlation within hidden variables given the data. In contrast, the recognition network has been
designed to approximate the posterior distribution of hidden variables with more fidelity [18, 29, 25].
Therefore, it can be a better candidate for the variational distribution in our task.
3 The FABIA Algorithm

3.1 gFIC for SBN
Following the FAB inference approach, we first lower bound the marginal log-likelihood in (1) via a variational distribution $q(h \mid \phi)$ as¹

$$\ln \sum_{h_n} \int p(v_n, h_n \mid \theta)\, p(\theta \mid M)\, d\theta \;\ge\; \sum_{h_n} q(h_n \mid \phi) \ln \frac{\int p(v_n, h_n \mid \theta)\, p(\theta \mid M)\, d\theta}{q(h_n \mid \phi)}.$$
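The inequality above is the usual Jensen/ELBO bound. A quick numerical check on a toy discrete $h$ (all quantities below are made up for illustration) confirms both the bound and the fact that it is tight when $q$ equals the exact posterior:

```python
import numpy as np

rng = np.random.default_rng(4)

# Jensen's inequality behind the bound: for any distribution q over h,
#   ln sum_h p(v, h) >= sum_h q(h) ln [p(v, h) / q(h)],
# with equality when q(h) is the exact posterior p(h | v).
p = rng.random(8) * 0.1           # toy unnormalized joint p(v, h) over 8 states of h
q = rng.random(8)
q = q / q.sum()                   # arbitrary variational distribution

lhs = np.log(p.sum())             # marginal log-likelihood ln p(v)
elbo = np.sum(q * np.log(p / q))  # variational lower bound
post = p / p.sum()                # exact posterior p(h | v)

print(lhs >= elbo, np.isclose(np.sum(post * np.log(p / post)), lhs))  # True True
```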
By applying Laplace's method [23], we obtain

$$\ln p(v, h \mid M) = \frac{D_\theta}{2} \ln\!\left(\frac{2\pi}{N}\right) + \sum_{n=1}^N \ln p(v_n, h_n \mid \hat{\theta}) + \ln p(\hat{\theta} \mid M) - \frac{1}{2} \sum_{m=1}^M \ln |\mathcal{F}_m| + O(1) \qquad (2)$$

where $D_\theta$ refers to the dimension of $\theta$, $\hat{\theta}$ represents the ML estimate of $\theta$, and $\mathcal{F}_m$ represents the negative Hessian of the log-likelihood with respect to $W_{m\cdot}$.
Since $\ln |\mathcal{F}_m|$ in (2) cannot be represented in analytical form, we must approximate it first,
for the purpose of efficient optimization of the marginal log-likelihood. Following the gFIC [13]
approach, we propose performing model selection in SBNs by introducing the shrinkage mechanism
from this approximation. We start by providing the following assumptions, which are useful in the
proof of our main theoretical results in Theorem 1.
Assumption 1. The matrix $\sum_{n=1}^N \gamma_n h_n^T h_n$ has full rank with probability 1 as $N \to \infty$, where $\gamma_n \in (0, 1)$.

¹For derivation clarity, we assume only one hidden layer and drop the bias term in the SBN.

Note that this full-rank assumption implies that the SBN can preserve information in the large-sample limit, based on the degeneration analysis of gFIC [13].

Assumption 2. $h_{n,j}$, $\forall j$, is generated from a Bernoulli distribution as $h_{n,j} \sim \mathrm{Ber}(\pi_j)$, where $\pi_j > 0$.
Theorem 1. As $N \to \infty$, $\ln |\mathcal{F}_m|$ can be represented with the following equality:

$$\ln |\mathcal{F}_m| = \sum_j \left( \ln \sum_n h_{n,j} - \ln N \right) + O(1) \qquad (3)$$
Proof. We first compute the negative Hessian as

$$\mathcal{F}_m = -\frac{1}{N} \frac{\partial^2}{\partial W_{m\cdot}^T\, \partial W_{m\cdot}} \sum_n \ln p(v_n, h_n \mid \theta) = \frac{1}{N} \sum_n \sigma(W_{m\cdot} h_n)\, \sigma(-W_{m\cdot} h_n)\, h_n^T h_n.$$

From Assumption 1, $\mathcal{F}_m$ has full rank, since $\sigma(x) \in (0, 1)$, $\forall x \in \mathbb{R}$. Furthermore, the determinant of $\mathcal{F}_m$ is bounded, since $\mathcal{F}_{m,ij} \in (0, 1)$, $\forall i, j$. Next, we define the following diagonal matrix:

$$\Lambda \triangleq \mathrm{diag}\!\left( \frac{\sum_n h_{n,1}}{N}, \ldots, \frac{\sum_n h_{n,J}}{N} \right).$$

From Assumption 2, $\lim_{N \to \infty} \Pr[\sum_n h_{n,j} = 0] = 0$, $\forall j$. Therefore, $\Lambda$ is full-rank and its determinant is bounded as $N \to \infty$. Subsequently, we can decompose

$$\mathcal{F}_m = \Lambda F \qquad (4)$$

where $F$ also has full rank and bounded determinant. Finally, applying the log-determinant operator to the right side of (4) leads to our conclusion.
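The decomposition $\mathcal{F}_m = \Lambda F$ can be verified numerically. The sketch below (toy sizes, illustrative only) builds the negative Hessian in closed form for random binary data and checks that its log-determinant splits into $\sum_j (\ln \sum_n h_{n,j} - \ln N)$ plus $\ln |F|$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

N, J = 5000, 4
h = rng.binomial(1, 0.3, size=(N, J)).astype(float)  # hidden samples h_n
w = rng.normal(size=J)                               # one row W_{m.} of the weights

# Closed-form negative Hessian from the proof: (1/N) sum_n s_n h_n h_n^T
s = sigmoid(h @ w) * sigmoid(-(h @ w))
F_m = (h * s[:, None]).T @ h / N

Lam = np.diag(h.sum(axis=0) / N)   # the diagonal matrix Lambda
F = np.linalg.solve(Lam, F_m)      # F = Lambda^{-1} F_m, i.e. F_m = Lambda F

lhs = np.linalg.slogdet(F_m)[1]
rhs = np.sum(np.log(h.sum(axis=0)) - np.log(N)) + np.linalg.slogdet(F)[1]
print(np.isclose(lhs, rhs))  # True: log|F_m| = sum_j(log sum_n h_nj - log N) + log|F|
```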
To obtain the gFIC for the SBN, we first follow the previous FAB approaches [9, 12, 22] and assume the log-prior of $\theta$ to be constant with respect to $N$, i.e., $\lim_{N \to \infty} \frac{\ln p(\theta \mid M)}{N} = 0$. We then apply Theorem 1 to (2) and have

$$\mathrm{gFIC}_{\mathrm{SBN}} = \max_q \, \mathbb{E}_q\!\left[ -\frac{M}{2} \sum_j \ln \sum_n h_{n,j} + \sum_{n=1}^N \ln p(v_n, h_n \mid \hat{\theta}) \right] + \frac{MJ - D_\theta}{2} \ln N + H(q) \qquad (5)$$

where $H(q)$ is the entropy of the variational distribution $q(h)$.
As a key quantity in (5), $\frac{M}{2} \sum_j \ln \sum_n h_{n,j}$ can be viewed as a regularizer over the model that executes model selection. This term directly operates on hidden nodes to perform shrinkage, which distinguishes our approach from previous work [11, 14, 31], where sparsity priors are assigned over edges. As illustrated in Figure 1, these earlier approaches do not necessarily shrink hidden nodes, as setting up a prior or a penalty term to shrink all edges connected to a node is very challenging in SBNs. Furthermore, the introduction of this quantity does not bring any cost of tuning parameters with cross-validation. In contrast, the Lagrange parameter in Alvarez and Salzmann [2] and hyperparameters for
priors in Gan et al. [11], Henao et al. [14], Zhou et al. [36], Song et al. [31] all need to be properly
set, which may be time-consuming in certain applications involving deep and large networks.
Under the same regularity conditions as Hayashi and Fujimaki [12], $\mathrm{gFIC}_{\mathrm{SBN}}$ is statistically consistent
with the marginal log-likelihood, an important property of the FAB framework.
Corollary 1. As $N \to \infty$, $\ln p(v \mid M) = \mathrm{gFIC}_{\mathrm{SBN}} + O(1)$.
Proof. The conclusion holds as a direct extension of the consistency results in Hayashi and Fujimaki
[12].
3.2 Optimization of gFIC

The $\mathrm{gFIC}_{\mathrm{SBN}}$ in (5) cannot be directly optimized, because (i) the ML estimator $\hat{\theta}$ is in general not available, and (ii) evaluation of the expectation over hidden variables is computationally expensive.
Instead, the proposed FABIA algorithm optimizes the lower bound

$$\mathrm{gFIC}_{\mathrm{SBN}} \ge -\frac{M}{2} \sum_j \ln \sum_n \mathbb{E}_q(h_{n,j}) + \sum_{n=1}^N \mathbb{E}_q \ln p(v_n, h_n \mid \theta) + H(q) \qquad (6)$$

where we use the following facts to get the lower bound: (i) $p(v_n, h_n \mid \hat{\theta}) \ge p(v_n, h_n \mid \theta)$, $\forall \theta$; (ii) the concavity of the logarithm function; (iii) $D_\theta \le MJ$; and (iv) the maximum over all possible variational distributions $q$ in (5).
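A sketch of how the pieces of (6) can be assembled, assuming the data term $\sum_n \mathbb{E}_q \ln p(v_n, h_n \mid \theta)$ has already been estimated elsewhere (e.g., with NVIL-style samples); the function name and interface are hypothetical:

```python
import numpy as np

def gfic_lower_bound(loglik_q, q_probs, M):
    """Assemble the bound in (6) for a one-layer SBN (a sketch).
    loglik_q: scalar estimate of sum_n E_q ln p(v_n, h_n | theta), computed elsewhere
    q_probs:  (N, J) array of E_q[h_{n,j}], strictly inside (0, 1)
    M:        number of visible units."""
    shrink = -0.5 * M * np.sum(np.log(q_probs.sum(axis=0)))        # -(M/2) sum_j ln sum_n E_q h_nj
    entropy = -np.sum(q_probs * np.log(q_probs)
                      + (1.0 - q_probs) * np.log(1.0 - q_probs))   # sum of Bernoulli entropies
    return shrink + loglik_q + entropy

# Sanity check with uniform q: shrink = -3 ln 2, entropy = 8 ln 2, bound = 5 ln 2.
val = gfic_lower_bound(0.0, np.full((4, 2), 0.5), M=3)
print(np.isclose(val, 5 * np.log(2)))  # True
```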
This leaves the choice of the form of the variational distribution. We could use the mean-field
approximation as in previous FAB approaches [9, 8, 12, 13, 22, 21]. However, this approximation
fails to capture the dependencies between hidden variables in SBN, as discussed in Song et al. [31].
Instead, we follow the recent auto-encoding VB approach [18, 29, 25, 26] to model the variational
distribution with a recognition network, which maps $v_n$ to $q(h_n \mid v_n, \phi)$. Specifically, $q(h_n \mid v_n, \phi) = \prod_{j=1}^J q(h_{n,j} \mid v_n, \phi) = \prod_{j=1}^J \mathrm{Ber}[\sigma(\phi_{j\cdot} v_n)]$, where $\phi \in \mathbb{R}^{J \times M}$ parameterizes the recognition network. Not only does using a recognition network allow us to model the variational distribution more accurately, it also enables faster sampling of hidden variables.
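A minimal sketch of block sampling from such a factorized Bernoulli recognition model (parameter names and sizes below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_recognition(v, phi, num_samples=1):
    """Factorized Bernoulli recognition model q(h|v) = prod_j Ber[sigma(phi_j. v)]:
    one matrix-vector product yields all J activation probabilities, so a whole
    block of hidden units is sampled at once."""
    probs = sigmoid(phi @ v)  # shape (J,)
    return rng.binomial(1, probs, size=(num_samples, phi.shape[0]))

M, J = 30, 10
phi = rng.normal(scale=0.1, size=(J, M))   # recognition parameters (illustrative)
v = rng.binomial(1, 0.5, size=M).astype(float)
h_samples = sample_recognition(v, phi, num_samples=5)
print(h_samples.shape)  # (5, 10)
```

Compared to a Gibbs sweep that resamples units one at a time, this block sampling is a single vectorized draw per data point.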
The optimization of the lower bound in (6) can be executed via SGD; we use the Adam algorithm [17]
as our optimizer. To reduce gradient variance, we employ the NVIL algorithm to estimate gradients
in both generative and recognition networks. We also note that other methods, such as the importancesampled objectives method [5, 26, 24], can be used and such an extension is left for future work.
Since $\frac{M}{2} \sum_j \ln \sum_n \mathbb{E}_q(h_{n,j})$ in (6) depends only on $q$, the gradients of the generative model in our FABIA algorithm and in NVIL are the same. However, the gradients of the recognition network in FABIA are regularized to shrink the model, which is lacking in the standard VAE framework.
We note that FABIA is a flexible framework, as its shrinkage term can be combined with any gradientbased variational auto-encoding methods to perform model selection, where only minimal changes to
the gradients of the recognition network of the original methods are necessary.
A node $j$ at level $l$ will be removed from the model if it satisfies $\frac{1}{N} \sum_{n=1}^N \mathbb{E}_q(h_{n,j}^{(l)}) \le \epsilon^{(l)}$, where $\epsilon^{(l)}$ is a threshold parameter that controls the model size. This criterion has the intuitive interpretation that a node should be removed if the proportion of its samples equal to 1 is small. When the expectation is not exact, such as in the top layers, we use samples drawn from the recognition network to approximate it.
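The removal rule takes only a few lines; the helper below is a hypothetical sketch of it, not the authors' code:

```python
import numpy as np

def surviving_nodes(q_probs, eps=1e-3):
    """FABIA-style node removal for one layer: keep node j only when the
    average activation probability (1/N) sum_n E_q[h_{n,j}] exceeds eps.
    q_probs: (N, J) array of E_q[h_{n,j}] (or Monte Carlo estimates of it)."""
    return np.flatnonzero(q_probs.mean(axis=0) > eps)

# Node 1 is almost never on, so it gets pruned.
q = np.column_stack([np.full(1000, 0.4),
                     np.full(1000, 1e-5),
                     np.full(1000, 0.2)])
print(surviving_nodes(q))  # [0 2]
```

After each pruning step, the corresponding rows/columns of the generative and recognition weight matrices would be dropped before the next gradient update.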
3.3 Minibatch gFIC

To handle large datasets, we adapt the $\mathrm{gFIC}_{\mathrm{SBN}}$ developed in (5) to use minibatches (which is also appropriate for online learning). Suppose that each mini-batch contains $N_{\mathrm{mini}}$ data points and that we have currently seen $T$ mini-batches; an unbiased estimator for (5) (up to constant terms) is then

$$\widehat{\mathrm{gFIC}}^{\,T}_{\mathrm{SBN}} = \max_q \, \mathbb{E}_q\!\left[ -\frac{M}{2} \sum_j \ln \sum_{i=1}^{N_{\mathrm{mini}}} h_{i+N_T, j} + \sum_{i=1}^{N_{\mathrm{mini}}} \ln \frac{p(v_{i+N_T}, h_{i+N_T} \mid \hat{\theta})}{q(h_{i+N_T} \mid \phi)} \right] + \frac{MJ - D_\theta}{2} \ln N_{T+1} \qquad (7)$$

where $N_T = (T-1) N_{\mathrm{mini}}$. Derivation details are provided in the Supplemental Materials.
An interesting observation in (7) is that $\widehat{\mathrm{gFIC}}^{\,T}_{\mathrm{SBN}}$ can automatically adjust shrinkage over time: at the beginning of the optimization, i.e., when $T$ is small, the shrinkage term $\frac{M}{2} \sum_j \ln(\sum_{i=1}^{N_{\mathrm{mini}}} h_{i+N_T, j})$ is more dominant in (7). As $T$ becomes larger, the model is more stable and the shrinkage gradually disappears. This phenomenon is also observed in our experiments in Section 5.
3.4 Computational complexity

The NVIL algorithm has complexity $O(MJN_{\mathrm{train}})$ for computing gradients in both the generative model and the recognition network. FABIA needs an extra model-selection step, also with complexity $O(MJN_{\mathrm{train}})$ per step. As the number of training iterations increases, the additional cost of model selection is offset by the reduction in time when computing gradients, as observed in Figure 3. At test time, the complexity is $O(MJN_{\mathrm{test}}K)$ per step, with $K$ being the number of samples taken to compute the variational lower bound. Therefore, shrinkage of nodes linearly reduces the testing time.
4 Related Work
Dropout As a standard approach to regularize deep models, Dropout [32] randomly removes a
certain number of hidden units during training. Note that FABIA shares this important characteristic
by directly operating on nodes, instead of edges, to regularize the model, which has a more direct
connection with model selection. One important difference is that in each training iteration, Dropout
updates only a subset of the model; in contrast, FABIA updates every parameter in the model, which
enables faster convergence.
Shrinkage prior The shrinkage sparsity-inducing approach aims to shrink edges in a model, by
employing either shrinkage priors [11, 14, 36, 31] or a random mask [35] on the weight matrix. In
FABIA, the penalty term derived in gFIC of (5) also has the shrinkage property, but the shrinkage
effect is instead imposed on the nodes. Furthermore, shrinkage priors are usually approached from the
Bayesian framework, where Markov chain Monte Carlo (MCMC) is often needed for inference. In
contrast, FABIA integrates the shrinkage mechanism from gFIC into the auto-encoding VB approach
and thus is scalable to large deep models.
Group Sparsity Application of group sparsity can be viewed as an extension of the shrinkage prior,
with the key idea being to enforce sparsity on entire rows (columns) of the weight matrix [2]. This
corresponds to the ARD prior [34] where each row (column) has an individual hyperparameter. In
FNNs and CNNs, this is equivalent to node shrinkage in FABIA for SBNs. The structure of SBNs
precludes a direct application of the group sparsity approach for model selection, but there exists an
interesting opportunity for future work to extend FABIA to FNNs and CNNs.
Nonparametric Prior In Adams et al. [1], a cascading Indian buffet process (IBP) based approach
was proposed to infer the structure of the Gaussian belief network with continuous hidden units,
for which the inference was performed via MCMC. By employing the nonparametric properties
of the IBP prior, this approach can adjust the model size with observations. Due to the high
computational cost of MCMC, however, it may not be scalable to large problems.
5 Experiments
We test the proposed FABIA algorithm on synthetic data, as well as real image and count data. For
comparison, we use the NVIL algorithm [25] as a baseline method, which does not have the model
selection procedure. Both FABIA and NVIL are implemented in Theano [4] and tested on a machine
with a 3.0GHz CPU and 64GB RAM. The learning rate in Adam is set to 0.001, and we follow the default settings for the other parameters in all of our experiments. We set the threshold parameter $\epsilon^{(l)}$ to 0.001, $\forall l$, unless otherwise stated. We also tested Dropout but did not notice any clear improvement.
The purpose of these experiments is to show that FABIA can automatically learn the model size, and
achieve better or competitive performance with a more compact model.
5.1 Synthetic Dataset
The synthetic data are generated from a one-layer SBN and a two-layer SBN, with M = 30 visible
units in both cases. We simulate 1250 data points, and then follow an 80/20% split to obtain the
training and test sets. For the one-layer case, we employ a true model with 5 nodes and initialize
FABIA and NVIL with 25 nodes. For the two-layer case, the true network has the structure 10-5²,
and we initialize FABIA and NVIL with a network of 25-15. We compare the inferred SBN structure
and test log-likelihood for FABIA, the NVIL algorithm initialized with the same model size as FABIA
(denoted as "NVIL"), and the NVIL algorithm initialized with the true model size (denoted as "NVIL (True)"). One hundred independent random trials are conducted to report statistics.
Figure 2(a) shows the mean and standard deviation of the number of nodes inferred by FABIA, as a
function of iteration number. In both one- and two-layer cases, the mean of the inferred model size
is very close to the ground truth. In Figure 2(b), we compare the convergence in terms of the test
log-likelihood for different algorithms: FABIA has almost the same convergence speed as NVIL with
²We list the number of nodes in the deeper layer first in all of our experiments.
Figure 2: (a) Inferred number of nodes from FABIA in (Left) one- and (Right) two-layer cases; (b) Test log-likelihood for different methods in (Left) one- and (Right) two-layer cases.
the true model, both of which have remarkable gaps over the NVIL variant initialized with the same
model size as FABIA.
5.2 Image Modeling
We use the publicly available MNIST dataset, which contains 60,000 training and 10,000 test images of size 28 × 28. Our performance metric is the variational lower bound of the test log-likelihood. The
mini-batches for FABIA and NVIL are set to 100. For this dataset we compared FABIA with the VB
approach in Gan et al. [11] and Rec-MCEM in Song et al. [31]. The VB approach in Gan et al. [11] can
potentially shrink nodes, due to the three-parameter beta-normal (TPBN) prior [3]. We claim a node $h_j^{(l)}$ can be removed from the model if its adjacent weight matrices satisfy $\sum_k [W_{k,j}^{(l)}]^2 / J^{(l-1)} < 10^{-8}$ and $\sum_k [W_{j,k}^{(l+1)}]^2 / J^{(l+1)} < 10^{-8}$. We run the code provided at https://github.com/zhegan27/dsbn_aistats2015 and use default parameter settings to report the VB results. We also implemented the Rec-MCEM approach but only observed shrinkage of edges, not nodes.
Table 1 shows the variational lower bound of the test log-likelihood, model size, and test time for different algorithms. FABIA achieves the highest test log-likelihood in all cases and converges to smaller models, compared to NVIL. FABIA also benefits from its more compact model, yielding the smallest test time. Furthermore, we observe that VB always over-shrinks nodes in the top layer, which might be related to the settings of hyperparameters. Unlike VB, FABIA avoids the difficult task of tuning hyperparameters to balance predictive performance and model size. We also notice that the deeper layer in the two-layer model did not shrink in VB, as our experiments suggest that all nodes in the deeper layer still have connections with nodes in adjacent layers.

Table 1: Model size, test variational lower bound (VLB) (in nats), and test time (in seconds) on the MNIST dataset. Note that FABIA and VB start from the same model size as NVIL and Rec-MCEM.

Method      Size          VLB       Time
VB          81            -117.04   8.94
Rec-MCEM    200           -116.70   8.52
NVIL        200           -115.63   8.47
FABIA       107           -114.96   6.88
VB          200-11        -113.69   22.37
Rec-MCEM    200-200       -106.54   12.25
NVIL        200-200       -105.62   12.34
FABIA       135-93        -104.92   9.18
NVIL        200-200-200   -101.99   15.66
FABIA       136-77-72     -101.14   10.97
Figure 3 shows the variational lower bound of
the test log-likelihood and number of nodes in FABIA, as a function of CPU time, for different initial
model sizes. Additional plots as a function of the number of iterations are provided in Supplemental
Materials, and they are similar to Figure 3. We note that FABIA initially has a log-likelihood similar to NVIL's but gradually outperforms it, which can be explained by the fact that FABIA initially needs additional time to perform the shrinkage step but later converges to a smaller and better model. This
gap becomes more obvious when we increase the number of hidden units from 200 to 500. The
deteriorating performance of NVIL is most likely due to overfitting. In contrast, FABIA is robust to
the change of the initial model size.
Figure 3: Test log-likelihood and the number of nodes in FABIA, as a function of CPU time on the MNIST dataset, for an SBN with initial size (a) 200-200-200 and (b) 500-500-500.
5.3 Topic Modeling
The two benchmarks we used for topic modeling are Reuters Corpus Volume I (RCV1) and Wikipedia,
as in Gan et al. [10], Henao et al. [14]. RCV1 contains 794,414 training and 10,000 testing documents,
with a vocabulary size of 10,000. Wikipedia is composed of 9,986,051 training documents, 1,000 test
documents, and 7,702 words. The performance metric we use is the predictive perplexity on the test
set, which cannot be directly evaluated. Instead, we follow the approach of 80/20% split on the test
set, with details provided in Gan et al. [10].
We compare FABIA against DPFA [10], deep Poisson factor modeling (DPFM) [14], MCEM [31],
Over-RSM [33], and NVIL. For both FABIA and NVIL, we use a mini-batch of 200 documents. The
results for other methods are cited from corresponding references. We test DPFA and DPFM with
the publicly available code provided by the authors; however, no shrinkage of nodes was observed in our experiments.
Table 2 shows the perplexities of different algorithms on the RCV1 and Wikipedia datasets. Both FABIA and NVIL outperform the other methods by marked margins.
Interestingly, we note that FABIA does not shrink any nodes in the first layer, which is likely due to the fact that these two datasets have a large number of visible units and thus a sufficiently large first hidden layer is necessary. This requirement of a large first hidden layer to properly model the data may also explain why NVIL does not overfit on these datasets as much as it does on MNIST; the training set of these datasets being sufficiently large is another possible explanation. We also computed test time but did not observe any clear improvement of FABIA over NVIL, which may be explained by the fact that most of the computation is spent on the first layer in these two benchmarks.

Figure 4: Test perplexities as a function of the number of nodes in the first layer, in the two-layer case.
In Figure 4, we vary the number of hidden units
in the first layer and fix the number of nodes
in other layers to be 400. We use early stopping for NVIL to prevent it from overfitting with
larger networks. For the networks with 100 and 400 nodes in the first layer, FABIA and NVIL
have roughly the same perplexities. Once the number of nodes is increased to 1000, FABIA starts
to outperform NVIL with remarkable gaps, which implies that FABIA can handle the overfitting
problem, as a consequence of its shrinkage mechanism for model selection. We also observed that
setting a larger (1) for the first layer in the 2000 units case for FABIA can stabilize its performance;
Table 2: Test perplexities and model size on the benchmarks. FABIA starts from a model initialized with 400 hidden units in each layer.

Method      RCV1 Perplexity   RCV1 Size       Wikipedia Perplexity   Wikipedia Size
Over-RSM    1060              128             --                     --
MCEM        1023              128             --                     --
DPFA-SBN    964               1024-512-256    770                    1024-512-256
DPFA-RBM    920               128-64-32       942                    128-64-32
DPFM        908               128-64          783                    128-64
NVIL        857               400-400         735                    400-400
FABIA       856               400-156         730                    400-151
we choose this value by cross-validation. The results for three layers are similar and are included in
Supplemental Materials.
6 Conclusion and Future Work
We develop an automatic method to select the number of hidden units in SBNs. The proposed gFIC criterion is proven to be statistically consistent with the model's marginal log-likelihood. By maximizing gFIC, the FABIA algorithm can simultaneously execute model selection and inference tasks. Furthermore, we show that FABIA is a flexible framework that can be combined with auto-encoding VB approaches. Our experiments on various datasets suggest that FABIA can effectively
select a more-compact model and achieve better held-out performance. Our future work will be to
extend FABIA to importance-sampling-based VAEs [5, 26, 24]. We also aim to explicitly select
the number of layers in SBNs, and to tackle other popular deep models, such as CNNs and FNNs.
Finally, investigating the effect of FABIA's shrinkage mechanism on the gradient noise is another
interesting direction.
Acknowledgements
The authors would like to thank Ricardo Henao for helpful discussions, and the anonymous reviewers
for their insightful comments and suggestions. Part of this work was done during the internship of
the first author at NEC Laboratories America, Cupertino, CA. This research was supported in part by
ARO, DARPA, DOE, NGA, ONR, NSF, and the NEC Fellowship.
References
[1] Adams, R., Wallach, H., and Ghahramani, Z. (2010). Learning the structure of deep sparse graphical models.
In International Conference on Artificial Intelligence and Statistics, pages 1?8.
[2] Alvarez, J. M. and Salzmann, M. (2016). Learning the number of neurons in deep networks. In Advances in
Neural Information Processing Systems, pages 2270?2278.
[3] Armagan, A., Clyde, M., and Dunson, D. B. (2011). Generalized beta mixtures of Gaussians. In Advances
in Neural Information Processing Systems, pages 523?531.
[4] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D.,
and Bengio, Y. (2010). Theano: A CPU and GPU math compiler in python. In Proc. 9th Python in Science
Conf, pages 1?7.
[5] Bornschein, J. and Bengio, Y. (2015). Reweighted wake-sleep. In International Conference on Learning
Representations.
[6] Carlson, D., Hsieh, Y.-P., Collins, E., Carin, L., and Cevher, V. (2016). Stochastic spectral descent for
discrete graphical models. IEEE J. Sel. Topics Signal Process., 10(2):296?311.
[7] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the
EM algorithm. J. Roy. Statist. Soc. Ser. B, pages 1?38.
[8] Fujimaki, R. and Hayashi, K. (2012). Factorized asymptotic Bayesian hidden Markov models. In International Conference on Machine Learning, pages 799-806.
[9] Fujimaki, R. and Morinaga, S. (2012). Factorized asymptotic Bayesian inference for mixture modeling. In International Conference on Artificial Intelligence and Statistics, pages 400-408.
[10] Gan, Z., Chen, C., Henao, R., Carlson, D., and Carin, L. (2015a). Scalable deep Poisson factor analysis for topic modeling. In International Conference on Machine Learning, pages 1823-1832.
[11] Gan, Z., Henao, R., Carlson, D., and Carin, L. (2015b). Learning deep sigmoid belief networks with data augmentation. In International Conference on Artificial Intelligence and Statistics, pages 268-276.
[12] Hayashi, K. and Fujimaki, R. (2013). Factorized asymptotic Bayesian inference for latent feature models. In Advances in Neural Information Processing Systems, pages 1214-1222.
[13] Hayashi, K., Maeda, S.-i., and Fujimaki, R. (2015). Rebuilding factorized information criterion: Asymptotically accurate marginal likelihood. In International Conference on Machine Learning, pages 1358-1366.
[14] Henao, R., Gan, Z., Lu, J., and Carin, L. (2015). Deep Poisson factor modeling. In Advances in Neural Information Processing Systems, pages 2800-2808.
[15] Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Kingsbury, B., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82-97.
[16] Hinton, G., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554.
[17] Kingma, D. P. and Ba, J. L. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations.
[18] Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations.
[19] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105.
[20] LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436-444.
[21] Liu, C., Feng, L., and Fujimaki, R. (2016). Streaming model selection via online factorized asymptotic Bayesian inference. In International Conference on Data Mining, pages 271-280.
[22] Liu, C., Feng, L., Fujimaki, R., and Muraoka, Y. (2015). Scalable model selection for large-scale factorial relational models. In International Conference on Machine Learning, pages 1227-1235.
[23] MacKay, D. J. (2003). Information theory, inference and learning algorithms. Cambridge University Press.
[24] Maddison, C. J., Mnih, A., and Teh, Y. W. (2017). The concrete distribution: A continuous relaxation of discrete random variables. In International Conference on Learning Representations.
[25] Mnih, A. and Gregor, K. (2014). Neural variational inference and learning in belief networks. In International Conference on Machine Learning, pages 1791-1799.
[26] Mnih, A. and Rezende, D. (2016). Variational inference for Monte Carlo objectives. In International Conference on Machine Learning, pages 2188-2196.
[27] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
[28] Neal, R. M. (1992). Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113.
[29] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278-1286.
[30] Saul, L. K., Jaakkola, T., and Jordan, M. I. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76.
[31] Song, Z., Henao, R., Carlson, D., and Carin, L. (2016). Learning sigmoid belief networks via Monte Carlo expectation maximization. In International Conference on Artificial Intelligence and Statistics, pages 1347-1355.
[32] Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.
[33] Srivastava, N., Salakhutdinov, R. R., and Hinton, G. E. (2013). Modeling documents with deep Boltzmann machines. In Uncertainty in Artificial Intelligence, pages 616-624.
[34] Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244.
[35] Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. (2013). Regularization of neural networks using dropconnect. In International Conference on Machine Learning, pages 1058-1066.
[36] Zhou, M., Cong, Y., and Chen, B. (2015). The Poisson Gamma belief network. In Advances in Neural Information Processing Systems, pages 3043-3051.
Targeting EEG/LFP Synchrony with Neural Nets
Yitong Li¹, Michael Murias², Samantha Major², Geraldine Dawson², Kafui Dzirasa², Lawrence Carin¹ and David E. Carlson³,⁴
¹ Department of Electrical and Computer Engineering, Duke University
² Departments of Psychiatry and Behavioral Sciences, Duke University
³ Department of Civil and Environmental Engineering, Duke University
⁴ Department of Biostatistics and Bioinformatics, Duke University
{yitong.li,michael.murias,samantha.major,geraldine.dawson,kafui.dzirasa,lcarin,david.carlson}@duke.edu
Abstract
We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are "big" in terms of the size of recorded data but
rarely have sufficient labels required to train complex models (e.g., conventional
deep learning methods). Furthermore, in many scientific applications, the goal is
to be able to understand the underlying features related to the classification, which
prohibits the blind application of deep networks. This motivates the development
of a new model based on parameterized convolutional filters guided by previous
neuroscience research; the filters learn relevant frequency bands while targeting
synchrony, which are frequency-specific power and phase correlations between
electrodes. This results in a highly expressive convolutional neural network with
only a few hundred parameters, applicable to smaller datasets. The proposed
approach is demonstrated to yield competitive (often state-of-the-art) predictive
performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct
electrode layouts, allowing the joint processing of multiple datasets to address
overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial
on Autism Spectrum Disorder.
1 Introduction
There is significant current research on methods for Electroencephalography (EEG) and Local Field
Potential (LFP) data in a variety of applications, such as Brain-Machine Interfaces (BCIs) [21], seizure
detection [24, 26], and fundamental research in fields such as psychiatry [11]. The wide variety of
applications has resulted in many analysis approaches and packages, such as Independent Component
Analysis in EEGLAB [8], and a variety of standard machine learning approaches in FieldTrip [22].
While in many applications prediction is key, such as for BCIs [18, 19], in applications such as
emotion processing and psychiatric disorders, clinicians are ultimately interested in the dynamics
of underlying neural signals to help elucidate understanding and design future experiments. This
goal necessitates development of interpretable models, such that a practitioner may understand the
features and their relationships to outcomes. Thus, the focus here is on developing an interpretable
and predictive approach to understanding spontaneous neural activity.
A popular feature in these analyses is based on spectral coherence, where a specific frequency band is
compared between pairwise channels, to analyze both amplitude and phase coherence. When two
regions have a high power (amplitude) coherence in a spectral band, it implies that these areas are
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
coordinating in a functional network to perform a task [3]. Spectral coherence has been previously
used to design classification algorithms on EEG [20] and LFP [30] data. Furthermore, these features
have underlying neural relationships that can be used to design causal studies using neurostimulation
[11]. However, fully pairwise approaches face significant challenges with limited data because of the
proliferation of features when considering pairwise properties. Recent approaches to this problem
include first partitioning the data to spatial areas and considering only broad relationships between
spatial regions [33], or enforcing a low-rank structure on the pairwise relationships [30].
To analyze both LFP and EEG data, we follow [30] to focus on low-rank properties; however,
this previous approach focused on a Gaussian process implementation for LFPs, that does not
scale to the greater number of electrodes used in EEG. We therefore develop a new framework
whereby the low-rank spectral patterns are approximated by parameterized linear projections, with
the parametrization guided by neuroscience insights from [30]. Critically, these linear projections can
be included in a convolutional neural network (CNN) architecture to facilitate end-to-end learning with
interpretable convolutional filters and fast test-time performance. In addition to being interpretable,
the parameterization dramatically reduces the total number of parameters to fit, yielding a CNN with
only hundreds of parameters. By comparison, conventional deep models require learning millions of
parameters. Even special-purpose networks such as EEGNet [15], a recently proposed CNN model
for EEG data, still require learning thousands of parameters.
The parameterized convolutional layer in the proposed model is followed by max-pooling, a single
fully-connected layer, and a cross-entropy classification loss; this leads to a clear relationship between
the proposed targeted features and outcomes. When presenting the model, interpretation of the filters
and the classification algorithms are discussed in detail. We also discuss how deeper structures
can be developed on top of this approach. We demonstrate in the experiments that the proposed
framework mitigates overfitting and yields improved predictive performance on several publicly
available datasets.
In addition to developing a new neuroscience-motivated parametric CNN, there are several other
contributions of this manuscript. First, a Gaussian Process (GP) adapter [16] within the proposed
framework is developed. The idea is that the input electrodes are first mapped to pseudo-inputs by
using a GP, which allows straightforward handling of missing (dropped or otherwise noise-corrupted)
electrodes common in real datasets. In addition, this allows the same convolutional neural network to
be applied to datasets recorded on distinct electrode layouts. By combining data sources, the result
can better generalize to a population, which we demonstrate in the results by combining two datasets
based on emotion recognition. We also developed an autoencoder version of the network to address
overfitting concerns that are relevant when the total amount of labeled data is limited, while also
improving model generalizability. The autoencoder can lead to minor improvements in performance,
which is included in the Supplementary Material.
2 Basic Model Setup: Parametric CNN
The following notation is employed: scalars are lowercase italicized letters, e.g. x, vectors are bolded lowercase letters, e.g. x, and matrices are bolded uppercase letters, e.g. X. The convolution operator is denoted ∗, and j = √−1. ⊗ denotes the Kronecker product, and ⊙ denotes an element-wise product.
The input data are X_i ∈ R^{C×T}, where C is the number of simultaneously recorded electrodes/channels, and T is given by the sampling rate and time length; i = 1, ..., N, where N is the total number of trials. The data can also be represented as X_i = [x_{i1}, ..., x_{iC}]^T, where x_{ic} ∈ R^T is the data restricted to the c-th channel. The associated labels are denoted y_i, which is an integer corresponding to a label. The trial index i is added only when necessary for clarity.
An example signal is presented in Figure 1 (Left). The data are often windowed, the ith of which
yields Xi and the associated label yi . Clear identification of phase and power relationships among
channels motivates the development of a structured neural network model for which the convolutional
filters target this synchrony, or frequency-specific power and phase correlations.
2.1 SyncNet
Inspired both by the success of deep learning and spectral coherence as a predictive feature [12, 30], a
CNN is developed to target these properties. The proposed model, termed SyncNet, performs a structured 1D convolution to jointly model the power, frequency and phase relationships between channels.
Figure 1: (Left) Visualization of EEG dataset on 8 electrodes split into windows. The markers (e.g., "FP1") denote electrode names, which have corresponding spatial locations. (Right) 8 channels of synthetic data. Refer to Section 2.2 for more detail.
Figure 2: SyncNet follows a convolutional neural network structure. The right side is the SyncNet
(Section 2.1), which is parameterized to target relevant quantities. The left side is the GP adapter,
which aims at unifying different electrode layout and reducing overfitting (Section 3).
This goal is achieved by using parameterized 1-dimensional convolutional filters. Specifically, the kth of K filters for channel c is

f_c^{(k)}(τ) = b_c^{(k)} cos(ω^{(k)} τ + φ_c^{(k)}) exp(−β^{(k)} τ²).   (1)
The frequency ω^{(k)} ∈ R_+ and decay β^{(k)} ∈ R_+ parameters are shared across channels, and they define the real part of a (scaled) Morlet wavelet¹. These two parameters define the spectral properties targeted by the kth filter, where ω^{(k)} controls the center of the frequency spectrum and β^{(k)} controls the frequency-time precision trade-off. The amplitude b_c^{(k)} ∈ R_+ and phase shift φ_c^{(k)} ∈ [0, 2π] are channel-specific. Thus, the convolutional filter in each channel will be a discretized version of a scaled and rotated Morlet wavelet. By parameterizing the model in this way, all channels are targeted collectively. The form in (1) is motivated by the work in [30], but the resulting model we develop is far more computationally efficient. A fuller discussion of the motivation for (1) is detailed in Section 2.2.
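As an illustration, the filter bank of Eq. (1) can be constructed directly from its four parameter groups. The NumPy sketch below is ours; the function name and array layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def syncnet_filters(b, omega, beta, phi, n_tau):
    """Build the K parameterized filters of Eq. (1).

    b, phi      : (K, C) per-channel amplitudes and phase shifts
    omega, beta : (K,) shared frequency and decay parameters
    returns     : (K, C, n_tau) real filter taps
                  f_c^{(k)}(tau) = b cos(omega*tau + phi) * exp(-beta*tau^2)
    """
    if n_tau % 2 == 0:
        tau = np.arange(-n_tau // 2, n_tau // 2)                  # even length
    else:
        tau = np.arange(-(n_tau - 1) // 2, (n_tau - 1) // 2 + 1)  # odd length
    carrier = np.cos(omega[:, None, None] * tau + phi[:, :, None])
    envelope = np.exp(-beta[:, None, None] * tau ** 2)
    return b[:, :, None] * carrier * envelope
```

With zero phase shifts the taps are symmetric in τ, since cos(ωτ)exp(−βτ²) is an even function.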
For practical reasons, the filters are restricted to have finite length N_τ, and each time step τ takes an integer value from {−N_τ/2, ..., N_τ/2 − 1} when N_τ is even and from {−(N_τ − 1)/2, ..., (N_τ − 1)/2} when N_τ is odd. For typical learned β^{(k)}'s, the convolutional filter vanishes by the edges of the window. Succinctly, the output of the kth convolutional filter bank is given by h^{(k)} = Σ_{c=1}^{C} f_c^{(k)}(τ) ∗ x_c.
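A minimal sketch of this filter-bank output, summing the per-channel 1D convolutions (again an illustrative implementation of ours, not the authors' code):

```python
import numpy as np

def filter_bank_output(X, filters):
    """h^{(k)}[t] = sum_c (f_c^{(k)} * x_c)[t] for each of K filter banks.

    X       : (C, T) multi-channel window
    filters : (K, C, n_tau) filter taps
    returns : (K, T)
    """
    K, C, _ = filters.shape
    T = X.shape[1]
    H = np.zeros((K, T))
    for k in range(K):
        for c in range(C):
            # 'same' keeps the window length T
            H[k] += np.convolve(X[c], filters[k, c], mode="same")
    return H
```

A quick correctness check: with a centered delta filter in every channel, the output reduces to the sum of the channels.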
The simplest form of SyncNet contains only one convolution layer, as in Figure 2. The output from each filter bank h^{(k)} is passed through a Rectified Linear Unit (ReLU), followed by max pooling over the entire window, to return h̄^{(k)} for each filter. The filter outputs h̄^{(k)} for k = 1, ..., K are concatenated and used as input to a softmax classifier with the cross-entropy loss to predict ŷ. Because of the temporal and spatial redundancies in EEG, dropout is instituted at the channel level, with

dropout(x_c) = x_c / p with probability p, and 0 with probability 1 − p.   (2)
p determines the typical percentage of channels included, and was set as p = 0.75. It is straightforward to create deeper variants of the model by augmenting SyncNet with additional standard convolutional layers. However, in our experiments, adding more layers typically resulted in over-fitting due to the limited numbers of training samples, but will likely be beneficial in larger datasets.
¹ It is straightforward to use the Morlet wavelet directly and define the outputs as complex variables and define the neural network to target the same properties, but this leads to both computational and coding overhead.
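The channel-level dropout of Eq. (2) amounts to one keep/drop decision per channel, with kept channels rescaled by 1/p so the expectation is preserved; a sketch (the function name is ours):

```python
import numpy as np

def channel_dropout(X, p=0.75, rng=None):
    """Channel-level dropout of Eq. (2): keep each channel with probability p
    (rescaled by 1/p), zero it otherwise.  X : (C, T)."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(X.shape[0]) < p          # one decision per channel
    return X * (keep / p)[:, None]
```

Each row of the output is either all zeros or the original row scaled by 1/p, so E[dropout(X)] = X.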
2.2 SyncNet Targets Class Differences in Cross-Spectral Densities
The cross-spectral density [3] is a widely used metric for understanding the synchronous nature
of signal in frequency bands. The cross-spectral density is typically constructed by converting a
time-series into a frequency representation, and then calculating the complex covariance matrix in
each frequency band. In this section we sketch how the SyncNet filter bank targets cross-spectral
densities to make optimal classifications. The discussion will be in the complex domain first, and
then it will be demonstrated why the same result occurs in the real domain.
In the time-domain, it is possible to understand the cross-spectral density of a single frequency band
by using a cross-spectral kernel [30] to define the covariance function of a Gaussian process. Letting
τ = t − t′, the cross-spectral kernel is defined as

K^CSD_{cc′,tt′} = cov(x_{ct}, x_{c′t′}) = A_{cc′} κ(τ),   κ(τ) = exp(−(1/2) β⋆ τ² + j ω⋆ τ).   (3)
Here, ω⋆ and β⋆ control the frequency band, and c and c′ are channel indexes. A ∈ C^{C×C} is a positive semi-definite matrix that defines the cross-spectral density for the frequency band controlled by κ(τ). Each entry A_{cc′} consists of a magnitude |A_{cc′}|, which controls the power (amplitude) coherence between electrodes in that frequency band, and a complex phase, which determines the optimal time offset between the signals. The covariance over the complete multi-channel time series is given by K^CSD = A ⊗ κ(τ). The power (magnitude) coherence is given by the absolute value of the entry, and the phase offset can be determined by the rotation in the complex space.
A generative model for oscillatory neural signals is given by a Gaussian process with this kernel [30], where vec(X) ∼ CN(0, K^CSD + σ² I_{C·T}). The entries of K^CSD are given by (3). CN denotes the circularly symmetric complex normal. The additive noise term σ² I_{C·T} is excluded in the following for clarity.
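For concreteness, the full covariance K^CSD = A ⊗ κ(τ) can be assembled on a discrete time grid as follows (an illustrative NumPy sketch of ours; variable names are assumptions):

```python
import numpy as np

def cross_spectral_cov(A, omega, beta, T):
    """K^CSD = kron(A, kappa), with kappa[t, t'] = exp(-(beta/2) tau^2 + 1j omega tau)
    and tau = t - t'.  A : (C, C) Hermitian PSD cross-spectral matrix."""
    t = np.arange(T)
    tau = t[:, None] - t[None, :]
    kappa = np.exp(-0.5 * beta * tau ** 2 + 1j * omega * tau)
    return np.kron(A, kappa)                    # (C*T, C*T)
```

Since κ(−τ) is the conjugate of κ(τ) and A is Hermitian, the resulting covariance is Hermitian, with diagonal entries A_{cc} κ(0) = A_{cc}.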
Note that the complex form of (1) in SyncNet across channels is given as f(τ) = f_ω(τ) s, where f_ω(τ) = exp(−(1/2) β τ² + j ω τ) is the filter over time and s = b ⊙ exp(j φ) contains the weights and rotations of a single SyncNet filter. Suppose that each channel is filtered independently by the filter f_ω = f_ω(τ) with a vector input τ. Writing the convolution in matrix form as x̃_c = f_ω ∗ x_c = F_ω^∗ x_c, where F_ω ∈ C^{T×T} is a matrix formulation of the convolution operator, results in a filtered signal x̃_c ∼ CN(0, A_{cc} F_ω^∗ κ(τ) F_ω). For a filtered version over all channels, X̃^T = [x̃_1^T, ..., x̃_C^T], the distribution would be given by

vec(X̃) = vec(F_ω^∗ X^T) ∼ CN(0, A ⊗ F_ω^∗ κ(τ) F_ω),   x̃_t ∼ CN(0, A [F_ω^∗ κ(τ) F_ω]_{tt}).   (4)

x̃_t ∈ C^C is defined as the observation at time t for all C channels. The diagonal of F_ω^∗ κ(τ) F_ω will reach a steady state quickly away from the edge effects, so we state this as const = [F_ω^∗ κ(τ) F_ω]_{tt}. The output from the SyncNet filter bank prior to the pooling stage is then given by h_t = s^∗ x̃_t ∼ CN(0, const · s^∗ A s). We note that the signal-to-noise ratio would be maximized by matching the filter's (f_ω) frequency properties to the generated frequency properties; i.e., β and ω from (1) should match β⋆ and ω⋆ from (3).
We next focus on the properties of an optimal s. Suppose that two classes are generated from (3) with cross-spectral densities A_0 and A_1 for classes 0 and 1, respectively. Thus, the signals are drawn from CN(0, A_y ⊗ κ(τ)) for y ∈ {0, 1}. The optimal projection s⋆ would maximize the differences in the distribution of h_t depending on the class, which is equivalent to maximizing the ratio between the variances of the two cases. Mathematically, this is equivalent to finding

s⋆ = arg max_s max{ (s^∗ A_1 s)/(s^∗ A_0 s), (s^∗ A_0 s)/(s^∗ A_1 s) } = arg max_s |log(s^∗ A_1 s) − log(s^∗ A_0 s)|.   (5)
Note that the constant dropped out due to the ratio. Because the SyncNet filter is attempting to
classify the two conditions, it should learn to best differentiate the classes and match the optimal s? .
We demonstrate in Section 5.1 on synthetic data that SyncNet filters do in fact align with this optimal
direction and is therefore targeting properties of the cross-spectral densities.
In the above discussion, the argument was made with respect to complex signals and models; however,
a similar result holds when only the real domain is used. Note that if the signals are oscillatory, then
4
the result after the filtering of the domain and the max-pooling will be essentially the same as using a
max-pooling on the absolute value of the complex filters. This is because the filtered signal is rotated
through the complex domain, and will align with the real domain within the max-pooling period for
standard signals. This is shown visually in Supplemental Figure 9.
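The optimization in (5) admits a closed-form solution: the Rayleigh quotient $s^{*}A_1 s / s^{*}A_0 s$ is extremized at generalized eigenvectors of the pencil $(A_1, A_0)$, so the maximizer of the log-ratio magnitude is the eigenvector whose generalized eigenvalue is farthest from 1 on a log scale. A minimal sketch, assuming real symmetric $A_1$ and positive-definite $A_0$ (the demo below plants a known direction, mirroring the synthetic setup of Section 5.1):

```python
import numpy as np

def optimal_projection(A0, A1):
    """Solve (5): s maximizing |log(s' A1 s) - log(s' A0 s)|."""
    # whiten by A0^{-1/2}, then an ordinary symmetric eigendecomposition
    w0, U0 = np.linalg.eigh(A0)
    W = U0 @ np.diag(w0 ** -0.5) @ U0.T
    lam, V = np.linalg.eigh(W @ A1 @ W)       # generalized eigenvalues of (A1, A0)
    idx = np.argmax(np.abs(np.log(lam)))      # eigenvalue farthest from 1 in log scale
    return W @ V[:, idx]

rng = np.random.default_rng(0)
s_true = rng.standard_normal(8)
s_true /= np.linalg.norm(s_true)
A0 = np.eye(8)
A1 = np.eye(8) + 2.0 * np.outer(s_true, s_true)   # planted optimal direction
s_hat = optimal_projection(A0, A1)
```

Here the spectrum of $(A_1, A_0)$ is $\{1, \dots, 1, 3\}$, and the returned vector aligns with the planted direction up to sign.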
3 Gaussian Process Adapter
A practical issue in EEG datasets is that electrode layouts are not constant, either due to inconsistent device design or to electrode failure. Secondly, nearby electrodes are highly correlated and contain redundant information, so fitting parameters to all electrodes results in overfitting. These issues are addressed by developing a Gaussian process (GP) adapter, in the spirit of [16], trained with SyncNet as shown in the left side of Figure 2. Regardless of the electrode layout, the observed signal $X$ at electrode locations $p = \{p_1, \cdots, p_C\}$ is mapped to a shared number of pseudo-inputs at locations $\bar{p} = \{\bar{p}_1, \cdots, \bar{p}_L\}$ before being input to SyncNet.
In contrast to prior work, the proposed GP adapter is formulated as a multi-task GP [4] and the pseudo-input locations $\bar{p}$ are learned. A GP is used to map $X \in \mathbb{R}^{C\times T}$ at locations $p$ to the pseudo-signals $X' \in \mathbb{R}^{L\times T}$ at locations $\bar{p}$, where $L < C$ is the number of pseudo-inputs. Distances are constructed by projecting each electrode into a 2D representation by the Azimuthal Equidistant Projection. When evaluated at a finite set of points, the multi-task GP [4] can be written as a multivariate normal

$$\mathrm{vec}(X) \sim \mathcal{N}(f,\, \sigma^2 I_{C\cdot T}), \qquad f \sim \mathcal{N}(0, K). \tag{6}$$

$K$ is constructed by a kernel function $K(\tau, c, c')$ that encodes separable relationships through time and through space. The full covariance matrix can be calculated as $K = K_{pp} \otimes K_{tt}$, where $K_{p_c p_{c'}} = \eta_1 \exp(-\eta_2 \|p_c - p_{c'}\|_1)$ and $K_{tt}$ is set to the identity matrix $I_T$. $K_{pp} \in \mathbb{R}^{C\times C}$ targets the spatial relationship across channels using the exponential kernel. Note that this kernel $K$ is distinct from $K^{CSD}$ used in Section 2.2.
Let the pseudo-input locations be defined as $\bar{p}_l$ for $l = 1, \cdots, L$. Using the GP formulation, the signal can be inferred at the $L$ pseudo-input locations from the original signal. Following [16], only the expectation of the signal is used (to facilitate fast computation), which is given by $X' = E[X'|X] = K_{\bar{p}p}(K_{pp} + \sigma^2 I_C)^{-1}X$. An illustration of the learned new locations is shown under $X'$ in Figure 2. The derivation of this mathematical form and additional details on the GP adapter are included in Supplemental Section A.
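The adapter's forward map is thus a single linear smoothing of the observed channels. A minimal sketch of $X' = K_{\bar{p}p}(K_{pp} + \sigma^2 I_C)^{-1}X$ with the exponential spatial kernel (kernel hyperparameters and positions here are illustrative placeholders):

```python
import numpy as np

def exp_kernel(P, Q, eta1=1.0, eta2=1.0):
    """eta1 * exp(-eta2 * ||p - q||_1) for all pairs of rows of P and Q."""
    d = np.abs(P[:, None, :] - Q[None, :, :]).sum(-1)
    return eta1 * np.exp(-eta2 * d)

def gp_adapt(X, p, p_bar, eta1=1.0, eta2=1.0, sigma2=1e-2):
    """Posterior-mean mapping X' = K_{p_bar p} (K_pp + sigma2 I)^{-1} X."""
    Kpp = exp_kernel(p, p, eta1, eta2)
    Kqp = exp_kernel(p_bar, p, eta1, eta2)
    return Kqp @ np.linalg.solve(Kpp + sigma2 * np.eye(len(p)), X)

# sanity check: with pseudo-inputs at the electrodes and vanishing noise,
# the adapter approximately returns the original signal
rng = np.random.default_rng(0)
C, T = 8, 32
p = rng.standard_normal((C, 2))        # 2D projected electrode positions (toy)
X = rng.standard_normal((C, T))
X_same = gp_adapt(X, p, p, sigma2=1e-8)
```

Mapping to `L` pseudo-inputs simply passes a different `p_bar`, yielding an `L × T` output shared across electrode layouts.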
The GP adapter parameters $\bar{p}, \eta_1, \eta_2$ are optimized jointly with SyncNet. The input signal $X_i$ is mapped to $X'_i$, which is then input to SyncNet. The predicted label $\hat{y}_i$ is given by $\hat{y}_i = \mathrm{Sync}(X'_i; \theta)$, where $\mathrm{Sync}(\cdot)$ is the prediction function of SyncNet. Given the SyncNet loss function $\sum_{i=1}^N \ell(\hat{y}_i, y_i) = \sum_{i=1}^N \ell(\mathrm{Sync}(X'_i; \theta), y_i)$, the overall training loss function

$$L = \sum_{i=1}^N \ell\big(\mathrm{Sync}(E[X'_i|X_i]; \theta), y_i\big) = \sum_{i=1}^N \ell\big(\mathrm{Sync}(K_{\bar{p}p}(K_{pp} + \sigma^2 I_C)^{-1}X_i; \theta), y_i\big), \tag{7}$$

is jointly minimized over the SyncNet parameters $\theta$ and the GP adapter parameters $\{\bar{p}, \eta_1, \eta_2\}$. The GP uncertainty can be included in the loss at the expense of significantly increased optimization cost, but does not result in performance improvements that justify the increased cost [16].
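Because the adapter output is a smooth function of $\{\bar{p}, \eta_1, \eta_2\}$, gradients flow through the linear map in (7), which is why these parameters can be trained jointly with the network weights. The following sketch checks this differentiability numerically with a toy linear read-out standing in for SyncNet (all names and values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
C, T, L = 6, 32, 3
X = rng.standard_normal((C, T))
p = rng.standard_normal((C, 2))         # electrode positions (toy)
p_bar = rng.standard_normal((L, 2))     # learnable pseudo-input locations
w = rng.standard_normal(L * T)          # toy linear "classifier" in place of SyncNet

def kern(P, Q, eta2=1.0):
    d = np.abs(P[:, None, :] - Q[None, :, :]).sum(-1)
    return np.exp(-eta2 * d)

def loss(pb):
    Xp = kern(pb, p) @ np.linalg.solve(kern(p, p) + 0.1 * np.eye(C), X)
    z = w @ Xp.ravel()                  # stand-in for Sync(X'; theta)
    return np.log1p(np.exp(-z))         # logistic loss for label y = 1

# central finite difference w.r.t. one pseudo-input coordinate
eps = 1e-6
e = np.zeros_like(p_bar); e[0, 0] = eps
grad = (loss(p_bar + e) - loss(p_bar - e)) / (2 * eps)
```

In practice this derivative is obtained by automatic differentiation in the training framework rather than finite differences.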
4 Related Work
Frequency-spectrum features are widely used for processing EEG/LFP signals. Often this requires calculating synchrony- or entropy-based features within predefined frequency bands, such as [20, 5, 9, 14]. There are many hand-crafted features and classifiers for BCI tasks [18]; however, in our experiments, these hand-crafted features did not perform well on long oscillatory signals. The EEG signal is modeled in [1] as a matrix-variate model with spatial and spectral smoothing. However, the number of parameters scales with the time length, rendering the approach ineffective for longer time series. A range-EEG feature has been proposed [23], which measures the peak-to-peak amplitude. In contrast, our approach learns frequency bands of interest and can deal with the long time series evaluated in our experiments.
Deep learning has been a popular recent area of research in EEG analysis. This includes Restricted Boltzmann Machines and Deep Belief Networks [17, 36], CNNs [32, 29], and RNNs [2, 34]. These approaches focus on learning both spatial and temporal relationships. In contrast to hand-crafted features and SyncNet, these deep learning methods are typically used as black-box classifiers. EEGNET [15] considered a four-layer CNN to classify event-related potentials and oscillatory EEG signals, demonstrating improved performance over low-level feature extraction. This network was designed to have limited parameters, requiring 2200 for their smallest model. In contrast, the SyncNet filters are simple to interpret and require learning only a few hundred parameters.
An alternative approach is to design GP kernels to target synchrony properties and learn appropriate frequency bands. The phase/amplitude synchrony of LFP signals has been modeled [30, 10] with the cross-spectral mixture (CSM) kernel. This approach was used to define a generative model over differing classes and may be used to learn an unsupervised clustering model. A key issue with the CSM approach is its computational complexity, where gradients cost $O(NTC^3)$ (using approximations), which is infeasible with the larger number of electrodes in EEG data. In contrast, the proposed GP adapter requires only a single matrix inversion shared by most data points, which is $O(C^3)$.

The use of wavelets has previously been considered in scattering networks [6]. Scattering networks used Morlet wavelets for image classification, but did not consider the complex rotation of wavelets over channels nor the learning of the wavelet widths and frequencies considered here.
5 Experiments
To demonstrate that SyncNet is targeting synchrony information, we first apply it to synthetic data
in Section 5.1. Notably, the learned filter bank recovers the optimal separating filter. Empirical
performance is given for several EEG datasets in Section 5.2, where SyncNet often has the highest
hold-out accuracy while maintaining interpretable features. The usefulness of the GP adapter to
combine datasets is demonstrated in Section 5.3, where classification performance is dramatically
improved via data augmentation. Empirical performance on an LFP dataset is shown in Section 5.4.
Both the LFP signals and the EEG signals measure broad voltage fluctuations from the brain, but the
LFP has a significantly cleaner signal because it is measured inside the cortical tissue. In all tested
cases, SyncNet methods have essentially state-of-the-art prediction while maintaining interpretable
features.
The code is written in Python and Tensorflow. The experiments were run on a 6-core i7 machine with
a Nvidia Titan X Pascal GPU. Details on training are given in Supplemental Section C.
5.1 Synthetic Dataset

Synthetic data are generated for two classes by drawing data from a circularly symmetric normal matching the synchrony assumptions discussed in Section 2.2. The frequency band is pre-defined as $\omega' = 10$ Hz and $\gamma$ is defined as 40 (frequency variance of 2.5 Hz) in (3). The number of channels is set to $C = 8$. Example data generated by this procedure are shown in Figure 1 (Right), where only the real part of the signal is kept.

$A_1$ and $A_0$ are set such that the optimal vector from solving (5) is given by the shape visualized in Figure 3. This is accomplished by setting $A_0 = I_C$ and $A_1 = I + s^{\star}(s^{\star})^{*}$. Data are then simulated by drawing from $\mathrm{vec}(X) \sim \mathcal{CN}(0,\, K^{CSD} + \sigma^2 I_{C\cdot T})$ and keeping only the real part of the signal. $K^{CSD}$ is defined in equation (3) with $A$ set to $A_0$ or $A_1$ depending on the class. In this experiment, the goal is to relate the filter learned in SyncNet to this optimal separating plane $s^{\star}$.

Figure 3: Each dot represents one of 8 electrodes. The dots give complex directions for optimal and learned filters, demonstrating that SyncNet approximately recovers optimal filters.
To show that SyncNet is targeting synchrony, it is trained on this synthetic data using only a single convolutional filter. The learned filter parameters are projected to the complex space by $s = b\exp(i\phi)$, and are shown overlaid (rotated and rescaled to handle degeneracies) with the optimal rotations in Figure 3. As the amount of data increases, the SyncNet filter recovers the expected relationship between channels and the predefined frequency band. In addition, the learned $\omega$ is centered at 11 Hz, which is close to the generated feature band $\omega'$ of 10 Hz. These synthetic-data results demonstrate that SyncNet is able to recover frequency bands of interest and target synchrony properties.
5.2 Performance on EEG Datasets
We consider three publicly available datasets for EEG classification, described below. After validation on the publicly available data, we then apply the method to a new clinical-trial dataset, to demonstrate that the approach can learn interpretable features that track brain dynamics as a result of treatment.
UCI EEG: This dataset2 has a total of 122 subjects, with 77 diagnosed with alcoholism and 45 control subjects. Each subject undergoes 120 separate trials. The stimuli are pictures selected from the 1980 Snodgrass and Vanderwart picture set. The EEG signal is of length one second and is sampled at 256 Hz with 64 electrodes. We evaluate the data both within subject, where the data are randomly split as 7 : 1 : 2 for training, validation, and testing, and across subjects, using a rotating test set of 11 subjects. The classification task is to recover whether the subject has been diagnosed with alcoholism or is a control subject.
DEAP dataset: The "Database for Emotion Analysis using Physiological signals" [14] has a total of 32 participants. Each subject has EEG recorded from 32 electrodes while they are shown a total of 40 one-minute-long music videos with strong emotional scores. After watching each video, each subject gave an integer score from one to nine to evaluate their feelings in four different categories. The self-assessment standards are valence (happy/unhappy), arousal (bored/excited), dominance (submissive/empowered), and personal liking of the video. Following [14], this is treated as binary classification with a threshold at a score of 4.5. The performance is evaluated with leave-one-out testing, and the remaining subjects are split to use 22 for training and 9 for validation.
SEED dataset: This dataset [35] involves repeated tests on 15 subjects. Each subject watches 15 movie clips 3 times. Each clip is designated with a negative/neutral/positive emotion label, while the EEG signal is recorded at 1000 Hz from 62 electrodes. For this dataset, leave-one-out cross-validation is used, and the remaining 14 subjects are split with 10 for training and 4 for validation.
ASD dataset: The Autism Spectrum Disorder (ASD) dataset involves 22 children from ages 3 to 7 years undergoing treatment for ASD, with EEG measurements at baseline, 6 months post treatment, and 12 months post treatment. Each recording session involves 3 one-minute videos designed to measure responses to social stimuli and controls, measured with a 121-electrode array. The trial was approved by the Duke Hospital Institutional Review Board and conducted under IND #15949. Full details on the experiments and initial clinical results are available [7]. The classification task is to predict the time relative to treatment, to track the change in neural signatures post-treatment. The cross-patient predictive ability is estimated with leave-one-out cross-validation, where 17 patients are used to train the model and 4 patients are used as a validation set.
                     UCI              DEAP [14]                            SEED [35]   ASD
Dataset          Within   Cross   Arousal  Valence  Domin.  Liking         Emotion     Stage
DE [35]          0.821    0.622   0.529    0.517    0.528   0.577          0.491       0.504
PSD [35]         0.816    0.605   0.584    0.559    0.595   0.644          0.352       0.499
rEEG [23]        0.702    0.614   0.549    0.538    0.557   0.585          0.468       0.361
Spectral [14]    *        *       0.620    0.576    *       0.554          *           *
EEGNET [15]      0.878    0.672   0.536    0.572    0.589   0.594          0.533       0.363
MC-DCNN [37]     0.840    0.300   0.593    0.604    0.635   0.621          0.527       0.584
SyncNet          0.918    0.705   0.611    0.608    0.651   0.679          0.558       0.630
GP-SyncNet       0.923    0.723   0.592    0.611    0.621   0.659          0.516       0.637

Table 1: Classification accuracy on EEG datasets.
The accuracy of predictions on these EEG datasets, from a variety of methods, is given in Table 1. We also implemented other hand-crafted spatial features, such as the brain symmetric index [31]; however, their performance was not competitive with the results here. EEGNET is an EEG-specific convolutional network proposed in [15]. The "Spectral" method from [14] uses an SVM on spectral power features extracted from each electrode in different frequency bands. MC-DCNN [37] denotes a 1D CNN where the filters are learned without the constraints of the parameterized structure. SyncNet used 10 filter sets both with (GP-SyncNet) and without the GP adapter. Remarkably, the basic SyncNet already delivers state-of-the-art performance on most tasks. In contrast, the hand-crafted features could not effectively capture the available information, and the alternative CNN-based methods severely overfit the training data due to the large number of free parameters.

2 https://kdd.ics.uci.edu/databases/eeg/eeg.html

Figure 4: Learned filter centered at 14 Hz on the ASD dataset. (a) Spatial pattern of learned amplitude b. (b) Spatial pattern of learned phase φ. Figures made with FieldTrip [22].
In addition to state-of-the-art classification performance, a key component of SyncNet is that the features extracted and used in the classification are interpretable. Specifically, on the ASD dataset, the proposed method significantly improves the state of the art. However, the end goal of this experiment is to understand how the neural activity is changing in response to the treatment. On this task, the ability of SyncNet to visualize features is important for dissemination to medical practitioners. To demonstrate how the filters can be visualized and communicated, we show one of the filters learned by SyncNet on the ASD dataset in Figure 4. This filter, centered at 14 Hz, is highly associated with the session at 6 months post-treatment. Notably, this filter bank dominantly uses the signals measured at the forward part of the scalp (Figure 4, Left). Intriguingly, the phase relationships are primarily in phase for the frontal regions, but note that there are off-phase relationships between the midfrontal and the frontal parts of the scalp (Figure 4, Right). Additional visualizations of the results are given in Supplemental Section E.
5.3 Experiments on GP adapter
In the previous section, it was noted that the GP adapter can improve performance within an existing dataset, demonstrating that the GP adapter is useful for reducing the number of parameters. However, our primary intended use of the GP adapter is to unify different electrode layouts. This is explored further by applying GP-SyncNet to the UCI EEG dataset and changing the number of pseudo-inputs. Notably, a mild reduction in the number of pseudo-inputs improves performance over directly using the measured data (Supplemental Figure 6(a)) by reducing the total number of parameters. This is especially true when comparing the GP adapter to using a random subset of channels to reduce dimensionality.
                     SyncNet          GP-SyncNet       GP-SyncNet Joint
DEAP [14] dataset    0.521 ± 0.026    0.557 ± 0.025    0.603 ± 0.020
SEED [35] dataset    0.771 ± 0.009    0.762 ± 0.015    0.779 ± 0.009

Table 2: Accuracy mean and standard errors for training two datasets separately and jointly.
To demonstrate that the GP adapter can be used to combine datasets, the DEAP and SEED datasets were trained jointly using a GP adapter. The SEED data were downsampled to 128 Hz to match the frequency of the DEAP dataset, and the data were separated into 4-second windows due to their different lengths. The label for the trial is attached to each window. To combine the labeling spaces, only the negative and positive emotion labels were kept in SEED, and valence was used in the DEAP dataset. The number of pseudo-inputs is set to L = 26. The results are given in Table 2, which demonstrates that combining datasets can lead to dramatically improved generalization ability due to the data augmentation. Note that the basic SyncNet performances in Table 2 differ from the results in Table 1. First, the DEAP dataset performance is worse; this is due to significantly reduced information when considering a 4-second window instead of a 60-second window. Second, the performance on SEED has improved; this is due to considering only 2 classes instead of 3.
5.4 Performance on an LFP Dataset
Due to the limited publicly available multi-region LFP datasets, only a single LFP dataset was included in the experiments. The intention of this experiment is to show that the method is broadly applicable to neural measurements, and will be useful with the increasing availability of multi-region datasets. An LFP dataset is recorded from 26 mice from two genetic backgrounds (14 wild-type and 12 CLOCKΔ19). CLOCKΔ19 mice are an animal model of a psychiatric disorder. The data are sampled at 200 Hz over 11 channels. The data recording from each mouse has five minutes in its home cage, five minutes from an open field test, and ten minutes from a tail-suspension test. The data are split into temporal windows of five seconds. SyncNet is evaluated by two distinct prediction tasks. The first task is to predict the genotype (wild-type or CLOCKΔ19), and the second task is to predict the current behavior condition (home cage, open field, or tail-suspension test). We separate the data randomly as 7 : 1 : 2 for training, validation, and testing.
              Behavior   Genotype
PCA + SVM     0.911      0.724
DE [35]       0.874      0.771
PSD [35]      0.858      0.761
rEEG [23]     0.353      0.449
EEGNET [15]   0.439      0.689
SyncNet       0.946      0.926

Table 3: Comparison between different methods on an LFP dataset.
Results from these two predictive tasks are shown in Table 3. SyncNet used K = 20 filters with filter
length 40. These results demonstrate that SyncNet straightforwardly adapts to both EEG and LFP
data. These data will be released with publication of the paper.
6 Conclusion
We have proposed SyncNet, a new framework for EEG and LFP data classification that learns
interpretable features. In addition to our original architecture, we have proposed a GP adapter to unify
electrode layouts. Experimental results on both LFP and EEG data show that SyncNet outperforms
conventional CNN architectures and all compared classification approaches. Importantly, the features
from SyncNet can be clearly visualized and described, allowing them to be used to understand the
dynamics of neural activity.
Acknowledgements
In working on this project L.C. received funding from the DARPA HIST program; K.D., L.C., and D.C. received funding from the National Institutes of Health by grant R01MH099192-05S2; K.D. received funding from the W.M. Keck Foundation; G.D. received funding from the Marcus Foundation, Perkin Elmer, the Stylli Translational Neuroscience Award, and NICHD 1P50HD093074.
References
[1] A. S. Aghaei, M. S. Mahanta, and K. N. Plataniotis. Separable common spatio-spectral patterns
for motor imagery bci systems. IEEE TBME, 2016.
[2] P. Bashivan, I. Rish, M. Yeasin, and N. Codella. Learning representations from eeg with deep
recurrent-convolutional neural networks. arXiv:1511.06448, 2015.
[3] A. M. Bastos and J.-M. Schoffelen. A tutorial review of functional connectivity analysis
methods and their interpretational pitfalls. Frontiers in Systems Neuroscience, 2015.
[4] E. V. Bonilla, K. M. A. Chai, and C. K. Williams. Multi-task gaussian process prediction. In
NIPS, volume 20, 2007.
[5] W. Bosl, A. Tierney, H. Tager-Flusberg, and C. Nelson. Eeg complexity as a biomarker for
autism spectrum disorder risk. BMC Medicine, 2011.
[6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE PAMI, 2013.
[7] G. Dawson, J. M. Sun, K. S. Davlantis, M. Murias, L. Franz, J. Troy, R. Simmons, M. Sabatos-DeVito, R. Durham, and J. Kurtzberg. Autologous cord blood infusions are safe and feasible in
young children with autism spectrum disorder: Results of a single-center phase i open-label
trial. Stem Cells Translational Medicine, 2017.
[8] A. Delorme and S. Makeig. Eeglab: an open source toolbox for analysis of single-trial eeg
dynamics including independent component analysis. J. Neuroscience Methods, 2004.
[9] R.-N. Duan, J.-Y. Zhu, and B.-L. Lu. Differential entropy feature for eeg-based emotion
classification. In IEEE/EMBS Conference on Neural Engineering. IEEE, 2013.
[10] N. Gallagher, K. Ulrich, K. Dzirasa, L. Carin, and D. Carlson. Cross-spectral factor analysis. In
NIPS, 2017.
[11] R. Hultman, S. D. Mague, Q. Li, B. M. Katz, N. Michel, L. Lin, J. Wang, L. K. David, C. Blount,
R. Chandy, et al. Dysregulation of prefrontal cortex-mediated slow-evolving limbic dynamics
drives stress-induced emotional pathology. Neuron, 2016.
[12] V. Jirsa and V. Müller. Cross-frequency coupling in real and virtual brain networks. Frontiers in
Computational Neuroscience, 2013.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[14] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt,
and I. Patras. Deap: A database for emotion analysis; using physiological signals. IEEE
Transactions on Affective Computing, 2012.
[15] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance. Eegnet:
A compact convolutional network for eeg-based brain-computer interfaces. arXiv:1611.08024,
2016.
[16] S. C.-X. Li and B. M. Marlin. A scalable end-to-end gaussian process adapter for irregularly
sampled time series classification. In NIPS, 2016.
[17] W. Liu, W.-L. Zheng, and B.-L. Lu. Emotion recognition using multimodal deep learning. In
International Conference on Neural Information Processing. Springer, 2016.
[18] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi. A review of classification
algorithms for eeg-based brain?computer interfaces. Journal of Neural Engineering, 2007.
[19] K.-R. Müller, M. Tangermann, G. Dornhege, M. Krauledat, G. Curio, and B. Blankertz. Machine
learning for real-time single-trial eeg-analysis: from brain?computer interfacing to mental state
monitoring. J. Neuroscience Methods, 2008.
[20] M. Murias, S. J. Webb, J. Greenson, and G. Dawson. Resting state cortical connectivity reflected
in eeg coherence in individuals with autism. Biological Psychiatry, 2007.
[21] E. Nurse, B. S. Mashford, A. J. Yepes, I. Kiral-Kornek, S. Harrer, and D. R. Freestone. Decoding
eeg and lfp signals using deep learning: heading truenorth. In ACM International Conference
on Computing Frontiers. ACM, 2016.
[22] R. Oostenveld, P. Fries, E. Maris, and J.-M. Schoffelen. Fieldtrip: open source software
for advanced analysis of meg, eeg, and invasive electrophysiological data. Computational
Intelligence and Neuroscience, 2011.
[23] D. O?Reilly, M. A. Navakatikyan, M. Filip, D. Greene, and L. J. Van Marter. Peak-to-peak
amplitude in neonatal brain monitoring of premature infants. Clinical Neurophysiology, 2012.
[24] A. Page, C. Sagedy, E. Smith, N. Attaran, T. Oates, and T. Mohsenin. A flexible multichannel
eeg feature extractor and classifier for seizure detection. IEEE Circuits and Systems II: Express
Briefs, 2015.
[25] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for
deep learning of images, labels and captions. In NIPS, 2016.
[26] Y. Qi, Y. Wang, J. Zhang, J. Zhu, and X. Zheng. Robust deep network with maximum correntropy
criterion for seizure detection. BioMed Research International, 2014.
[27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with
ladder networks. In NIPS, 2015.
[28] O. Tsinalis, P. M. Matthews, Y. Guo, and S. Zafeiriou. Automatic sleep stage scoring with
single-channel eeg using convolutional neural networks. arXiv:1610.01683, 2016.
[29] K. R. Ulrich, D. E. Carlson, K. Dzirasa, and L. Carin. Gp kernels for cross-spectrum analysis.
In NIPS, 2015.
[30] M. J. van Putten. The revised brain symmetry index. Clinical Neurophysiology, 2007.
[31] H. Yang, S. Sakhavi, K. K. Ang, and C. Guan. On the use of convolutional neural networks and
augmented csp features for multi-class motor imagery of eeg signals classification. In EMBC.
IEEE, 2015.
[32] Y. Yang, E. Aminoff, M. Tarr, and K. E. Robert. A state-space model of cross-region dynamic
connectivity in meg/eeg. In NIPS, 2016.
[33] N. Zhang, W.-L. Zheng, W. Liu, and B.-L. Lu. Continuous vigilance estimation using lstm
neural networks. In International Conference on Neural Information Processing. Springer,
2016.
[34] W.-L. Zheng and B.-L. Lu. Investigating critical frequency bands and channels for eeg-based
emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental
Development, 2015.
[35] W.-L. Zheng, J.-Y. Zhu, Y. Peng, and B.-L. Lu. Eeg-based emotion classification using deep
belief networks. In IEEE ICME. IEEE, 2014.
[36] Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao. Time series classification using multi-channels
deep convolutional neural networks. In International Conference on Web-Age Information
Management. Springer, 2014.
mitigates:1 undergoing:1 offset:2 decay:1 physiological:2 svm:2 explored:1 concern:1 nichd:1 curio:1 circularly:2 adding:1 effectively:2 magnitude:2 gallagher:1 chen:1 durham:1 patras:1 civil:1 entropy:4 fc:2 likely:1 visual:1 scalar:1 watch:1 mague:1 collectively:1 springer:3 environmental:1 determines:2 extracted:2 acm:2 goal:5 targeted:3 formulated:1 identity:1 month:3 shared:3 feasible:1 eeglab:2 change:1 included:6 specifically:3 clinician:1 reducing:2 typical:2 determined:1 kafui:2 justify:1 total:7 hospital:1 experimental:1 rarely:1 guo:1 bioinformatics:1 frontal:2 evaluate:2 tested:1 hung:1 |
Near-Optimal Edge Evaluation in Explicit
Generalized Binomial Graphs
Sanjiban Choudhury
The Robotics Institute
Carnegie Mellon University
[email protected]
Shervin Javdani
The Robotics Institute
Carnegie Mellon University
[email protected]
Siddhartha Srinivasa
The Robotics Institute
Carnegie Mellon University
[email protected]
Sebastian Scherer
The Robotics Institute
Carnegie Mellon University
[email protected]
Abstract
Robotic motion-planning problems, such as a UAV flying fast in a partially-known
environment or a robot arm moving around cluttered objects, require finding
collision-free paths quickly. Typically, this is solved by constructing a graph,
where vertices represent robot configurations and edges represent potentially valid
movements of the robot between these configurations. The main computational
bottlenecks are expensive edge evaluations to check for collisions. State of the art
planning methods do not reason about the optimal sequence of edges to evaluate
in order to find a collision free path quickly. In this paper, we do so by drawing
a novel equivalence between motion planning and the Bayesian active learning
paradigm of decision region determination (DRD). Unfortunately, a straight application of existing methods requires computation exponential in the number
of edges in a graph. We present B I SEC T, an efficient and near-optimal algorithm to solve the DRD problem when edges are independent Bernoulli random
variables. By leveraging this property, we are able to significantly reduce computational complexity from exponential to linear in the number of edges. We show
that B I SEC T outperforms several state of the art algorithms on a spectrum of
planning problems for mobile robots, manipulators, and real flight data collected
from a full scale helicopter. Open-source code and details can be found here:
https://github.com/sanjibac/matlab_learning_collision_checking
1
Introduction
Motion planning, the task of computing collision-free motions for a robotic system from a start to
a goal configuration, has a rich and varied history [23]. Up until now, the bulk of the prominent
research has focused on the development of tractable planning algorithms with provable worst-case
performance guarantees such as computational complexity [3], probabilistic completeness [24] or
asymptotic optimality [20]. In contrast, analysis of the expected performance of these algorithms
on the real world planning problems a robot encounters has received considerably less attention,
primarily due to the lack of standardized datasets or robotic platforms. However, recent advances in
affordable sensors and actuators have enabled mass deployment of robots that navigate, interact and
collect real data. This motivates us to examine the following question: "How can we design planning
algorithms that, subject to on-board computation constraints, maximize their expected performance
on the actual distribution of problems that a robot encounters?"
This paper addresses a class of robotic motion planning problems where path evaluation is expensive.
For example, in robot arm planning [12], evaluation requires expensive geometric intersection
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: panels (a), (b)]
Figure 1: The feasible path identification problem. (a) The explicit graph contains dynamically feasible
maneuvers [27] for a UAV flying fast, with a set of candidate paths. The map shows the distribution of edge validity
for the graph. (b) Given a distribution over edges, our algorithm checks an edge, marks it as invalid (red) or
valid (green), and updates its belief. We continue until a feasible path is identified as free. We aim to minimize
the number of expensive edge evaluations.
computations. In UAV path planning [9], evaluation must be done online with limited computational
resources (Fig. 1).
State of the art planning algorithms [11] first compute a set of unevaluated paths quickly, and then
evaluate them sequentially to find a valid path. Oftentimes, candidate paths share common edges.
Hence, evaluation of a small number of edges can provide information about the validity of many
candidate paths simultaneously. Methods that check paths sequentially, however, do not reason about
these common edges.
This leads us naturally to the feasible path identification problem - given a library of candidate
paths, identify a valid path while minimizing the cost of edge evaluations. We assume access to a
prior distribution over edge validity, which encodes how obstacles are distributed in the environment
(Fig. 1(a)). As we evaluate edges and observe outcomes, the uncertainty of a candidate path collapses.
Our first key insight is that this problem is equivalent to decision region determination (DRD) [19, 5])
- given a set of tests (edges), hypotheses (validity of edges), and regions (paths), the objective is to
drive uncertainty into a single decision region. This linking enables us to leverage existing methods
in Bayesian active learning for robotic motion planning.
Chen et al. [5] provide a method to solve this problem by maximizing an objective function that
satisfies adaptive submodularity [15] - a natural diminishing returns property that endows greedy
policies with near-optimality guarantees. Unfortunately, naively applying this algorithm requires
O(2^E) computation to select an edge to evaluate, where E is the number of edges in all paths.
We define the Bern-DRD problem, which leverages additional structure in robotic motion planning
by assuming edges are independent Bernoulli random variables 1 , and regions correspond to sets of
edges evaluating to true. We propose Bernoulli Subregion Edge Cutting (B I SEC T), which provides
a greedy policy to select candidate edges in O (E). We prove our surrogate objective also satisfies
adaptive submodularity [15], and provides the same bounds as Chen et al. [5] while being more
efficient to compute.
We make the following contributions:
1. We show a novel equivalence between feasible path identification and the DRD problem,
linking motion planning to Bayesian active learning.
2. We develop B I SEC T, a near-optimal algorithm for the special case of Bernoulli tests, which
selects tests in O(E) instead of O(2^E).
3. We demonstrate the efficacy of our algorithm on a spectrum of planning problems for mobile
robots, manipulators, and real flight data collected from a full scale helicopter.
1
Generally, edges in this graph are correlated, as edges in collision are likely to have neighbours in collision.
Unfortunately, even measuring this correlation is challenging, especially in the high-dimensional non-linear
configuration space of robot arms. Assuming independent edges is a common simplification [23, 25, 7, 2, 11]
2
Problem Formulation
2.1
Planning as Feasible Path Identification on Explicit Graphs
Let G = (V, E) be an explicit graph that consists of a set of vertices V and edges E. Given
a pair of start and goal vertices, (v_s, v_g) ∈ V, a search algorithm computes a path ξ ⊆ E - a
connected sequence of valid edges. To ascertain the validity of an edge, it invokes an evaluation
function Eval : E → {0, 1}. We address applications where edge evaluation is expensive, i.e., the
computational cost c(e) of computing Eval(e) is significantly higher than regular search operations2 .
We define a world as an outcome vector o ? {0, 1}|E| which assigns to each edge a boolean validity
when evaluated, i.e. Eval(e) = o(e). We assume that the outcome vector is sampled from an
independent Bernoulli distribution P (o), giving rise to a Generalized Binomial Graph (GBG) [13].
We make a second simplification to the problem - from that of search to that of identification. Instead
of searching G online for a path, we frame the problem as identifying a valid path from a library
of "good" candidate paths Ξ = (ξ_1, ξ_2, . . . , ξ_m). The candidate set of paths Ξ is constructed offline,
while being cognizant of P (o), and can be verified to ensure that all paths have acceptable solution
quality when valid. 3 Hence we care about completeness with respect to ? instead of G.
We wish to design an adaptive edge selector Select(o) which is a decision tree that operates on a
world o, selects an edge for evaluation and branches on its outcome. The total cost of edge evaluation
is c(Select(o)). Our objective is to minimize the cost required to find a valid path:
$\min \; \mathbb{E}_{o \sim P(o)}[c(\mathrm{Select}(o))] \quad \text{s.t.} \quad \forall o,\; \exists \xi : \prod_{e \in \xi} o(e) = 1,\; \xi \in \mathrm{Select}(o) \qquad (1)$
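To make problem (1) concrete, here is a toy sketch (not the paper's released code; the edge names, priors, and path library below are made up) that samples worlds from an independent Bernoulli prior and measures the evaluation cost of a naive sequential selector:

```python
import random

# Toy sketch of problem (1): a Generalized Binomial Graph with hypothetical
# edge priors and a made-up candidate path library (not the paper's data).
theta = {"e1": 0.9, "e2": 0.5, "e3": 0.2}
paths = [["e1", "e2"], ["e3"]]

def sample_world(theta, rng):
    """Sample an outcome vector o : E -> {0, 1} from the independent prior P(o)."""
    return {e: int(rng.random() < p) for e, p in theta.items()}

def naive_selector_cost(o, paths):
    """Evaluate candidate paths one by one (unit cost per new edge) until a
    fully valid path is found; this is the baseline that problem (1) improves on."""
    cost, checked = 0, {}
    for path in paths:
        for e in path:
            if e not in checked:
                checked[e] = o[e]
                cost += 1
            if checked[e] == 0:
                break                      # path invalid, move to the next one
        else:
            return cost                    # every edge on this path is valid
    return cost                            # no valid path in the library

rng = random.Random(0)
avg_cost = sum(naive_selector_cost(sample_world(theta, rng), paths)
               for _ in range(2000)) / 2000
```

A better Select policy reduces the expectation in (1) by preferring edges that are informative about many candidate paths at once.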
2.2
Decision Region Determination with Independent Bernoulli Tests
We now define an equivalent problem - decision region determination with independent Bernoulli
tests (Bern-DRD). Define a set of tests T = {1, . . . , n}, where the outcome of each test is a Bernoulli
random variable X_t ∈ {0, 1}, P(X_t = x_t) = θ_t^{x_t}(1 − θ_t)^{1−x_t}. We define a set of hypotheses h ∈ H,
where each is an outcome vector h ∈ {0, 1}^T mapping all tests t ∈ T to outcomes h(t). We define a
set of regions {R_i}_{i=1}^m, each of which is a subset of tests R ⊆ T. A region is determined to be valid
if all tests in that region evaluate to true, which has probability P(R) = ∏_{t∈R} P(X_t = 1).
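The region prior P(R) = ∏_{t∈R} P(X_t = 1) is straightforward to compute; a small illustrative sketch with hypothetical priors:

```python
# Sketch with made-up numbers: the prior that a region (path) is valid under
# independent Bernoulli tests, P(R) = prod over t in R of P(X_t = 1).
theta = {1: 0.9, 2: 0.8, 3: 0.5}          # hypothetical per-test priors
regions = {"R1": [1, 2], "R2": [3]}

def region_prior(tests, theta):
    p = 1.0
    for t in tests:
        p *= theta[t]
    return p

priors = {name: region_prior(tests, theta) for name, tests in regions.items()}
# R1: 0.9 * 0.8 = 0.72, R2: 0.5
```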
If a set of tests A ⊆ T are performed, let the observed outcome vector be denoted by x_A ∈ {0, 1}^{|A|}.
Let the version space H(x_A) be the set of hypotheses consistent with observation vector x_A, i.e.
H(x_A) = {h ∈ H | ∀t ∈ A, h(t) = x_A(t)}.
We define a policy π as a mapping from observation vector x_A to tests. A policy terminates when it
shows that at least one region is valid, or all regions are invalid. Let x_T ∈ {0, 1}^T be the ground
truth - the outcome vector for all tests. Denote the observation vector of a policy π given ground truth
x_T as x_A(π, x_T). The expected cost of a policy π is c(π) = E_{x_T}[c(x_A(π, x_T))] where c(x_A) is
the cost of all tests t ∈ A. The objective is to compute a policy π* with minimum cost that ensures at
least one region is valid, i.e.
$\pi^* \in \arg\min_{\pi} c(\pi) \quad \text{s.t.} \quad \forall x_T,\; \exists R_d : P(R_d \mid x_A(\pi, x_T)) = 1 \qquad (2)$
Note that we can cast problem (1) to (2) by setting E = T and Ξ = {R_i}_{i=1}^m. That is, driving
uncertainty into a region is equivalent to identification of a valid path (Fig. 2). This casting enables
us to leverage efficient algorithms with near-optimality guarantees for motion planning.
3
Related Work
The computational bottleneck in motion planning varies with problem domain and that has led to a
plethora of planning techniques ([23]). When vertex expansions are a bottleneck, A* [17] is optimally
efficient while techniques such as partial expansions [28] address graph searches with large branching
factors. The problem class we examine, that of expensive edge evaluation, has inspired a variety of
2
3
It is assumed that c(e) is modular and non-zero. It can scale with edge length.
Refer to supplementary on various methods to construct a library of good candidate paths
[Figure 2: panels (a), (b), (c)]
Figure 2: Equivalence between the feasible path identification problem and Bern-DRD. A path ?i is equivalent
to a region Ri over valid hypotheses (blue dots). Tests eliminate hypotheses and the algorithm terminates when
uncertainty is pushed into a region (R1 ) and the corresponding path (?1 ) is determined to be valid.
?lazy? approaches. The Lazy Probabilistic Roadmap (PRM) algorithm [1] only evaluates edges on
the shortest path while Fuzzy PRM [26] evaluates paths that minimize probability of collision. The
Lazy Weighted A* (LWA*) algorithm [8] delays edge evaluation in A* search and is reflected in
similar techniques for randomized search [14, 6]. An approach most similar in style to ours is the
LazyShortestPath (LazySP) framework [11] which examines the problem of which edges to evaluate
on the shortest path. Instead of the finding the shortest path, our framework aims to efficiently
identify a feasible path in a library of ?good? paths. Our framework is also similar to the Anytime
Edge Evaluation (AEE*) framework [25] which deals with edge evaluation on a GBG. However, our
framework terminates once a single feasible path is found while AEE* continues to evaluation in
order to minimize expected cumulative sub-optimality bound. Similar to Choudhury et al. [7] and
Burns and Brock [2], we leverage priors on the distribution of obstacles to make informed planning
decisions.
We draw a novel connection between motion planning and optimal test selection which has a
wide-spread application in medical diagnosis [21] and experiment design [4]. Optimizing the ideal
metric, decision theoretic value of information [18], is known to be NPPP complete [22]. For
hypothesis identification (known as the Optimal Decision Tree (ODT) problem), Generalized Binary
Search (GBS) [10] provides a near-optimal policy. For disjoint region identification (known as the
Equivalence Class Determination (ECD) problem), EC2 [16] provides a near-optimal policy. When
regions overlap (known as the Decision Region Determination (DRD) problem), HEC [19] provides
a near-optimal policy. The D I REC T algorithm [5], a computationally more efficient alternative to
HEC, forms the basis of our approach.
4
The Bernoulli Subregion Edge Cutting Algorithm
The DRD problem in general is addressed by the Decision Region Edge Cutting (D I REC T) [5]
algorithm. The intuition behind the method is as follows - as tests are performed, hypotheses
inconsistent with test outcomes are pruned away. Hence, tests should be incentivized to push the
probability mass over hypotheses into any region as fast as possible. Chen et al. [5] derive a surrogate
objective function that provides such an incentive by creating separate sub-problems for each region
and combining them in a Noisy-OR fashion such that quickly solving any one sub-problem suffices.
Importantly, this objective is adaptive submodular [15] - greedily maximizing such an objective
results in a near-optimal policy.
We adapt the framework of D I REC T to address the Bern-DRD problem. We first provide a modification to the EC2 sub-problem objective which is simpler to compute when the distribution over
hypotheses is non-uniform, while
providing the same guarantees. Unfortunately, naively apply
ing D I REC T requires O 2T computation per sub-problem. For the special case of independent
Bernoulli tests, we present a more efficient Bernoulli Subregion Edge Cutting (B I SEC T) algorithm,
which computes each subproblem in O (T ) time. We provide a brief exposition deferring to the
supplementary for detailed derivations.
4.1
A simple subproblem: One region versus all
Following Chen et al. [5], we define a ?one region versus all? subproblem, the solution of which helps
address the Bern-DRD. Given a single region, the objective is to either push the version space to
that region, or collapse it to a single hypothesis. We view a region R as a version space R^H ⊆ H
consistent with its constituent tests. We define this subproblem over a set of disjoint subregions S_i.
Let the hypotheses in the target region RH be S1 . Every other hypothesis h ? RH is defined as its
own subregion Si , i > 1, where RH is a set of hypothesis where a region is not valid. Determining
which subregion is valid falls under the framework of Equivalence Class Determination (ECD), (a
special case of the DRD problem) and can be solved efficiently by the EC2 algorithm (Golovin et al.
[16]). This objective defines a graph with nodes as subregions and edges between distinct subregions,
where the weight of an edge is the product of probabilities of subregions. As tests are performed and
outcomes are received, the version space shrinks, and probabilities of different subregions are driven
to 0. This has the effect of decreasing the total weight of edges. Importantly, the problem is solved
i.f.f. the weight of all edges is zero. The weight over the set of subregions is:
$w_{[16]}(\{S_i\}) = \sum_{j \neq k} P(S_j) P(S_k) \qquad (3)$
When hypotheses have uniform weight, this can be computed efficiently for the "one region versus
all" subproblem. Let $P(\bar{S}_1) = \sum_{i>1} P(S_i)$:
$w_{[16]}(\{S_i\}) = P(S_1)P(\bar{S}_1) + P(\bar{S}_1)\Big(P(\bar{S}_1) - \frac{1}{|H|}\Big) \qquad (4)$
For non-uniform prior however, this quantity is more difficult to compute. We modify this objective
slightly, adding self-edges on subregions Si , i > 1, enabling more efficient computation while still
maintaining the same guarantees:
$w_{EC}(\{S_i\}) = P(S_1)\Big(\sum_{i \neq 1} P(S_i)\Big) + \Big(\sum_{i \neq 1} P(S_i)\Big)\Big(\sum_{j \neq 1} P(S_j)\Big) = P(S_1)P(\bar{S}_1) + P(\bar{S}_1)^2 = P(\bar{R}^H)\big(P(R^H) + P(\bar{R}^H)\big) \qquad (5)$
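A quick numeric sanity check of eq. (5), using hypothetical test priors: since the subregion measures over the region's tests sum to one, w_EC collapses to 1 − ∏_t θ_t for the "one region versus all" subproblem.

```python
# Numeric sanity check of eq. (5) with hypothetical test priors for a region R.
theta = [0.9, 0.8, 0.5]

p_region = 1.0                    # P(R^H) = prod of theta_t
for t in theta:
    p_region *= t
p_not = 1.0 - p_region            # total mass of the complement subregions

w_ec = p_region * p_not + p_not ** 2      # first form in eq. (5)
# equals P(R^H-bar) * (P(R^H) + P(R^H-bar)) = 1 - prod of theta_t
```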
For region R, let the relevant version space be HR (xA ) = {h ? H | ?t ? A ? R, h(t) = xA (t)}.
The set of all hypotheses in RH consistent with relevant outcomes in xA is given by RH ? HR (xA ).
The terms P (RH ? HR (xA )) and P (RH ? HR (xA )) allows us to quantify the progress made on
determining region validity. Naively computing these terms would require computing
all hypotheses
and assigning them to correct subregions, thus requiring a runtime of O 2T . However, for the
special case of Bernoulli tests, we can reduce this to O (T ) as we can see from the expression
$w_{EC}(\{S_i\} \cap H^R(x_A)) = \Big(1 - \prod_{i \in R \cap A} I(X_i = 1) \prod_{j \in R \setminus A} \theta_j\Big)\Big(\prod_{k \in R \cap A} \theta_k^{x_A(k)}(1-\theta_k)^{1-x_A(k)}\Big)^2 \qquad (6)$
We can further reduce this to O (1) when iteratively updated (see supplementary for derivations). We
now define a criterion that incentivizes removing edges quickly and has theoretical guarantees. Let
fEC (xA ) be the weight of edges removed on observing outcome vector xA . This is evaluated as
$f_{EC}(x_A) = 1 - \frac{w_{EC}(\{S_i\} \cap H^R(x_A))}{w_{EC}(\{S_i\})} = 1 - \frac{\Big(1 - \prod_{i \in R \cap A} I(X_i = 1) \prod_{j \in R \setminus A} \theta_j\Big)\Big(\prod_{k \in R \cap A} \theta_k^{x_A(k)}(1-\theta_k)^{1-x_A(k)}\Big)^2}{1 - \prod_{i \in R} \theta_i} \qquad (7)$
Lemma 1. The expression fEC (xA ) is strongly adaptive monotone and adaptive submodular.
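Eq. (7) can be computed in O(|R|) per region by accumulating the three products; an illustrative implementation (a sketch, not the authors' code):

```python
def f_ec(region, theta, x_A):
    """Eq. (7) for a single region: fraction of EC2 edge weight removed by the
    observations x_A. region: iterable of test ids; theta: prior P(X_t = 1);
    x_A: dict of observed outcomes. Illustrative sketch, not the authors' code."""
    ind = 1.0     # prod over observed tests in R of I(X_i = 1)
    unobs = 1.0   # prod over unobserved tests in R of theta_j
    q = 1.0       # prod over observed tests in R of theta^x * (1 - theta)^(1 - x)
    prior = 1.0   # prod over all tests in R of theta_i
    for t in region:
        prior *= theta[t]
        if t in x_A:
            ind *= 1.0 if x_A[t] == 1 else 0.0
            q *= theta[t] if x_A[t] == 1 else (1.0 - theta[t])
        else:
            unobs *= theta[t]
    return 1.0 - (1.0 - ind * unobs) * q * q / (1.0 - prior)
```

With no observations the expression is 0, and it reaches 1 exactly when every test in the region has been observed valid, matching the strong adaptive monotonicity of Lemma 1.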
4.2
Solving the Bern-DRD problem using B I SEC T
We now return to the Bern-DRD problem (2) where we have multiple regions {R_1, . . . , R_m} that
overlap. Each region R_r is associated with an objective f_{EC}^r(x_A) for solving the "one region versus
all" problem. Since solving any one such subproblem suffices, we combine them in a Noisy-OR
Algorithm 1: Decision Region Determination with Independent Bernoulli Tests({R_i}_{i=1}^m, θ, x_T)
1: A ← ∅
2: while (∄R_i, P(R_i | x_A) = 1) and (∃R_i, P(R_i | x_A) > 0) do
3:   T_cand ← SelectCandTestSet(x_A)        ▷ Using either (10) or (12)
4:   t* ← SelectTest(T_cand, θ, x_A)        ▷ Using either (11), (13), (14), (15) or (16)
5:   A ← A ∪ t*
6:   x_{t*} ← x_T(t*)                       ▷ Observe outcome for selected test
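Algorithm 1 can be sketched as a short Python loop, with the uniform-random rule (13) standing in as a placeholder for SelectTest (the region/test encoding below is an assumption for illustration, not the authors' implementation):

```python
import random

# Minimal sketch of the Algorithm 1 loop. The uniform-random rule (13) stands
# in for SelectTest; the data structures are assumptions, not the paper's code.
def drd_loop(regions, ground_truth, rng):
    A = {}                                            # observed outcomes x_A

    def active(name):                                 # P(R | x_A) > 0
        return all(A.get(t, 1) == 1 for t in regions[name])

    def determined(name):                             # P(R | x_A) = 1
        return all(A.get(t, 0) == 1 for t in regions[name])

    while not any(determined(r) for r in regions) and any(active(r) for r in regions):
        cand = sorted({t for r in regions if active(r) for t in regions[r]} - A.keys())
        t = rng.choice(cand)                          # SelectTest placeholder (13)
        A[t] = ground_truth[t]                        # observe outcome x_t
    return A, [r for r in regions if determined(r)]

observed, valid = drd_loop({"R1": [1, 2], "R2": [3]}, {1: 1, 2: 0, 3: 1},
                           random.Random(0))
```

The loop terminates either with a region proven valid or with every region invalidated, mirroring lines 2-6 of Algorithm 1.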
formulation by defining an objective $f_{DRD}(x_A) = 1 - \prod_{r=1}^{m}\big(1 - f_{EC}^r(x_A)\big)$ [5], which evaluates to
$f_{DRD}(x_A) = 1 - \prod_{r=1}^{m}\left[\frac{\Big(1 - \prod_{i \in (R_r \cap A)} I(X_i = 1) \prod_{j \in (R_r \setminus A)} \theta_j\Big)\Big(\prod_{k \in R_r \cap A} \theta_k^{x_A(k)}(1-\theta_k)^{1-x_A(k)}\Big)^2}{1 - \prod_{i \in R_r} \theta_i}\right] \qquad (8)$
Since $f_{DRD}(x_A) = 1$ iff $f_{EC}^r(x_A) = 1$ for at least one r, we define the following surrogate problem to Bern-DRD:
$\pi^* \in \arg\min_{\pi} c(\pi) \quad \text{s.t.} \quad \forall x_T : f_{DRD}(x_A(\pi, x_T)) \geq 1 \qquad (9)$
The surrogate problem has a structure that allows greedy policies to have near-optimality guarantees
Lemma 2. The expression fDRD (xA ) is strongly adaptive monotone and adaptive submodular.
Theorem 1. Let m be the number of regions, p_{h_min} the minimum prior probability of any hypothesis,
π_DRD the greedy policy and π* the optimal policy. Then c(π_DRD) ≤ c(π*)(2m log(1/p_{h_min}) + 1).
We now describe the B I SEC T algorithm. Algorithm 1 shows the framework for a general decision region determination algorithm. In order to specify B I SEC T, we need to define two
options - a candidate test set selection function SelectCandTestSet(xA ) and a test selection function SelectTest(Tcand , ?, xA ). The unconstrained version of B I SEC T implements
SelectCandTestSet(xA ) to return the set of all tests Tcand that contains only unevaluated tests
belonging to active regions
$T_{cand} = \Big\{\bigcup_{i=1}^{m} \{R_i \mid P(R_i \mid x_A) > 0\}\Big\} \setminus A \qquad (10)$
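Eq. (10) is a simple set computation; a sketch with hypothetical regions and outcomes:

```python
# Eq. (10) as a set computation: unevaluated tests belonging to regions whose
# posterior is still non-zero. Hypothetical encoding of regions and outcomes.
def select_cand_unconstrained(regions, x_A):
    active = [tests for tests in regions.values()
              if all(x_A.get(t, 1) == 1 for t in tests)]   # P(R | x_A) > 0
    return {t for tests in active for t in tests} - set(x_A)

regions = {"R1": [1, 2], "R2": [2, 3]}
cand = select_cand_unconstrained(regions, {2: 1})   # test 2 already evaluated
```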
We now examine the B I SEC T test selection rule SelectTest(T_cand, θ, x_A):
$t^* \in \arg\max_{t \in T_{cand}} \frac{1}{c(t)}\left[\prod_{r=1}^{m}\Big(1 - \prod_{i \in (R_r \cap A)} I(X_i = 1)\prod_{j \in (R_r \setminus A)}\theta_j\Big) - \mathbb{E}_{x_t}\left[\prod_{r=1}^{m}\Big(1 - \prod_{i \in (R_r \cap A \cup t)} I(X_i = 1)\prod_{j \in (R_r \setminus A \cup t)}\theta_j\Big)\big(\theta_t^{x_t}(1-\theta_t)^{1-x_t}\big)^{2\sum_{k=1}^{m} I(t \in R_k)}\right]\right] \qquad (11)$
The intuition behind this update is that tests are selected to squash the probability of regions not being
valid. It also additionally incentivizes selection of tests on which multiple regions overlap.
4.3
Adaptively constraining test selection to most likely region
We observe in our experiments that the surrogate (8) suffers from a slow convergence problem - f_DRD(x_A) takes a long time to converge to 1 when greedily optimized. To alleviate the convergence
problem, we introduce an alternate candidate selection function SelectCandTestSet(xA ) that
assigns to Tcand the set of all tests that belong to the most likely region TmaxP which is evaluated as
follows (we will refer to this variant as M AX P ROB R EG)
$T_{maxP} = \Big\{\arg\max_{R_i \in (R_1, R_2, \ldots, R_m)} P(R_i \mid x_A)\Big\} \setminus A \qquad (12)$
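A sketch of the M AX P ROB R EG candidate set (12), assuming a hypothetical encoding of regions and priors:

```python
# Sketch of the MaxProbReg candidate set, eq. (12): the unevaluated tests of
# the region with the highest posterior. Regions and priors are hypothetical.
def select_cand_maxprob(regions, theta, x_A):
    def posterior(tests):                      # unnormalized P(R | x_A)
        p = 1.0
        for t in tests:
            if t in x_A:
                if x_A[t] == 0:
                    return 0.0                 # an observed failure kills R
            else:
                p *= theta[t]
        return p
    best = max(regions.values(), key=posterior)
    return set(best) - set(x_A)

regions = {"R1": [1, 2], "R2": [3]}
theta = {1: 0.9, 2: 0.9, 3: 0.7}               # prior posteriors: 0.81 vs 0.7
```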
Applying the constraint in (12) leads to a dramatic improvement for any test selection policy as we
will show in Sec. 5.2. The following theorem offers a partial explanation
Theorem 2. A policy that greedily latches to a region according to the posterior conditioned on the
region outcomes has a near-optimality guarantee of 4 w.r.t. the optimal region evaluation sequence.
Applying the constraint in (12) implies we are no longer greedily optimizing fDRD (xA ). However,
the following theorem bounds the sub-optimality of this policy.
Theorem 3. Let p_min = min_i P(R_i), p_{h_min} = min_{h∈H} P(h) and l = max_i |R_i|. The policy using
(12) has a suboptimality of ν^{-1}(2m log(1/p_{h_min}) + 1), where ν ≤ 1 − max((1 − p_min)^2, p_min^l).
5
Experiments
We evaluate B I SEC T on a collection of datasets spanning across a spectrum of synthetic problems and
real-world planning applications. The synthetic problems are created by randomly selecting problem
parameters to test the general applicability of B I SEC T. The motion planning datasets range from
simplistic yet insightful 2D problems to more realistic high dimension problems as encountered by an
UAV or a robot arm. The 7D arm planning dataset is obtained from a high fidelity simulation as shown
in Fig. 4(a). Finally, we test B I SEC T on experimental data collected from a full scale helicopter flying
that has to avoid unmapped wires at high speed as it comes into land as shown in Fig. 4(b). Refer to
supplementary for exhaustive details on experiments and additional results. Open-source code and
details can be found here: https://github.com/sanjibac/matlab_learning_collision_checking
5.1
Heuristic approaches to solving the Bern-DRD problem
We propose a collection of competitive heuristics that can also be used to solve the Bern-DRD problem. These
heuristics are various SelectTest(Tcand , ?, xA ) policies in the framework of Alg. 1. To simplify the setting,
we assume unit cost c(t) = 1 although it would be possible to extend these to nonuniform setting. The first
heuristic R ANDOM selects a test by sampling uniformly at random:
$t^* \sim T_{cand} \qquad (13)$
We adopt our next heuristic M AX TALLY from Dellin and Srinivasa [11] where the test belonging to the most regions
is selected. It uses the following criteria, which exhibits a "fail-fast" characteristic:
$t^* \in \arg\max_{t \in T_{cand}} \sum_{i=1}^{m} I\big(t \in R_i,\; P(R_i \mid x_A) > 0\big) \qquad (14)$
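The M AX TALLY rule (14) can be sketched as follows (hypothetical regions; tie-breaking left to Python's max):

```python
# Sketch of MaxTally, eq. (14): pick the unevaluated test contained in the
# largest number of still-possible regions. Regions here are hypothetical.
def max_tally(regions, x_A):
    active = [tests for tests in regions.values()
              if all(x_A.get(t, 1) == 1 for t in tests)]
    cand = {t for tests in active for t in tests} - set(x_A)
    return max(cand, key=lambda t: sum(t in tests for tests in active))

regions = {"R1": [1, 2], "R2": [2, 3], "R3": [3]}
# tests 2 and 3 each lie in two active regions, test 1 only in one
```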
The next policy S ET C OVER selects tests that maximize the expected number of "covered" tests, i.e. if a selected
test is in collision, how many other tests it removes from consideration:
$t^* \in \arg\max_{t \in T_{cand}} (1-\theta_t)\,\Big|\Big\{\bigcup_{i=1}^{m}\{R_i \mid P(R_i \mid x_A) > 0\} \setminus \bigcup_{j=1}^{m}\{R_j \mid P(R_j \mid x_A, X_t = 0) > 0\}\Big\} \setminus \{A \cup \{t\}\}\Big| \qquad (15)$
Theorem 4. S ET C OVER is a near-optimal policy for the problem of optimally checking all regions.
The last heuristic is derived from a classic heuristic in decision theory: myopic value of information (Howard
[18]). MVO I greedily chooses the test that maximizes the change in the probability mass of the most likely
region. This test selection works only with SelectCandTestSet(xA ) = TmaxP .
$t^* \in \arg\max_{t \in T_{maxP}} (1-\theta_t) \max_{i=1,\ldots,m} P(R_i \mid x_A, X_t = 0) \qquad (16)$
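A sketch of the MVO I rule (16), under the same hypothetical encoding of regions and priors used above (all names and numbers here are illustrative assumptions):

```python
# Sketch of MVoI, eq. (16): among the most likely region's tests, prefer the
# one whose hypothesized failure leaves the largest leading-region posterior,
# weighted by the failure probability. Inputs are hypothetical.
def mvoi(t_maxprob, regions, theta, x_A):
    def posterior(tests, obs):
        p = 1.0
        for t in tests:
            if t in obs:
                if obs[t] == 0:
                    return 0.0
            else:
                p *= theta[t]
        return p

    def score(t):
        obs = dict(x_A)
        obs[t] = 0                             # hypothesize outcome X_t = 0
        return (1.0 - theta[t]) * max(posterior(r, obs) for r in regions.values())

    return max(t_maxprob, key=score)

regions = {"R1": [1, 2], "R2": [3]}
theta = {1: 0.9, 2: 0.95, 3: 0.7}              # R1 leads with prior 0.855
```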
We also evaluate against state of the art L AZY SP [11] planner which explicitly minimizes collision checking
effort while trying to guarantee optimality. We ran two variants of LazySP. The first variant is the vanilla
unconstrained algorithm that searches for the shortest path on the entire graph, collision checks the path and
repeats. The second variant is constrained to the library of paths used by all other baselines.
5.2
Analysis of results
Table 1 shows the evaluation cost of all algorithms on various datasets normalized w.r.t B I SEC T. The two
numbers are lower and upper 95% confidence intervals - hence it conveys how much fractionally poorer are
algorithms w.r.t B I SEC T. The best performance on each dataset is highlighted. We present a set of observations
to interpret these results.
O 1. B I SEC T has a consistently competitive performance across all datasets.
[Figure 3 plots: MaxTally (|A|: 29), SetCover (|A|: 30), MVoI (|A|: 28), BiSECt (|A|: 20)]
Figure 3: Performance (number of evaluated edges) of all algorithms on 2D geometric planning. Snapshots,
at start, interim and final stages respectively, show evaluated valid edges (green), invalid edges (red) and the
final path (magenta). The utility of edges as computed by algorithms is shown varying from low (black) to high
(cream).
[Figure 4: panels (a), (b), (c). Annotations: "Region 1: Single edge with low probability"; "Region 2: Many edges with high probability"; "Wires in real flight"]
Figure 4: (a) A 7D arm has to perform pick and place tasks at high speed in a table with clutter. (b) Experimental
data from a full-scale helicopter that has to react quickly to avoid unmapped wires detected by the sensor.
B I SEC T (given an informative prior) checks a small number of edges around the detected wire and identifies a
path. (c) Scenario where regions have size disparity. Unconstrained B I SEC T significantly outperforms other
algorithms on such a scenario.
Table 1 shows that on 13 out of the 14 datasets, BiSECt is at par with the best. On 7 of those it is exclusively
the best.
O 2. The MaxProbReg variant improves the performance of all algorithms on most datasets
Table 1 shows that this is true on 12 datasets. The impact is greatest on Random on the 2D Forest dataset, where
performance improves from (19.45, 27.66) to (0.13, 0.30). However, this is not true in general. On datasets
with large disparity in region sizes, as illustrated in Fig. 4(c), unconstrained BiSECt significantly outperforms
other algorithms. In such scenarios, MaxProbReg latches on to the most probable path, which also happens to
have a large number of edges. It performs poorly on instances where this region is invalid while the other region,
containing a single edge, is valid. Unconstrained BiSECt prefers to evaluate the single edge belonging to region
1 before proceeding to evaluate region 2, performing optimally on those instances. Hence, the myopic nature of
MaxProbReg is the reason behind its poor performance.
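A back-of-the-envelope expected-cost calculation makes this concrete. The probabilities and edge counts below are invented for illustration (the paper does not give them); the sketch only shows why spending one check on the cheap single-edge region first wins in expectation over resolving the probable many-edge region first.

```python
# Hypothetical priors for a Fig. 4(c)-style instance (assumed, not from the paper).
p1 = 0.1        # P(the single region-1 edge is valid)
q, m = 0.9, 20  # per-edge validity probability and edge count in region 2

# Expected checks to resolve region 2: edge i is evaluated only if the first
# i-1 edges were all valid, so E = 1 + q + q**2 + ... + q**(m-1).
e2 = sum(q ** (i - 1) for i in range(1, m + 1))

# Unconstrained order: check the region-1 edge first (1 evaluation); resolve
# region 2 only when that edge is invalid (probability 1 - p1).
cost_unconstrained = 1 + (1 - p1) * e2

# Myopic order: resolve the most probable region (region 2, since q**m > p1)
# first, then check region 1 whenever region 2 is invalid (probability 1 - q**m).
cost_myopic = e2 + (1 - q ** m)

print(round(cost_unconstrained, 2), round(cost_myopic, 2))  # → 8.91 9.66
```

With these numbers the cheap first check pays for itself, and the gap widens as region 2 gets longer.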
O 3. On planning problems, BiSECt strikes a trade-off between the complementary natures of MaxTally
and MVoI.
Table 1: Normalized evaluation cost - (lower, upper) bound of 95% confidence interval
[Numeric table entries were garbled in extraction and are omitted. Columns: LazySP edge selectors Random, MaxTally, SetCover, MVoI and BiSECt, each in Unconstrained and MaxProbReg (constrained) variants. Row groups: Synthetic Bernoulli Test, variation across region overlap (Small m: 100, Medium m: 500, Large m: 1e3); 2D Geometric Planning, variation across environments (Forest, OneWall, TwoWall) and across region size (OneWall m: 300, OneWall m: 858); Non-holonomic Path Planning, variation across environments (Forest, OneWall); 7D Arm Planning, variation across environments (Table, Clutter); and datasets with large disparity in region sizes (Synth. (T: 10), 2D Plan (m: 2)).]
We examine this in the context of 2D planning as shown in Fig. 3. MaxTally selects edges belonging to many
paths, which is useful for path elimination, but does not reason about the event when the edge is not in collision.
MVoI selects edges to eliminate the most probable path but does not reason about how many paths a single edge
can eliminate. BiSECt switches between these behaviors, thus achieving greater efficiency than both heuristics.
O 4. BiSECt checks informative edges in collision avoidance problems encountered by a helicopter
Fig. 4(b) shows the efficacy of BiSECt on experimental flight data from a helicopter avoiding a wire.
6 Conclusion
In this paper, we addressed the problem of identification of a feasible path from a library while minimizing the
expected cost of edge evaluation, given priors on the likelihood of edge validity. We showed that this problem
is equivalent to a decision region determination problem where the goal is to select tests (edges) that drive
uncertainty into a single decision region (a valid path). We proposed BiSECt, an efficient and near-optimal
algorithm that solves this problem by greedily optimizing a surrogate objective. We validated BiSECt on a
spectrum of problems against state-of-the-art heuristics and showed that it has a consistent performance across
datasets. This work serves as a first step towards importing Bayesian active learning approaches into the domain
of motion planning.
Acknowledgments
We would like to acknowledge the support from ONR grant N000141310821. We would like to thank Shushman
Choudhury for insightful discussions and the 7D arm planning datasets. We would like to thank Oren Salzman,
Mohak Bhardwaj, Vishal Dugar and Paloma Sodhi for feedback on the paper.
6,688 | 705 | A Recurrent Neural Network for
Generation of Ocular Saccades
Lina L.E. Massone
Department of Physiology
Department of Electrical Engineering and Computer Science
Northwestern University
303 E. Chicago Avenue, Chicago, IL 60611
Abstract
This paper presents a neural network able to control saccadic
movements. The input to the network is a specification of a
stimulation site on the collicular motor map. The output is the time
course of the eye position in the orbit (horizontal and vertical angles).
The units in the network exhibit a one-to-one correspondence with
neurons in the intermediate layer of the superior colliculus (collicular
motor map), in the brainstem and with oculomotor neurons.
Simulations carried out with this network demonstrate its ability to
reproduce in a straightforward fashion many experimental observations.
1. INTRODUCTION
It is known that the superior colliculus (SC) plays an important role in the control of eye
movements (Schiller et al. 1980). Electrophysiological studies (Cynader and Berman
1972, Robinson 1972) showed that the intermediate layer of SC is topographically
organized into a motor map. The location of active neurons in this area was found to be
related to the oculomotor error (i.e. how far the eyes are from the target) and their firing
rate to saccade velocity (Roher et al. 1987, Berthoz et al. 1987). Neurons in the rostral
area of the motor map, the so-called fixation neurons, tend to become active when the
eyes are on target (Munoz and Wurtz 1992) and they can provide a gating mechanism to
arrest the movement (Guitton 1992). SC sends signals to the brainstem whose circuitry
translates them into commands to the oculomotor neurons that innervate the eye muscles
(Robinson 1981).
This paper presents a recurrent neural network that performs a spatio-temporal
transformation from a stimulation site on the collicular motor map to an eye movement.
The units in the network correspond to neurons in the intermediate layer of the colliculus,
neurons in the brainstem and to oculomotor neurons.
[Figure 1 axis labels: Medial (up), Lateral (down), Caudal (left), Caudal (right).]
Figure 1: An array of units that represents the collicular motor map. The dark square
represents the fixation area. The units in the array project to four units that represent burst
cells devoted to process rightward, leftward, upward and downward saccades.
The network was built entirely on anatomical and physiological observations.
Specifically, the following assumptions were used: (1) The activity on the collicular
motor map shifts towards the fixation area during movement (Munoz et al. 1991, Droulez
and Berthoz 1991). (2) The output of the superior colliculus is a vectorial velocity signal
that is the sum of the contributions from each active collicular neuron. (3) Such signal is
decomposed into horizontal velocity and vertical velocity by a topographic and graded
connectivity pattern from SC to the burst cells in the brainstem. (4) The computation
performed from the burst-cells level down to the actual eye movement is carried out
according to the push-pull arrangement proposed by Robinson (1981). (5) The activity on
the collicular motor map is shifted by signals that represent the eye velocity. Efferent
copies of the horizontal and vertical eye velocities are fed back onto the collicular map in
order to implement the activity shift.
Figure 2: The topographic and graded pattern of connectivity from the collicular array to
the four burst cells. Black means no connection, brighter colors represent larger weight
values. (a) To the right cell. (b) To the left cell. (c) To the up cell. (d) To the down cell.
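A hypothetical weight profile consistent with the two stated properties (weights grow toward the periphery, and each burst cell is driven by its own side of the map) might look like the following. The grid size, decay constant and exact functional form are invented for illustration; the paper only states the qualitative shape.

```python
import math

N = 11            # assumed map size; the paper does not give one
CENTER = N // 2   # the rostral fixation column

def weight_to_right_cell(col, scale=0.5):
    """Hypothetical weight from a collicular unit in column `col` to the
    'right' burst cell: zero on the fixation side, growing exponentially
    toward the caudal (right) periphery, matching the qualitative description."""
    dx = col - CENTER
    return math.exp(scale * dx) - 1.0 if dx > 0 else 0.0

profile = [round(weight_to_right_cell(c), 2) for c in range(N)]
print(profile)  # zeros up to the center, then exponentially growing weights
```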
Simulations conducted with such a system (Massone submitted) demonstrated the
network's ability to reproduce a number of experimental observations. Namely the
network can: (1) Spontaneously produce oblique saccades whose curvature varies with the
ratio between the horizontal and vertical components of the motor error. (2) Automatically
hold the eye position in the orbit at the end of a saccade by exploiting the internal
dynamic of the network. (3) Continuously produce efferent copies of the movements
without the need for reset signals. (4) Account for the outcome of the lidocaine
experiment (Lee et al. 1988) without assuming a population averaging mechanism.
Section 2 describes the network architecture. A more detailed description of the network,
its mechanisms and physiological ground as well as a number of simulation results can be
found in Massone (submitted).
2. THE NETWORK
The network input layer is a bidimensional array of linear units that represent neurons in
the collicular motor map. The array is topographically arranged as shown in Figure 1.
Activity along the caudal axis produces horizontal saccades in a contralateral fashion,
activity along the medio-lateral axis produces vertical saccades, activity in the rest of the
array produces oblique saccades. The dark square in the center (rostral area) represents the
fixation area. The units in this array project to four logistic units that represent two pairs
of burst cells, one pair devoted to control horizontal movements, one pair devoted to
control vertical movements. The pattern of connectivity between the collicular array and
the units that represent the burst cells is qualitatively shown in Figure 2. The value of the
weights of such connections increases exponentially when one moves from the center
towards the periphery of the array. The fixation area projects to four other units that
represent the so-called omnipause neurons. These units send a gating signal to the burst-cells units and are responsible for arresting the movement when the eyes are on target, i.e.
when the activity in the input array reaches the center. Each pair of burst-cells units
project to the network shown in Figure 3. This network is a computational version of the
push-pull arrangement proposed by Robinson (1981). The bottom part of the network
represents the oculomotor plant, the top part represents the brainstem circuitry and the
oculomotor neurons. The weights in the bottom part of the network were derived by
splitting into two equations the differential equation proposed by Robinson (1981) to
describe the behavior of the oculomotor plant under a combined motorneuron input R.
R1 = k·θ1 + r·dθ1/dt,    R2 = k·θ2 + r·dθ2/dt
R1 and R2 are the firing rates of the agonist and antagonist motorneurons, θ1 and θ2 are
the components of the eye position due to motions in opposite directions (e.g. left and
right), k is the eye stiffness and r is the eye viscosity.
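As a sanity check on this first-order plant model, the equation can be integrated numerically. The sketch below is a minimal forward-Euler simulation using the parameter values quoted in the Figure 3 caption (k = 4.0, r = 0.95, Δt = 1 msec, read here as 0.001 s); the drive value R is an arbitrary illustration.

```python
# Plant constants from the Figure 3 caption; Δt = 1 msec read as 0.001 s.
K, R_VISC, DT = 4.0, 0.95, 0.001

def simulate_plant(drive, steps=20000, theta=0.0):
    """Forward-Euler integration of d(theta)/dt = (drive - K*theta) / R_VISC,
    i.e. the plant equation R = k*theta + r*d(theta)/dt solved for d(theta)/dt."""
    for _ in range(steps):
        theta += DT * (drive - K * theta) / R_VISC
    return theta

# A sustained motorneuron drive R settles the eye at theta = R / K,
# the steady state where the viscous term vanishes.
print(round(simulate_plant(drive=8.0), 6))  # → 2.0
```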
The weights in the top part of the network were analytically computed from the weights
in the bottom part of the network by imposing the following constraints: (1) The
difference between θ1 and θ2 must produce the correct θ. (2) The output of the neural
integrators must be an efferent copy of the eye movement. (3) The output of the
motorneurons must hold the eye at the current orbital position when the burst-cells units
are shut off by the gating action of the omnipause cells. Efferent copies of the horizontal
and vertical eye velocities were computed by differentiating the output of the neural
[Figure 3 block diagram omitted. Labeled inputs: from fixation neurons, from collicular array; the discrete-time plant weights Δt/r and 1 - kΔt/r appear on the connections.]
Figure 3: The recurrent network used to control eye movements in one direction, e.g.
horizontal. An identical network is required to control vertical movements. OPN:
omnipause neurons. BC1, BC2: burst cells. NI1, NI2: neural integrators. MN1, MN2:
motor neurons. The architecture is based on Robinson's push-pull arrangement. k=4.0,
r=0.95, a=0.5, Δt=1 msec.
integrators. These signals were recurrently fed back onto the input array and made the
activity in the array shift towards the fixation area. This architecture assumes that the
output of the collicular array represents saccade velocity. The network is started by
selecting one unit in the input array, i.e. a "stimulation" site. When the unit is selected, a
square area centered at that unit becomes active with a gaussian activity profile (Ottes et
al. 1986, Munoz and Guitton 1991). At the time the input units are activated the eye
starts moving and, as a consequence of the velocity feedback, the activity on the input
array starts shifting. The movement is arrested when the fixation area becomes activated.
The activity of all units in the network represents neuron firing rates and is expressed in
spikes/second.
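The loop just described — collicular output drives the eye while the efferent velocity copy drags the activity toward the fixation area, which then gates the movement off — can be caricatured in one dimension. Everything below (the gain, the fixation half-width, the linear dynamics) is an illustrative simplification, not the paper's 2-D map.

```python
def run_saccade(error, gain=0.2, fixation_halfwidth=0.5):
    """Drive the eye until the activity reaches the fixation zone around 0."""
    eye, steps = 0.0, 0
    while abs(error) > fixation_halfwidth:   # fixation area not yet active
        velocity = gain * error              # collicular output ~ motor error
        eye += velocity                      # the eye moves ...
        error -= velocity                    # ... and the activity shifts rostrally
        steps += 1
    return eye, steps

final_eye, steps = run_saccade(error=10.0)
print(round(final_eye, 2), steps)  # → 9.56 14
```

Once the remaining error falls inside the fixation zone the loop stops, mirroring the gating by the fixation (omnipause) units.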
Figure 4 shows the response of the network when the collicular array is stimulated at two
sites sequentially. Each site causes an oblique saccade with unequal components.
Stimulation number 1 brings the eye up and to the right, stimulation number 2 brings
the eye back to the initial position. Fixation is maintained for a while inbetween
stimulations and at the end of the two movements. The resulting trajectories in the
movement plane (vertical angle versus horizontal angle) demonstrate the ability of the
network to (i) maintain the eye position in the orbit when the burst cells activation is set
to zero by the gating action of the omnipause neurons, and (ii) produce curved trajectories
with opposite curvatures when the eye moves back and forth between the same two
angular positions. None of the units in the network is ever reset between saccades;
because of the push-pull arrangement, when the activity of one neural integrator increases,
the activity of the antagonist integrator decreases. This mechanism ensures that their
activity does not grow indefinitely.
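A toy tally of the two integrators makes this bookkeeping explicit. The velocity sequence is made up; the point is only that charging one integrator by exactly what the antagonist discharges keeps both values bounded over repeated back-and-forth saccades while their difference still tracks the eye angle.

```python
ni1 = ni2 = 0.0   # antagonist neural integrators
peak = 0.0
# Two back-and-forth saccades: +10 deg in 5 steps, then -10 deg, twice over.
for v in ([2.0] * 5 + [-2.0] * 5) * 2:
    ni1 += v / 2.0   # agonist integrator charges up ...
    ni2 -= v / 2.0   # ... while the antagonist discharges by the same amount
    peak = max(peak, abs(ni1), abs(ni2))
eye_position = ni1 - ni2  # efferent copy of the eye angle
print(eye_position, peak)  # → 0.0 5.0
```

No reset is ever needed: the push-pull update is symmetric, so the integrators return to baseline whenever the eye does.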
3. CONCLUSIONS
In this paper I presented an anatomically and physiologically inspired network able to
control saccadic movements and to reproduce the outcome of some experimental
observations. The results of simulations carried out with this network can be found in
Massone (submitted). This work is currently being extended to (i) modeling the activity
shift phenomenon as the relaxation of a dynamical system to its equilibrium
configuration rather than as a feedback-driven mechanism, (ii) studying the role of the
collicular output signals in the calibration and accuracy of arm movements (Massone
1992).
Acknowledgements
This work was supported by the National Science Foundation, grant BCS-9113455 to the
author.
References
Berthoz A., Grantyn A., Droulez J. (1987) Some collicular neurons code saccadic eye
velocity, Neuroscience Letters, 72, 289-294.
[Figure 4 panel traces omitted: burst-cell activities (e.g. BC_left, BC_right) and the Theta/Phi eye-angle time courses.]
Figure 4: The response of the network to two sequential stimulations that produce two oblique saccades with unequal components.
Cynader M., Berman N. (1972) Receptive-field organization of monkey superior
colliculus, Journal of Neurophysiology, 35, 187-201.
Droulez J., Berthoz A. (1991) The concept of dynamic memory in sensorimotor
control, in Motor Control Concepts and Issues, Humphrey D.R. and Freund H.J. Eds.,
J. Wiley and Sons, 137-161.
Guitton D. (1992) Control of eye-head coordination during orienting gaze shifts, Trends
in Neuroscience, 15(5), 174-179.
Lee C., Roher W.H., Sparks D.L. (1988) Population coding of saccadic eye movements
by neurons in the superior colliculus. Nature, 332, 357-360.
Massone L. E. (1992) A biologically-inspired architecture for reactive motor control, in
Neural Networks for Control, G. Bekey and K. Goldberg Eds., Kluwer Academic
Publishers, 1992.
Massone L.E. (submitted) A velocity-based model for control of ocular saccades, Neural
Computation.
Munoz D.P., Pellisson D., Guitton D. (1991) Movement of Neural Activity on the
Superior Colliculus Motor Map during Gaze Shifts, Science, 251, 1358-1360.
Munoz D.P., Guitton D. (1991) Gaze control by the tecto-reticulo-spinal system in the
head-free cat. II. Sustained discharges coding gaze position error, Journal of
Neurophysiology, 66, 1624-1641.
Munoz D.P., Wurtz R.H. (1992) Role of the rostral superior colliculus in active visual
fixation and execution of express saccades, Journal of Neurophysiology, 67, 1000-1002.
Ottes F.P., Van Gisbergen J.A.M., Eggermont J.J. (1986) Visuomotor fields of the
superior colliculus: a quantitative model, Vision Research, 26, 857-873.
Robinson D.A. (1972) Eye movements evoked by collicular stimulation in the alert
monkey, Vision Research, 12, 1795-1808.
Robinson D.A. (1981) Control of eye movements, in Handbook of Physiology - The
Nervous System II, V.B. Brooks Ed., 1275-1320.
Roher W.H., White J.M., Sparks D.L. (1987) Saccade-related burst cells in the superior
colliculus: relationship of activity with saccade velocity. Society of Neuroscience
Abstracts, 13, 1092.
Non-Stationary Spectral Kernels
Sami Remes
Markus Heinonen
Samuel Kaski
[email protected]
[email protected]
[email protected]
Helsinki Institute for Information Technology HIIT
Department of Computer Science, Aalto University
Abstract
We propose non-stationary spectral kernels for Gaussian process regression by
modelling the spectral density of a non-stationary kernel function as a mixture of
input-dependent Gaussian process frequency density surfaces. We solve the generalised Fourier transform with such a model, and present a family of non-stationary
and non-monotonic kernels that can learn input-dependent and potentially longrange, non-monotonic covariances between inputs. We derive efficient inference
using model whitening and marginalized posterior, and show with case studies that
these kernels are necessary when modelling even rather simple time series, image
or geospatial data with non-stationary characteristics.
1 Introduction
Gaussian processes are a flexible method for non-linear regression [18]. They define a distribution
over functions, and their performance depends heavily on the covariance function that constrains the
function values. Gaussian processes interpolate function values by considering the value of functions
at other similar points, as defined by the kernel function. Standard kernels, such as the Gaussian
kernel, lead to smooth neighborhood-dominated interpolation that is oblivious of any periodic or
long-range connections within the input space, and can not adapt the similarity metric to different
parts of the input space.
Two key properties of covariance functions are stationarity and monotony. A stationary kernel $K(x, x') = K(x + a, x' + a)$ is a function only of the distance $x - x'$ and not directly of the value of $x$. Hence it encodes an identical similarity notion across the input space, while a monotonic kernel decreases over distance. Kernels that are both stationary and monotonic, such as the Gaussian and Matérn kernels, can encode neither input-dependent function dynamics nor long-range correlations
within the input space. Non-monotonic and non-stationary functions are commonly encountered in
realistic signal processing [19], time series analysis [9], bioinformatics [5, 20], and in geostatistics
applications [7, 8].
Recently, several authors have explored kernels that are either non-monotonic or non-stationary. A
non-monotonic kernel can reveal informative manifolds over the input space by coupling distant
points due to periodic or other effects. Non-monotonic kernels have been derived from the Fourier
decomposition of kernels [13, 24, 30], which renders them inherently stationary. Non-stationary
kernels, on the other hand, are based on generalising monotonic base kernels, such as the Matérn
family of kernels [6, 15], by partitioning the input space [4], or by input transformations [25].
We propose an expressive and efficient kernel family that is ? in contrast to earlier methods ?
both non-stationary and non-monotonic, and hence can infer long-range or periodic relations in an
input-dependent manner. We derive the kernel from first principles by solving the more expressive
generalised Fourier decomposition of non-stationary functions, rather than the more limited standard Fourier
decomposition exploited by earlier works. We propose and solve the generalised spectral density as a
mixture of Gaussian process density surfaces that model flexible input-dependent frequency patterns.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The kernel reduces to a stationary kernel with appropriate parameterisation. We show the expressivity
of the kernel with experiments on time series data, image-based pattern recognition and extrapolation,
and on climate data modelling.
2 Related Work
Bochner's theorem for stationary signals, whose covariance can be written as $k(\tau) = k(x - x') = k(x, x')$, implies a Fourier dual [30]

$$k(\tau) = \int S(s) e^{2\pi i s \tau} \, ds, \qquad S(s) = \int k(\tau) e^{-2\pi i s \tau} \, d\tau.$$

The dual is a special case of the more general Fourier transform (1), and has been exploited to design rich, yet stationary kernel representations [24, 32] and used for large-scale inference [17]. Lázaro-Gredilla et al. [13] proposed to directly learn the spectral density as a mixture of Dirac delta functions, leading to a sparse spectrum (SS) kernel $k_{SS}(\tau) = \frac{1}{Q} \sum_{i=1}^{Q} \cos(2\pi s_i^T \tau)$.
Wilson et al. [30] derived a stationary spectral mixture (SM) kernel by modelling the univariate spectral density using a mixture of normals
$$S_{SM}(s) = \sum_i w_i \left[ \mathcal{N}(s \mid \mu_i, \sigma_i^2) + \mathcal{N}(s \mid -\mu_i, \sigma_i^2) \right] / 2,$$
corresponding to the kernel function $k_{SM}(\tau) = \sum_i w_i \exp(-2\pi^2 \sigma_i^2 \tau^2) \cos(2\pi \mu_i \tau)$, which we generalize to the non-stationary case. The SM kernel was also extended for multidimensional inputs
using Kronecker structure for scalability [27]. Kernels derived from the spectral representation are
particularly well suited to encoding long-range, non-monotonic or periodic kernels; however, they
have so far been unable to handle non-stationarity, although [29] presented a partly non-stationary
SM kernel that has input-dependent mixture weights. Kom Samo and Roberts also derived a kernel
similar to our bivariate spectral mixture kernel in a recent technical report [11].
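For concreteness, the stationary SM kernel just described can be sketched in a few lines of NumPy. The component parameters below are illustrative assumptions, not values taken from any of the cited papers:

```python
import numpy as np

def sm_kernel(x1, x2, weights, means, stds):
    """Stationary spectral mixture kernel of Wilson et al. [30]:
    k_SM(tau) = sum_i w_i exp(-2 pi^2 sigma_i^2 tau^2) cos(2 pi mu_i tau),
    evaluated on all pairs tau = x1[j] - x2[k]."""
    tau = x1[:, None] - x2[None, :]          # pairwise differences
    K = np.zeros_like(tau)
    for w, mu, sigma in zip(weights, means, stds):
        K += w * np.exp(-2.0 * np.pi**2 * sigma**2 * tau**2) \
               * np.cos(2.0 * np.pi * mu * tau)
    return K

x = np.linspace(-1.0, 1.0, 50)
K = sm_kernel(x, x, weights=[1.0, 0.5], means=[2.0, 5.0], stds=[0.3, 0.1])
```

Note that letting all $\sigma_i \to 0$ degenerates the Gaussian envelope to a constant and recovers the sparse spectrum kernel up to the weight convention.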
Non-stationary kernels, on the other hand, have been constructed by non-stationary extensions of
Matérn and Gaussian kernels with input-dependent length-scales [3, 6, 15, 16], input space warpings
[22, 25], and with local stationarity with products of stationary and non-stationary kernels [2, 23].
The simplest non-stationary kernel is arguably the dot product kernel [18], which has been used as
a way to assign input-dependent signal variances [26]. Non-stationary kernels are a good match
for functions with transitions in their dynamics, yet are unsuitable for modelling non-monotonic
properties.
Our work can also be seen as a generalisation of wavelets, or time-dependent frequency components,
into general and smooth input-dependent components. In signal processing, Hilbert-Huang transforms
and Hilbert spectral analysis explore input-dependent frequencies, but with deterministic transform
functions on the inputs [8, 9].
3 Non-stationary spectral mixture kernels
This section introduces the main contributions. We employ the generalised spectral decomposition of
non-stationary functions and derive a practical and efficient family of kernels based on non-stationary
spectral components. Our approach relies on associating input-dependent frequencies for data inputs,
and solving a kernel through the generalised spectral transform.
The most general family of kernels is the non-stationary kernels, which include stationary kernels as special cases [2]. A non-stationary kernel $k(x, x') \in \mathbb{R}$ for scalar inputs $x, x' \in \mathbb{R}$ can be characterized by its spectral density $S(s, s')$ over frequencies $s, s' \in \mathbb{R}$, and the two are related via a generalised Fourier inverse transform¹

$$k(x, x') = \int_{\mathbb{R}} \int_{\mathbb{R}} e^{2\pi i (xs - x's')} \, \mu_S(ds, ds'), \qquad (1)$$
¹ We focus on scalar inputs and frequencies for simplicity. An extension based on vector-valued inputs and frequencies [2, 10] is straightforward.
where $\mu_S$ is a Lebesgue-Stieltjes measure associated to some positive semi-definite (PSD) spectral density function $S(s, s')$ with bounded variations [2, 14, 31], which we denote as the spectral surface since it considers the amplitude of frequency pairs (see Figure 1a).

Figure 1: (a): Spectral density surface of a single component bivariate spectral mixture kernel with 8 permuted peaks. (b): The corresponding kernel on inputs $x \in [-1, 1]$.
The generalised Fourier transform (1) specifies that a spectral surface $S(s, s')$ generates a PSD kernel $K(x, x')$ that is non-stationary unless the spectral measure mass is concentrated only on the diagonal $s = s'$. We design a practical, efficient and flexible parameterisation of spectral surfaces that, in turn,
specifies novel non-stationary kernels with input-dependent characteristics and potentially long-range
non-monotonic correlation structures.
3.1 Bivariate Spectral Mixture kernel
Next, we introduce spectral kernels that remove the restriction of stationarity of earlier works. We start by modeling the spectral density as a mixture of Q bivariate Gaussian components
$$S_i(s, s') = \sum_{\tilde{\boldsymbol\mu}_i \in \{\mu_i, \mu_i'\}^2} \mathcal{N}\!\left( \begin{pmatrix} s \\ s' \end{pmatrix} \,\Big|\, \tilde{\boldsymbol\mu}_i, \Sigma_i \right), \qquad \Sigma_i = \begin{pmatrix} \sigma_i^2 & \rho_i \sigma_i \sigma_i' \\ \rho_i \sigma_i \sigma_i' & \sigma_i'^2 \end{pmatrix}, \qquad (2)$$
with parameterisation using the correlation $\rho_i$, means $\mu_i, \mu_i'$ and variances $\sigma_i^2, \sigma_i'^2$. To produce a PSD spectral density $S_i$ as required by equation (1) we need to include the symmetry $S_i(s, s') = S_i(s', s)$ and sufficient diagonal components $S_i(s, s), S_i(s', s')$. To additionally result in a real-valued kernel, symmetry is required with respect to the negative frequencies as well, i.e., $S_i(s, s') = S_i(-s, -s')$. The sum over $\tilde{\boldsymbol\mu}_i \in \{\mu_i, \mu_i'\}^2$ satisfies all three requirements by iterating over the four permutations of $\{\mu_i, \mu_i'\}^2$ and the opposite signs $(-\mu_i, -\mu_i')$, resulting in eight components (see Figure 1a).
The generalised Fourier inverse transform (1) can be solved in closed form for a weighted spectral surface mixture $S(s, s') = \sum_{i=1}^{Q} w_i^2 S_i(s, s')$ using Gaussian integral identities (see the Supplement):
$$k(x, x') = \sum_{i=1}^{Q} w_i^2 \exp\!\left(-2\pi^2 \tilde{\mathbf{x}}^T \Sigma_i \tilde{\mathbf{x}}\right) \Psi_{\mu_i, \mu_i'}(x)^T \Psi_{\mu_i, \mu_i'}(x') \qquad (3)$$
where
$$\Psi_{\mu_i, \mu_i'}(x) = \begin{pmatrix} \cos 2\pi\mu_i x + \cos 2\pi\mu_i' x \\ \sin 2\pi\mu_i x + \sin 2\pi\mu_i' x \end{pmatrix},$$
and where we define $\tilde{\mathbf{x}} = (x, -x')^T$ and introduce mixture weights $w_i$ for each component. We denote the proposed kernel as the bivariate spectral mixture (BSM) kernel (see Figure 1b). The positive definiteness of the kernel is guaranteed by the spectral transform, and is also easily verified since the sinusoidal components form an inner product and the exponential component resembles an unscaled Gaussian density. A similar formulation for non-stationary spectral kernels was presented also in a technical report [11].
Figure 2: (a)-(d): Examples of kernel matrices on inputs $x \in [-1, 1]$ for a Gaussian kernel (a), sparse spectrum kernel [13] (b), spectral mixture kernel [30] (c), and for the GSM kernel (d). (e)-(h): The corresponding generalised spectral density surfaces of the four kernels. (i)-(l): The corresponding spectrograms, that is, input-dependent frequency amplitudes. The GSM kernel is highlighted with a spectrogram mixture of Q = 2 Gaussian process surface functions.
We immediately notice that the BSM kernel vanishes rapidly outside the origin $(x, x') = (0, 0)$. We would require a huge number of components centered at different points $x_i$ to cover a reasonably-sized input space.
3.2 Generalised Spectral Mixture (GSM) kernel
We extend the kernel derived in Section 3.1 further by parameterising the frequencies, length-scales and mixture weights as Gaussian processes², which form a smooth spectrogram (see Figure 2(l)):
$$\log w_i(x) \sim \mathcal{GP}(0, k_w(x, x')), \qquad (4)$$
$$\log \ell_i(x) \sim \mathcal{GP}(0, k_\ell(x, x')), \qquad (5)$$
$$\operatorname{logit} \mu_i(x) \sim \mathcal{GP}(0, k_\mu(x, x')). \qquad (6)$$
Here the log transform is used to ensure that the weights $w(x)$ and lengthscales $\ell(x)$ are non-negative, and the logit transform $\operatorname{logit} \mu(x) = \log \frac{\mu}{F_N - \mu}$ limits the learned frequencies between zero and the Nyquist frequency $F_N$, which is defined as half of the sampling rate of the signal.

A GP prior $f(x) \sim \mathcal{GP}(0, k(x, x'))$ defines a distribution over zero-mean functions whose covariance between function values, $\operatorname{cov}[f(x), f(x')] = k(x, x')$, equals their prior kernel. For any collection of inputs $x_1, \ldots, x_N$, the function values follow a multivariate normal distribution $(f(x_1), \ldots, f(x_N))^T \sim \mathcal{N}(\mathbf{0}, K)$, where $K_{ij} = k(x_i, x_j)$. The key property of Gaussian processes is that they can encode smooth functions by correlating function values of input points that are similar according to the kernel $k(x, x')$. We use standard Gaussian kernels $k_w$, $k_\ell$ and $k_\mu$.
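To make the construction concrete, the following sketch draws one realisation of the three latent hyper-functions and maps them through the transforms above. The prior lengthscale and the Nyquist frequency F_N are illustrative assumptions; the inverse of the logit transform is the scaled logistic function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(-1.0, 1.0, n)

# Shared Gaussian prior kernel for the hyper-functions (lengthscale 0.5 assumed),
# with jitter added so the Cholesky factorisation is numerically stable.
Kx = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.5**2) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(Kx)

F_N = 10.0                                                 # assumed Nyquist frequency
w   = np.exp(L @ rng.standard_normal(n))                   # Eq. (4): w(x) > 0
ell = np.exp(L @ rng.standard_normal(n))                   # Eq. (5): ell(x) > 0
mu  = F_N / (1.0 + np.exp(-(L @ rng.standard_normal(n))))  # Eq. (6): 0 < mu(x) < F_N
```

Multiplying a Cholesky factor of the prior covariance by standard-normal draws is the usual way to sample a GP at a fixed set of inputs; the exp and logistic maps then enforce the constraints stated above.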
² See the Supplement for a tutorial on Gaussian processes.
We accommodate the input-dependent lengthscale by replacing the exponential part of (3) by the Gibbs kernel
$$k_{\text{Gibbs},i}(x, x') = \sqrt{\frac{2\ell_i(x)\ell_i(x')}{\ell_i(x)^2 + \ell_i(x')^2}} \exp\!\left( -\frac{(x - x')^2}{\ell_i(x)^2 + \ell_i(x')^2} \right),$$
which is a non-stationary generalisation of the Gaussian kernel [3, 6, 15]. We propose a non-stationary generalised spectral mixture (GSM) kernel with a simple closed form (see the Supplement):
$$k_{\text{GSM}}(x, x') = \sum_{i=1}^{Q} w_i(x) w_i(x') \, k_{\text{Gibbs},i}(x, x') \cos\!\big(2\pi(\mu_i(x)x - \mu_i(x')x')\big). \qquad (7)$$
The kernel is a product of three PSD terms. The GSM kernel encodes the similarity between two data points based on their combined signal variance $w(x)w(x')$, and the frequency surface based on the frequencies $\mu(x), \mu(x')$ and frequency lengthscales $\ell(x), \ell(x')$ associated with both inputs. The GSM kernel encodes the spectrogram surface mixture into a relatively simple kernel. The kernel reduces to the stationary spectral mixture (SM) kernel [30] with constant functions $w_i(x) = w_i$, $\mu_i(x) = \mu_i$ and $\ell_i(x) = 1/(2\pi\sigma_i)$ (see the Supplement).
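As a minimal single-component sketch (Q = 1), the closed form (7) can be evaluated directly once realisations of w(x), ℓ(x) and μ(x) are available as arrays. Here they are fixed deterministically rather than drawn from the GP priors, so this illustrates the kernel formula, not the full model:

```python
import numpy as np

def gsm_kernel_1c(x, w, ell, mu):
    """Single-component GSM kernel of Eq. (7): a Gibbs envelope times an
    input-dependent cosine, scaled by the signal-variance term w(x)w(x')."""
    l2 = ell[:, None]**2 + ell[None, :]**2                 # ell(x)^2 + ell(x')^2
    gibbs = np.sqrt(2.0 * np.outer(ell, ell) / l2) \
            * np.exp(-(x[:, None] - x[None, :])**2 / l2)   # Gibbs kernel
    phase = mu * x                                         # mu(x) * x
    cosine = np.cos(2.0 * np.pi * (phase[:, None] - phase[None, :]))
    return np.outer(w, w) * gibbs * cosine

x = np.linspace(-1.0, 1.0, 40)
mu = 1.0 + (1.0 - x)**2          # the decreasing-frequency example of Section 5.1
K = gsm_kernel_1c(x, w=np.ones_like(x),
                  ell=np.full_like(x, np.exp(-1.0)), mu=mu)
```

With constant w, ℓ and μ the resulting matrix is a stationary SM component, matching the reduction noted above.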
We have presented the proposed kernel (7) for univariate inputs for simplicity. The kernel can be
extended to multivariate inputs in a straightforward manner using the generalised Fourier transform
with vector-valued inputs [2, 10]. However, in many applications multivariate inputs have a gridlike structure, for instance in geostatistics, image analysis and temporal models. We exploit this
assumption and propose a multivariate extension that assumes the inputs to decompose across input dimensions [1, 27]:
$$k_{\text{GSM}}(\mathbf{x}, \mathbf{x}' \mid \theta) = \prod_{p=1}^{P} k_{\text{GSM}}(x_p, x_p' \mid \theta_p). \qquad (8)$$
Here $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^P$, $\theta = (\theta_1, \ldots, \theta_P)$ collects the dimension-wise kernel parameters $\theta_p = (w_{ip}, \ell_{ip}, \mu_{ip})_{i=1}^{Q}$ of the $n$-dimensional realisations $w_{ip}, \ell_{ip}, \mu_{ip} \in \mathbb{R}^n$ per dimension $p$. Then, the kernel matrix can be expressed using Kronecker products as $K_\theta = K_{\theta_1} \otimes \cdots \otimes K_{\theta_P}$, while
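On a full grid, the product form (8) means the dense kernel matrix factorises dimension by dimension. The sketch below assembles the Kronecker structure explicitly, with arbitrary per-dimension PSD matrices standing in for the GSM factors (in practice the structure is exploited implicitly rather than materialised):

```python
import numpy as np
from functools import reduce

def kron_kernel(per_dim_Ks):
    """K_theta = K_theta1 kron ... kron K_thetaP for grid-structured inputs."""
    return reduce(np.kron, per_dim_Ks)

def rbf(x, ls):
    """Gaussian kernel matrix, a stand-in for a per-dimension GSM factor."""
    return np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ls**2)

K1 = rbf(np.linspace(0.0, 1.0, 3), 0.5)   # first-dimension factor (3 grid points)
K2 = rbf(np.linspace(0.0, 1.0, 4), 0.3)   # second-dimension factor (4 grid points)
K = kron_kernel([K1, K2])                 # 12 x 12 kernel over the 3 x 4 grid
```

Because the eigendecomposition of a Kronecker product is the Kronecker product of the factor eigendecompositions, operations on K reduce to operations on the small factors, which is what yields the sub-cubic inference cost quoted in Section 4.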
4 Inference
We use the Gaussian process regression framework and assume a Gaussian likelihood over $N = n^P$ data points³ $(x_j, y_j)_{j=1}^{N}$ with all outputs collected into a vector $\mathbf{y} \in \mathbb{R}^N$,
$$y_j = f(x_j) + \varepsilon_j, \qquad \varepsilon_j \sim \mathcal{N}(0, \sigma_n^2), \qquad f(x) \sim \mathcal{GP}(0, k_{\text{GSM}}(x, x' \mid \theta)), \qquad (9)$$
with a standard predictive GP posterior $f(x_* \mid \mathbf{y})$ for a new input point $x_*$ [18]. The posterior can be efficiently computed using Kronecker identities [21] (see the Supplement).
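The standard predictive posterior referred to here follows the usual Cholesky-based recipe [18]. The sketch below uses a generic kernel matrix and is not specific to the GSM kernel or the Kronecker machinery:

```python
import numpy as np

def gp_posterior(K, K_star, K_starstar, y, noise_var):
    """GP predictive posterior for a Gaussian likelihood:
    mean = K_*^T (K + sigma_n^2 I)^{-1} y,
    cov  = K_** - K_*^T (K + sigma_n^2 I)^{-1} K_*,
    computed with Cholesky solves for numerical stability."""
    L = np.linalg.cholesky(K + noise_var * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, K_star)
    return K_star.T @ alpha, K_starstar - V.T @ V

def rbf(a, b, ls=0.2):
    """Illustrative stand-in kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)
# Predict back at the training inputs, so K_star = K and K_** = K.
mean, cov = gp_posterior(rbf(x, x), rbf(x, x), rbf(x, x), y, noise_var=1e-6)
```

The two Cholesky solves are algebraically identical to forming $(K + \sigma_n^2 I)^{-1}$ explicitly, but better conditioned.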
We aim to infer the noise variance $\sigma_n^2$ and the kernel parameters $\theta = (w_{ip}, \ell_{ip}, \mu_{ip})_{i=1,p=1}^{Q,P}$ that reveal the input-dependent frequency-based correlation structures in the data, while regularising the learned kernel to penalise overfitting. We perform MAP inference over the log marginalized posterior $\log p(\theta \mid \mathbf{y}) \propto \log p(\mathbf{y} \mid \theta)p(\theta) = \mathcal{L}(\theta)$, where the functions $f(x)$ have been marginalised out,
$$\mathcal{L}(\theta) = \log \left[ \mathcal{N}(\mathbf{y} \mid \mathbf{0}, K_\theta + \sigma_n^2 I) \prod_{i,p=1}^{Q,P} \mathcal{N}(\tilde{\mathbf{w}}_{ip} \mid \mathbf{0}, K_{wp}) \, \mathcal{N}(\tilde{\boldsymbol\mu}_{ip} \mid \mathbf{0}, K_{\mu p}) \, \mathcal{N}(\tilde{\boldsymbol\ell}_{ip} \mid \mathbf{0}, K_{\ell p}) \right], \qquad (10)$$
where $K_{wp}, K_{\mu p}, K_{\ell p}$ are $n \times n$ prior matrices per dimension $p$, and $\tilde{\mathbf{w}}, \tilde{\boldsymbol\mu}$ and $\tilde{\boldsymbol\ell}$ represent the log or logit transformed variables. The marginalized posterior automatically balances between parameters $\theta$ that fit the data and a model that is not overly complex [18]. We can efficiently evaluate both
the marginalized posterior and its gradients in $O(PN^{\frac{P+1}{P}})$ instead of the usual $O(N^3)$ complexity [21, 27] (see the Supplement).

³ Assuming that we have an equal number of points $n$ in all dimensions.
Gradient-based optimisation of (10) is likely to converge very slowly due to parameters $\tilde{\mathbf{w}}_{ip}, \tilde{\boldsymbol\mu}_{ip}, \tilde{\boldsymbol\ell}_{ip}$ being highly self-correlated. We remove the correlations by whitening the variables as $\tilde{\boldsymbol\theta} = L^{-1} \boldsymbol\theta$, where $L$ is the Cholesky decomposition of the prior covariances. We maximize $\mathcal{L}$ using gradient ascent with respect to the whitened variables $\tilde{\boldsymbol\theta}$ by evaluating $\mathcal{L}(L\tilde{\boldsymbol\theta})$ and the gradient as [6, 12]
$$\frac{\partial \mathcal{L}}{\partial \tilde{\boldsymbol\theta}} = \frac{\partial \mathcal{L}}{\partial \boldsymbol\theta} \frac{\partial \boldsymbol\theta}{\partial \tilde{\boldsymbol\theta}} = L^T \frac{\partial \mathcal{L}}{\partial \boldsymbol\theta}. \qquad (11)$$
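The whitening trick and the chain rule (11) can be checked numerically on a toy objective; the quadratic objective below is a stand-in for illustration, not the marginal posterior (10):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
x = np.linspace(-1.0, 1.0, n)
# Prior covariance of the (correlated) parameters, plus jitter for stability.
K_prior = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.4**2) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(K_prior)          # prior Cholesky factor

def objective(theta):                    # toy stand-in for the objective L(theta)
    return -0.5 * theta @ theta

def grad_theta(theta):                   # its exact gradient dL/dtheta
    return -theta

theta_w = rng.standard_normal(n)         # whitened variables theta_w = L^{-1} theta
theta = L @ theta_w                      # correlated, GP-prior-distributed parameters
grad_w = L.T @ grad_theta(theta)         # Eq. (11): gradient in whitened space
```

In the whitened coordinates the prior over the parameters is an isotropic standard normal, which is why gradient ascent converges much faster there.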
5 Experiments
We apply our proposed kernel first on simple simulated time series, then on texture images and lastly
on a land surface temperature dataset. With the image data, we compare our method to two stationary
mixture kernels, specifically the spectral mixture (SM) [30] and sparse spectrum (SS) kernels [13],
and the standard squared exponential (SE) kernel. We employ the GPML Matlab toolbox, which
directly implements the SM and SE kernels, and the SS kernel as a meta kernel combining simple
cosine kernels. The GPML toolbox also implements Kronecker inference automatically for these
kernels.
We implemented the proposed GSM kernel and inference in Matlab⁴. For optimising the log posterior
(10) we employ the L-BFGS algorithm. For both our method and the comparisons, we restart
the optimisation from 10 different initialisations, each of which is chosen as the best among 100
randomly sampled hyperparameter values as evaluating the log posterior is cheap compared to
evaluating gradients or running the full optimisation.
5.1 Simulated time series with a decreasing frequency component
First we experiment whether the GSM kernel can find a simulated time-varying frequency pattern. We
simulated a dataset where the frequency of the signal changes deterministically as $\mu(x) = 1 + (1 - x)^2$ on the interval $x \in [-1, 1]$. We built a single-component GSM kernel $K$ using the specified functions $\mu(x)$, $\ell(x) = \ell = \exp(-1)$ and $w(x) = w = 1$. We sampled a noisy function $\mathbf{y} \sim \mathcal{N}(\mathbf{0}, K + \sigma_n^2 I)$ with a noise variance $\sigma_n^2 = 0.1$. The example in Figure 3 shows the learned GSM kernel, as well
as the data and the function posterior f (x). For this 1D case, we also employed the empirical
spectrogram for initialising the hyperparameter values. The kernel correctly captures the increasing
frequency towards negative values (towards left in Figure 3a).
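Given any kernel matrix K built as above, the noisy training set of this experiment is a single multivariate-normal draw. The placeholder Gaussian kernel below stands in for the single-component GSM kernel actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 100)
# Placeholder PSD kernel; in the experiment this is the single-component GSM kernel.
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.2**2)
noise_var = 0.1
C = K + noise_var * np.eye(len(x))
y = np.linalg.cholesky(C) @ rng.standard_normal(len(x))   # y ~ N(0, K + sigma_n^2 I)
```

Folding the noise variance into the covariance before the Cholesky draw produces the noisy observations directly, rather than adding noise as a separate step.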
5.2 Image data
We applied our kernel to two texture images. The first image of a sheet of metal represents a
mostly stationary periodic pattern. The second, a wood texture, represents an example of a very
non-stationary pattern, especially on the horizontal axis. We use the majority of the image as training data (the non-masked regions of Figures 4a and 4f), and use the compared kernels to predict a missing
cross-section in the middle, and also to extrapolate outside the borders of the original image.
Figure 4 shows the two texture images, and extrapolation predictions given by the proposed GSM
kernel, with a comparison to the spectral mixture (SM), sparse spectrum (SS) and standard squared
exponential (SE) kernels. For GSM, SM and SS we used Q = 5 mixture components for the metal
texture, and Q = 10 components for the more complex wood texture.
The GSM kernel gives the most pleasing result visually, and fills in both patterns well with consistent
external extrapolation as well. The stationary SM kernel does capture the cross-section, but has
trouble extrapolating outside the borders. The SS kernel fails to represent even the training data; it
lacks any smoothness in the frequency space. The Gaussian kernel extrapolates poorly.
⁴ Implementation available at https://github.com/sremes/nonstationary-spectral-kernels
Figure 3: (a) A simulated time series with a single decreasing frequency component and a GP fitted using a GSM kernel. (b) The learned kernel shows that close to $x = -1$ the signal is highly correlated and anti-correlated with close time points, while these periodic dependencies vanish when moving towards $x = 1$. For visualisation, the values are scaled as $K = \operatorname{sgn}(K)\sqrt{|K|}$. (c) The spectrogram shows the decreasing frequency. (d) The learned latent frequency function $\mu(x)$ correctly finds the decreasing trend. The length-scale $\ell(x)$ is almost a constant, and weights $w(x)$ slightly decrease in time.
5.3 Spatio-Temporal Analysis of Land Surface Temperatures
NASA⁵ provides a land surface temperature dataset that we used to demonstrate our kernel in analysis
of spatio-temporal data. Our primary objective is to demonstrate the capability of the kernel in
inferring long-range, non-stationary spatial and temporal covariances.
We took a subset of four years (February 2000 to February 2004) of North American land temperatures for training data. In total we get 407,232 data points, constituting 48 monthly temperature
measurements on an 84 × 101 map grid. The grid also contains water regions, which we imputed
with the mean temperature of each month. We experimented with the data by learning a generalized
spectral mixture kernel using Q = 5 components.
Figure 5 presents our results. Figure 5b highlights the training data and model fits for a winter
and summer month, respectively. Figure 5a shows the non-stationary kernel slices at two locations
across both latitude and longitude, as well as indicating that the spatial covariances are remarkably
non-symmetric. Figure 5c indicates five months of successive training data followed by three months
of test data predictions.
6 Discussion
In this paper we have introduced non-stationary spectral mixture kernels, with treatment based on
the generalised Fourier transform of non-stationary functions. We first derived the bivariate spectral
mixture (BSM) kernel as a mixture of non-stationary spectral components. However, we argue it
has only limited practical use due to requiring an impractical amount of components to cover any
sufficiently sized input space. The main contribution of the paper is the generalised spectral mixture
(GSM) kernel with input-dependent Gaussian process frequency surfaces. The Gaussian process
components can cover non-trivial input spaces with just a few interpretable components. The GSM
kernel is a flexible, practical and efficient kernel that can learn both local and global correlations
⁵ https://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD11C1_M_LSTDA
Figure 4: A metal texture data with Q = 5 components used for GSM, SM and SS kernels shown in (a)-(e) and a wood texture in (f)-(j) (with Q = 10 components). The GSM kernel performs the best, making the most believable extrapolation outside image borders in (b) and (g). The SM kernel fills in the missing cross pattern in (c) but does not extrapolate well. In (h) the SM kernel fills in the vertical middle block only with the mean value, while GSM in (g) is able to fill in a wood-like pattern. SS is not able to discover enough structure in either texture (d) or (i), while the SE kernel overfits by using a too short length-scale in (e) and (j).
across the input domains in an input-dependent manner. We highlighted the capability of the kernel
to find interesting patterns in the data by applying it on climate data where it is highly unrealistic
to assume the same (stationary) covariance pattern for every spatial location irrespective of spatial
structures.
Even though the proposed kernel is motivated by the generalised Fourier transform, the solution to its spectral surface
$$S_{\text{GSM}}(s, s') = \iint k_{\text{GSM}}(x, x') \, e^{-2\pi i (xs - x's')} \, dx \, dx' \qquad (12)$$
remains unknown due to having multiple GP functions inside the integral. Figure 2h highlights a
numerical integration of the surface equation (12) on an example GP frequency surface. Furthermore,
the theoretical work of Kom Samo and Roberts [11] on generalised spectral transforms suggests
that the GSM kernel may also be dense in the family of non-stationary kernels, that is, able to reproduce
arbitrary non-stationary kernels.
Acknowledgments
This work has been partly supported by the Finnish Funding Agency for Innovation (project Re:Know)
and Academy of Finland (COIN CoE, and grants 299915, 294238 and 292334). We acknowledge the
computational resources provided by the Aalto Science-IT project.
References

[1] S. Flaxman, A. G. Wilson, D. Neill, H. Nickisch, and A. Smola. Fast Kronecker inference in Gaussian processes with non-Gaussian likelihoods. In ICML, volume 2015, 2015.

[2] M. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299-312, 2001.

[3] M. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, University of Cambridge, 1997.
Figure 5: (a) Demonstrates the non-stationary spatial covariances in the land surface data. The vertical black lines denote the point $x'$ at which the kernel function $k(\cdot, x')$ is centered. (b) Sample reconstructions. In all plots, only the land area temperatures are shown. (c) Posterior for the five last training months (until Jan 2004) and prediction for the three next months (February 2004 to April 2004), which the model is able to construct reasonably accurately.
[4] R. Gramacy and H. Lee. Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103:1119-1130, 2008.

[5] M. Grzegorczyk, D. Husmeier, K. Edwards, P. Ghazal, and A. Millar. Modelling non-stationary gene regulatory processes with a non-homogeneous Bayesian network and the allocation sampler. Bioinformatics, 24:2071-2078, 2008.

[6] M. Heinonen, H. Mannerström, J. Rousu, S. Kaski, and H. Lähdesmäki. Non-stationary Gaussian process regression with Hamiltonian Monte Carlo. In AISTATS, volume 51, pages 732-740, 2016.

[7] D. Higdon, J. Swall, and J. Kern. Non-stationary spatial modeling. Bayesian Statistics, 6:761-768, 1999.

[8] N. Huang. A review on Hilbert-Huang transform: Method and its applications to geophysical studies. Reviews of Geophysics, 46, 2008.

[9] N. Huang, S. Zheng, S. Long, M. Wu, H. Shih, Q. Zheng, N.-Q. Yen, C. Tung, and H. Liu. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 454:903-995, 1998.

[10] Y. Kakihara. A note on harmonizable and V-bounded processes. Journal of Multivariate Analysis, 16:140-156, 1985.

[11] Y.-L. Kom Samo and S. Roberts. Generalized spectral kernels. Technical report, University of Oxford, 2015. arXiv:1506.02236.

[12] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005.

[13] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865-1881, 2010.

[14] M. Loève. Probability Theory II, volume 46 of Graduate Texts in Mathematics. Springer, 1978.

[15] C. Paciorek and M. Schervish. Nonstationary covariance functions for Gaussian process regression. In NIPS, pages 273-280, 2004.

[16] C. Paciorek and M. Schervish. Spatial modelling using a new class of nonstationary covariance functions. Environmetrics, 17(5):483-506, 2006.

[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177-1184, 2008.

[18] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

[19] O. Rioul and V. Martin. Wavelets and signal processing. IEEE Signal Processing Magazine, 8:14-38, 1991.

[20] J. Robinson and A. Hartemink. Non-stationary dynamic Bayesian networks. In Advances in Neural Information Processing Systems, pages 1369-1376, 2009.

[21] Y. Saatçi. Scalable Inference for Structured Gaussian Process Models. PhD thesis, University of Cambridge, 2011.

[22] P. Sampson and P. Guttorp. Nonparametric estimation of nonstationary spatial covariance structure. Journal of the American Statistical Association, 87, 1992.

[23] R. Silverman. Locally stationary random processes. Information Theory, IRE Transactions on, 3:182-187, 1957.

[24] A. Sinha and J. Duchi. Learning kernels with random features. In NIPS, 2016.

[25] J. Snoek, K. Swersky, R. Zemel, and R. Adams. Input warping for Bayesian optimization of non-stationary functions. In ICML, volume 32, pages 1674-1682, 2014.

[26] V. Tolvanen, P. Jylänki, and A. Vehtari. Expectation propagation for nonstationary heteroscedastic Gaussian process regression. In Machine Learning for Signal Processing (MLSP), 2014 IEEE International Workshop on, pages 1-6. IEEE, 2014.

[27] A. Wilson, E. Gilboa, J. P. Cunningham, and A. Nehorai. Fast kernel learning for multidimensional pattern extrapolation. In NIPS, 2014.

[28] A. Wilson and H. Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In International Conference on Machine Learning, pages 1775-1784, 2015.

[29] A. G. Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.

[30] A. G. Wilson and R. Adams. Gaussian process kernels for pattern discovery and extrapolation. In ICML, 2013.

[31] A. M. Yaglom. Correlation Theory of Stationary and Related Random Functions: Volume I: Basic Results. Springer Series in Statistics. Springer, 1987.

[32] Z. Yang, A. Smola, L. Song, and A. Wilson. A la carte: Learning fast kernels. In AISTATS, 2015.
surface:19 whitening:2 base:1 posterior:10 multivariate:5 recent:1 perspective:1 kwp:2 meta:1 binary:1 exploited:2 seen:1 spectrogram:6 employed:1 bochner:1 converge:1 maximize:1 husmeier:1 signal:11 semi:1 ii:1 full:1 multiple:1 infer:2 reduces:2 rahimi:1 smooth:4 technical:3 match:1 adapt:1 characterized:1 cross:3 long:8 prediction:3 scalable:2 basic:1 regression:8 whitened:1 optimisation:3 metric:1 rousu:1 expectation:1 arxiv:1 kernel:139 represent:2 remarkably:1 interval:1 finnish:1 ascent:1 nonstationary:5 yang:1 sami:2 enough:1 xj:3 fit:2 associating:1 opposite:1 inner:1 whether:1 motivated:1 handled:1 remes:2 nyquist:1 song:1 render:1 matlab:1 iterating:1 se:4 transforms:2 amount:1 nonparametric:1 locally:1 concentrated:1 simplest:1 imputed:1 http:2 specifies:2 tutorial:1 notice:1 sign:1 delta:1 overly:1 per:2 correctly:2 hyperparameter:2 mat:3 key:2 four:3 shih:1 neither:1 verified:1 schervish:2 sum:1 wood:4 year:1 inverse:2 swersky:1 family:6 almost:1 wu:1 environmetrics:1 initialising:1 qui:1 ks:1 ki:1 guaranteed:1 summer:1 followed:1 neill:1 encountered:1 extrapolates:1 kronecker:5 helsinki:1 encodes:3 markus:2 dominated:1 generates:1 fourier:12 loeve:1 relatively:1 martin:1 department:1 structured:2 gredilla:2 according:1 across:4 slightly:1 wi:8 parameterisation:3 making:1 equation:2 resource:1 remains:1 turn:1 know:1 available:1 vidal:1 eight:1 apply:1 spectral:46 appropriate:1 coin:1 rp:1 original:1 denotes:1 assumes:1 include:2 ensure:1 running:1 trouble:1 marginalized:4 coe:1 unsuitable:1 exploit:1 especially:1 february:3 society:1 dxdx0:1 warping:2 objective:1 primary:1 usual:1 diagonal:2 gradient:5 distance:2 unable:1 simulated:5 samo:3 restart:1 majority:1 sci:1 manifold:1 argue:1 considers:1 collected:1 trivial:1 water:1 assuming:1 length:4 balance:1 innovation:1 mostly:1 robert:3 potentially:2 negative:3 design:2 implementation:1 unknown:1 perform:1 vertical:2 sm:12 acknowledge:1 anti:1 extended:2 rn:3 arbitrary:1 introduced:1 pair:1 
required:2 toolbox:2 specified:1 connection:1 ds0:1 learned:5 expressivity:1 geophysics:1 geostatistics:2 nip:4 robinson:1 able:3 pattern:13 wi2:2 latitude:1 built:1 royal:1 unrealistic:1 nki:1 marginalised:1 github:1 technology:1 axis:1 irrespective:1 flaxman:1 text:1 prior:4 review:2 discovery:2 permutation:1 highlight:2 interesting:1 allocation:1 sufficient:1 xp:1 s0:14 metal:3 principle:1 consistent:1 unscaled:1 land:6 supported:1 last:1 rasmussen:3 gilboa:1 institute:1 sparse:5 slice:1 dimension:5 xn:2 transition:1 evaluating:3 rich:1 author:1 commonly:1 collection:1 far:1 constituting:1 transaction:1 approximate:1 longrange:1 gene:1 global:1 heinonen:3 correlating:1 overfitting:1 generalising:1 spatio:2 xi:2 spectrum:6 latent:1 regulatory:1 guttorp:1 additionally:1 learn:3 reasonably:2 ca:1 inherently:1 correlated:3 symmetry:1 complex:2 domain:1 aistats:2 main:2 dense:1 border:3 noise:2 n2:5 x1:2 definiteness:1 fails:1 inferring:1 deterministically:1 exponential:4 vanish:1 wavelet:2 theorem:1 wip:3 explored:1 x:2 experimented:1 bivariate:6 workshop:1 supplement:6 texture:9 phd:3 points3:1 suited:1 lt:1 univariate:2 explore:1 likely:1 expressed:1 hartemink:1 kiss:1 scalar:2 monotonic:13 springer:3 satisfies:1 relies:1 identity:2 sized:2 month:6 towards:3 sampson:1 regularising:1 change:1 generalisation:2 specifically:1 sampler:1 total:1 partly:2 la:1 geophysical:1 indicating:1 cholesky:1 bioinformatics:2 evaluate:1 extrapolate:2 |
Overcoming Catastrophic Forgetting by
Incremental Moment Matching
Sang-Woo Lee1 , Jin-Hwa Kim1 , Jaehyun Jun1 , Jung-Woo Ha2 , and Byoung-Tak Zhang1,3
Seoul National University1
Clova AI Research, NAVER Corp2
Surromind Robotics3
{slee,jhkim,jhjun}@bi.snu.ac.kr [email protected]
[email protected]
Abstract
Catastrophic forgetting is a problem of neural networks that loses the information
of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally
matches the moment of the posterior distribution of the neural network which is
trained on the first and the second task, respectively. To make the search space
of posterior parameter smooth, the IMM procedure is complemented by various
transfer learning techniques including weight transfer, L2-norm of the old and the
new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSDBirds, and Lifelog datasets. The experimental results show that IMM achieves
state-of-the-art performance by balancing the information between an old and a
new network.
1 Introduction
Catastrophic forgetting is a fundamental challenge for artificial general intelligence based on neural
networks. The models that use stochastic gradient descent often forget the information of previous
tasks after being trained on a new task [1, 2]. Online multi-task learning that handles such problems
is described as continual learning. This classic problem has resurfaced with the renaissance of deep
learning research [3, 4].
Recently, the concept of applying a regularization function to a network trained by the old task to
learning a new task has received much attention. This approach can be interpreted as an approximation of sequential Bayesian [5, 6]. Representative examples of this regularization approach include
learning without forgetting [7] and elastic weight consolidation [8]. These algorithms succeeded in
some experiments where their own assumption of the regularization function fits the problem.
Here, we propose incremental moment matching (IMM) to resolve the catastrophic forgetting problem. IMM uses the framework of Bayesian neural networks, which implies that uncertainty is introduced on the parameters in neural networks, and that the posterior distribution is calculated [9, 10].
The dimension of the random variable in the posterior distribution is the number of the parameters
in the neural networks. IMM approximates the mixture of Gaussian posterior with each component
representing parameters for a single task to one Gaussian distribution for a combined task. To merge
the posteriors, we introduce two novel methods of moment matching. One is mean-IMM, which
simply averages the parameters of two networks for old and new tasks as the minimization of the
average of KL-divergence between one approximated posterior distribution for the combined task
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 content: the two merge rules,

μ1:2^mode = (Σ1^{-1} + Σ2^{-1})^{-1} (Σ1^{-1} μ1 + Σ2^{-1} μ2)
μ1:2^mean = (μ1 + μ2) / 2

together with the three transfer techniques — weight-transfer: μ1 → μ2; L2-transfer: ||μ2 − μ1||²; drop-transfer: μ1 + 2 · dropout(μ2 − μ1) — whose goal is to find a μ2 which makes μ1:2 perform better.]
Figure 1: Geometric illustration of incremental moment matching (IMM). Mean-IMM simply averages the parameters of two neural networks, whereas mode-IMM tries to find a maximum of the mixture of Gaussian posteriors. To make IMM reasonable, the search space of the loss function between the posterior means μ1 and μ2 should be reasonably smooth and convex-like. To find a μ2 which satisfies this condition of a smooth and convex-like path from μ1, we propose applying various transfer techniques for the IMM procedure.
and each Gaussian posterior for the single task [11]. The other is mode-IMM, which merges the parameters of two networks using a Laplacian approximation [9] to approximate a mode of the mixture
of two Gaussian posteriors, which represent the parameters of the two networks.
In general, it is too naïve to assume that the final posterior distribution for the whole task is Gaussian.
To make our IMM work, the search space of the loss function between the posterior means needs to
be smooth and convex-like. In other words, there should not be high cost barriers between the means
of the two networks for an old and a new task. To make our assumption of Gaussian distribution for
neural network reasonable, we applied three main transfer learning techniques on the IMM procedure: weight transfer, L2-norm of the old and the new parameters, and our newly proposed variant
of dropout using the old parameters. The whole procedure of IMM is illustrated in Figure 1.
2 Previous Works on Catastrophic Forgetting
One of the major approaches preventing catastrophic forgetting is to use an ensemble of neural networks. When a new task arrives, the algorithm makes a new network, and shares the representation
between the tasks [12, 13]. However, this approach has a complexity issue, especially in inference,
because the number of networks increases as the number of new tasks that need to be learned increases.
Another approach studies the methods using implicit distributed storage of information, in typical
stochastic gradient descent (SGD) learning. These methods use the idea of dropout, maxout, or neural module to distributively store the information for each task by making use of the large capacity of
the neural network [4]. Unfortunately, most studies following this approach had limited success and
failed to preserve performance on the old task when an extreme change to the environment occurred
[3]. Alternatively, Fernando et al. [14] proposed PathNet, which extends the idea of the ensemble
approach for parameter reuse [13] within a single network. In PathNet, a neural network has ten or
twenty modules in each layer, and three or four modules are picked for one task in each layer by
an evolutionary approach. This method alleviates the complexity issue of the ensemble approach to
continual learning in a plausible way.
The approach with a regularization term also has received attention. Learning without forgetting
(LwF) is one example of this approach, which uses the pseudo-training data from the old task [7].
Before learning the new task, LwF puts the training data of the new task into the old network,
and uses the output as pseudo-labels of the pseudo-training data. By optimizing both the pseudotraining data of the old task and the real data of the new task, LwF attempts to prevent catastrophic
forgetting. This framework is promising where the properties of the pseudo training set is similar to
the ideal training set. Elastic weight consolidation (EWC), another example of this approach, uses
sequential Bayesian estimation to update neural networks for continual learning [8]. In EWC, the
posterior distribution trained by the previous task is used to update the new prior distribution. This
new prior is used for learning the new posterior distribution of the new task in a Bayesian manner.
EWC assumes that the covariance matrix of the posterior is diagonal and there are no correlations
between the nodes. Though this assumption is fragile, EWC performs well in some domains.
EWC is a monumental recent work that uses sequential Bayesian for continual learning of neural
networks. However, updating the parameter of complex hierarchical models by sequential Bayesian
estimation is not new [5]. Sequential Bayes was used to learn topic models from stream data by
Broderick et al. [6]. Huang et al. applied sequential Bayesian to adapt a deep neural network to
the specific user in the speech recognition domain [15, 16]. They assigned the layer for the user
adaptation and applied MAP estimation to this single layer. Similar to our IMM method, Bayesian
moment matching is used for sum-product networks, a kind of deep hierarchical probabilistic model
[17]. Though sum-product networks are usually not scalable to large datasets, their online learning
method is useful, and it achieves similar performance to the batch learner. Our method using moment
matching focuses on continual learning and deals with significantly different statistics between tasks,
unlike the previous method.
3 Incremental Moment Matching
In incremental moment matching (IMM), the moments of posterior distributions are matched in an incremental way. In our work, we use a Gaussian distribution to approximate the posterior distribution of parameters. Given K sequential tasks, we want to find the optimal parameters μ*1:K and Σ*1:K of the Gaussian approximation function q1:K from the posterior parameters (μk, Σk) of each kth task:

p1:K ≡ p(θ | X1, …, XK, y1, …, yK) ≈ q1:K ≡ q(θ | μ1:K, Σ1:K)    (1)

pk ≡ p(θ | Xk, yk) ≈ qk ≡ q(θ | μk, Σk)    (2)

q1:K denotes an approximation of the true posterior distribution p1:K for the whole task, and qk denotes an approximation of the true posterior distribution pk over the training dataset (Xk, yk) for the kth task. θ denotes the vectorized parameter of the neural network. The dimension of μk and μ1:k is D, and the dimension of Σk and Σ1:k is D × D, where D is the dimension of θ. For example, a multi-layer perceptron (MLP) with [784-800-800-800-10] nodes has D = 1,917,610 parameters, including bias terms.
Next, we explain two proposed moment matching algorithms for the continual learning of modern
deep neural networks. The two algorithms generate two different moments of Gaussian with different
objective functions for the same dataset.
3.1 Mean-based Incremental Moment Matching (mean-IMM)
Mean-IMM averages the parameters of two networks in each layer, using mixing ratios αk with ∑_k αk = 1. The objective function of mean-IMM is to minimize the following local KL-distance, the weighted sum of KL-divergences between each qk and q1:K [11, 18]:

μ*1:K, Σ*1:K = argmin_{μ1:K, Σ1:K} ∑_k αk KL(qk || q1:K)    (3)

μ*1:K = ∑_k αk μk    (4)

Σ*1:K = ∑_k αk (Σk + (μk − μ*1:K)(μk − μ*1:K)^T)    (5)

μ*1:K and Σ*1:K are the optimal solution of the local KL-distance. Notice that covariance information is not needed for mean-IMM, since calculating μ*1:K does not require any Σk; the series of μk is sufficient to perform the task. The idea of mean-IMM is commonly used in shallow networks [19, 20]. However, the contribution of this paper is to discover when and how mean-IMM can be applied in modern deep neural networks, and to show that it can perform better with other transfer techniques. Future works may include other measures to merge the networks, such as the KL-divergence between q1:K and the mixture of each qk (i.e. KL(q1:K || ∑_k αk qk)) [18].
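The mean-IMM merge of Equation 4 reduces to a weighted average of flattened parameter vectors. The following minimal numpy sketch (our illustration; the toy vectors and function name are not from the paper) shows the two-task case:

```python
import numpy as np

def mean_imm(mus, alphas):
    # Eq. (4): weighted average of per-task posterior means,
    # with mixing ratios alpha_k that sum to one.
    assert abs(sum(alphas) - 1.0) < 1e-8
    return sum(a * mu for a, mu in zip(alphas, mus))

# Toy two-task example with hypothetical 4-parameter "networks".
mu1 = np.array([0.2, -1.0, 0.5, 0.0])
mu2 = np.array([0.6, -0.2, 0.1, 0.4])
merged = mean_imm([mu1, mu2], alphas=[0.5, 0.5])  # equals (mu1 + mu2) / 2
```

As the paper notes, no covariance estimate is needed for this merge; only the per-task means enter the computation.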
3.2 Mode-based Incremental Moment Matching (mode-IMM)
Mode-IMM is a variant of mean-IMM which uses the covariance information of the posterior of
Gaussian distribution. In general, a weighted average of two mean vectors of Gaussian distributions
is not a mode of MoG. In discriminative learning, the maximum of the distribution is of primary
interest. According to Ray and Lindsay [21], all the modes of a MoG with K clusters lie on the (K − 1)-dimensional hypersurface {θ | θ = (∑_k ak Σk^{-1})^{-1} (∑_k ak Σk^{-1} μk), 0 < ak < 1 and ∑_k ak = 1}. See Appendix A for more details.

Motivated by the above description, mode-IMM approximates the MoG with a Laplacian approximation, in which the logarithm of the function is expressed by its Taylor expansion [9]. Using the Laplacian approximation, the MoG is approximated as follows:

log q1:K ≈ ∑_k αk log qk + C = −(1/2) · θ^T (∑_k αk Σk^{-1}) θ + (∑_k αk Σk^{-1} μk)^T θ + C    (6)

μ*1:K = Σ*1:K · (∑_k αk Σk^{-1} μk)    (7)

Σ*1:K = (∑_k αk Σk^{-1})^{-1}    (8)
For Equation 8, in practice we add εI to the term to be inverted, with an identity matrix I and a small constant ε.
Here, we assume diagonal covariance matrices, which means that there is no correlation among parameters. This diagonal assumption is useful, since it decreases the number of parameters for each covariance matrix from O(D²) to O(D), where D is the dimension of the parameters.
For covariance, we use the inverse of a Fisher information matrix, following [8, 22]. The main
idea of this approximation is that the square of gradients for parameters is a good indicator of their
precision, which is the inverse of the variance. The Fisher information matrix for the kth task, Fk is
defined as:
Fk = E[ (∂/∂θk) ln p(ỹ | x, θk) · ((∂/∂θk) ln p(ỹ | x, θk))^T ]    (9)

where the probability of the expectation follows x ∼ πk and ỹ ∼ p(y | x, θk), and πk denotes an empirical distribution of Xk.
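Under the diagonal covariance assumption, Equations 7–9 become elementwise operations. The sketch below is our own minimal numpy rendering (toy numbers, illustrative names): the diagonal Fisher of Equation 9 supplies per-parameter precisions, and mode-IMM weights each coordinate by them:

```python
import numpy as np

def diagonal_fisher(per_sample_grads):
    # Eq. (9) under the diagonal assumption: mean squared gradient of
    # log p(y~|x, theta), per parameter; input shape (num_samples, num_params).
    return np.mean(per_sample_grads ** 2, axis=0)

def mode_imm(mus, fishers, alphas, eps=1e-8):
    # Eqs. (7)-(8) with Sigma_k^{-1} approximated by the diagonal Fisher F_k.
    precisions = [a * f for a, f in zip(alphas, fishers)]
    sigma = 1.0 / (sum(precisions) + eps)  # Eq. (8), plus eps*I for stability
    mu = sigma * sum(p * m for p, m in zip(precisions, mus))  # Eq. (7)
    return mu, sigma

# Toy example: the first parameter has a large Fisher value for task 1.
mu1, f1 = np.array([1.0, 0.0]), np.array([100.0, 1.0])
mu2, f2 = np.array([0.0, 1.0]), np.array([1.0, 1.0])
merged, _ = mode_imm([mu1, mu2], [f1, f2], alphas=[0.5, 0.5])
```

The first coordinate, where task 1 has a much larger Fisher value, stays near μ1 after merging; this precision weighting is exactly what mode-IMM adds over mean-IMM.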
4 Transfer Techniques for Incremental Moment Matching
In general, the loss function of neural networks is not convex. Consider that shuffling nodes and
their weights in a neural network preserves the original performance. If the parameters of two neural
networks initialized independently are averaged, it might perform poorly because of the high cost
barriers between the parameters of the two neural networks [23]. However, we will show that various
transfer learning techniques can be used to ease this problem, and make the assumption of Gaussian
distribution for neural networks reasonable. In this section, we introduce three practical techniques
for IMM, including weight-transfer, L2-transfer, and drop-transfer.
4.1 Weight-Transfer
Weight-transfer initializes the parameters θk for the new task with the parameters θk−1 of the previous task [24]. In our experiments, the use of weight-transfer was critical to the continual learning performance. For this reason, the experiments on IMM in this paper use the weight-transfer technique by default.
The weight-transfer technique is motivated by the geometrical property of neural networks discovered in the previous work [23]. They found that there is a straight path from the initial point to the
solution without any high cost barrier, in various types of neural networks and datasets. This discovery suggests that the weight-transfer from the previous task to the new task makes a smooth loss
Figure 2: Experimental results on visualizing the effect of weight-transfer. The geometric property of the parameter space of the neural network is analyzed. Brighter is better. θ1, θ2, and θ3 are the vectorized parameters of networks trained on randomly selected subsets of the CIFAR-10 dataset. This figure shows that there are better solutions between the three locally optimized parameters.
surface between two solutions for the tasks, so that the optimal solution for both tasks lies on the
interpolated point of the two solutions.
To empirically validate the concept of weight-transfer, we use the linear path analysis proposed by Goodfellow et al. [23] (Figure 2). We randomly chose 18,000 instances from the training dataset of CIFAR-10, and divided them into three subsets of 6,000 instances each. These three subsets are used for sequential training of CNN models, parameterized by θ1, θ2, and θ3, respectively. Here, θ2 is initialized from θ1, and then θ3 is initialized from θ2, in the same way as weight-transfer. In this analysis, each loss and accuracy is evaluated at a series of points θ = θ1 + α(θ2 − θ1) + β(θ3 − θ2), varying α and β. In Figure 2, the loss surface of the model on each online subset is nearly convex. The figure shows that the parameter at (1/3)(θ1 + θ2 + θ3), which is the same as the solution of mean-IMM, performs better than any of the reference points θ1, θ2, or θ3. However, when θ2 is not initialized by θ1, the convex-like shape disappears, since there is a high cost barrier between the loss functions of θ1 and θ2.
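The linear path analysis itself is straightforward to reproduce. Below is a hedged sketch (our own code; loss_fn stands in for evaluating a trained network on an online subset):

```python
import numpy as np

def linear_path_grid(theta1, theta2, theta3, loss_fn, alphas, betas):
    # Evaluate loss_fn over the plane
    #   theta = theta1 + a*(theta2 - theta1) + b*(theta3 - theta2),
    # as in the linear path analysis of Goodfellow et al. [23].
    grid = np.empty((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            theta = theta1 + a * (theta2 - theta1) + b * (theta3 - theta2)
            grid[i, j] = loss_fn(theta)
    return grid

# Toy quadratic "loss" whose minimum sits at the average of the three points,
# mimicking the convex-like surface of Figure 2.
t1, t2, t3 = np.array([0.0]), np.array([3.0]), np.array([6.0])
center = (t1 + t2 + t3) / 3.0
loss = lambda th: float(np.sum((th - center) ** 2))
grid = linear_path_grid(t1, t2, t3, loss,
                        alphas=[0.0, 2.0 / 3.0, 1.0],
                        betas=[0.0, 1.0 / 3.0, 1.0])
```

In this toy surface, the grid minimum lands at a = 2/3, b = 1/3, i.e. at (θ1 + θ2 + θ3)/3, which is the mean-IMM point highlighted in Figure 2.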
4.2 L2-transfer
L2-transfer is a variant of L2-regularization. L2-transfer can be interpreted as a special case of EWC where the prior distribution is Gaussian with a scaled identity matrix as its covariance. In L2-transfer, a regularization term on the distance between θk−1 and θk is added to the following objective function for finding θk, where λ is a hyperparameter:

log p(yk | Xk, θk) − λ · ||θk − θk−1||²    (10)

The concept of L2-transfer is commonly used in transfer learning [25, 26] and continual learning [7, 8] with large λ. Unlike the previous usage of large λ, we use small λ for the IMM procedure. In other words, θk is first trained by Equation 10 with small λ, and then merged into μ1:k in our IMM, since we want to make the loss surface between θk−1 and θk smooth, not to minimize the distance between θk−1 and θk. In convex optimization, the L2-regularizer makes a convex function strictly convex. Similarly, we hope that L2-transfer with small λ helps to find a θk with a convex-like loss space between θk−1 and θk.
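The effect of the size of λ in Equation 10 can be checked on a one-dimensional toy problem. In the sketch below (our own; a quadratic stand-in for the negative log-likelihood, not the paper's actual training objective), a small λ barely moves the new solution, while a large λ pins it to θk−1:

```python
import numpy as np

def l2_transfer_objective(theta, theta_prev, nll, lam):
    # Negation of Eq. (10), written as a loss to minimize: task negative
    # log-likelihood plus a pull toward the previous task's parameters.
    return nll(theta) + lam * np.sum((theta - theta_prev) ** 2)

# Toy 1-D case: nll is minimized at theta = 1, the previous solution is 0.
# The minimizer of (theta - 1)^2 + lam * theta^2 is 1 / (1 + lam).
theta_prev = 0.0
nll = lambda th: (th - 1.0) ** 2
small_lam_solution = 1.0 / (1.0 + 0.01)   # close to the new task optimum
large_lam_solution = 1.0 / (1.0 + 100.0)  # pinned near theta_prev
```

This is the intended regime for IMM: a small λ keeps θk near the new task's optimum while smoothing the path back to θk−1.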
4.3 Drop-transfer
Drop-transfer is a novel method devised in this paper. Drop-transfer is a variant of dropout where θk−1 is the zero point of the dropout procedure. In the training phase, the following θ̃k,i is used for the weight vector corresponding to the ith node θk,i:

θ̃k,i = θk−1,i,                                      if the ith node is turned off
θ̃k,i = (1/(1−p)) · θk,i − (p/(1−p)) · θk−1,i,       otherwise    (11)

where p is the dropout ratio. Notice that the expectation of θ̃k,i is θk,i.
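A stochastic sketch of Equation 11 (elementwise rather than per-node for brevity; all names are our own) confirms that the perturbed parameter is unbiased, i.e. its expectation is θk:

```python
import numpy as np

def drop_transfer_sample(theta_k, theta_prev, p, rng):
    # One draw of Eq. (11): a dropped coordinate falls back to the old
    # parameter theta_prev instead of zero; surviving coordinates are
    # rescaled so that the expectation equals theta_k.
    off = rng.random(theta_k.shape) < p
    survived = theta_k / (1.0 - p) - (p / (1.0 - p)) * theta_prev
    return np.where(off, theta_prev, survived)

rng = np.random.default_rng(0)
theta_prev = np.zeros(4)
theta_k = np.array([1.0, -2.0, 0.5, 3.0])
draws = np.stack([drop_transfer_sample(theta_k, theta_prev, 0.5, rng)
                  for _ in range(20000)])
empirical_mean = draws.mean(axis=0)  # approaches theta_k
```

With θk−1 = 0 this reduces to ordinary inverted dropout; the only change drop-transfer makes is the nonzero fallback point.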
Table 1: The averaged accuracies on the disjoint MNIST for two sequential tasks (Top) and the
shuffled MNIST for three sequential tasks (Bottom). The untuned setting refers to the most natural
hyperparameter in the equation of each algorithm, whereas the tuned setting refers to using heuristic
hand-tuned hyperparameters. Hyperparam denotes the main hyperparameter of each algorithm. For
IMM with transfer, only α is tuned. The numbers in the parentheses refer to standard deviation.
Every IMM uses weight-transfer.
Disjoint MNIST Experiment

Method | Explanation of Hyperparam | Untuned hyperparam | Untuned accuracy | Tuned hyperparam | Tuned accuracy
SGD [3] | epoch per dataset | 10 | 47.72 (± 0.11) | 0.05 | 71.32 (± 1.54)
L2-transfer [25] | λ in (10) | – | – | 0.05 | 85.81 (± 0.52)
Drop-transfer | p in (11) | 0.5 | 51.72 (± 0.79) | 0.5 | 51.72 (± 0.79)
EWC [8] | λ in (20) | 1.0 | 47.84 (± 0.04) | 600M | 52.72 (± 1.36)
Mean-IMM | α2 in (4) | 0.50 | 90.45 (± 2.24) | 0.55 | 91.92 (± 0.98)
Mode-IMM | α2 in (7) | 0.50 | 91.49 (± 0.98) | 0.45 | 92.02 (± 0.73)
L2-transfer + Mean-IMM | λ / α2 | 0.001 / 0.50 | 78.34 (± 1.82) | 0.001 / 0.60 | 92.62 (± 0.95)
L2-transfer + Mode-IMM | λ / α2 | 0.001 / 0.50 | 92.52 (± 0.41) | 0.001 / 0.45 | 92.73 (± 0.35)
Drop-transfer + Mean-IMM | p / α2 | 0.5 / 0.50 | 80.75 (± 1.28) | 0.5 / 0.60 | 92.64 (± 0.60)
Drop-transfer + Mode-IMM | p / α2 | 0.5 / 0.50 | 93.35 (± 0.49) | 0.5 / 0.50 | 93.35 (± 0.49)
L2, Drop-transfer + Mean-IMM | λ / p / α2 | 0.001 / 0.5 / 0.50 | 66.10 (± 3.19) | 0.001 / 0.5 / 0.75 | 93.97 (± 0.23)
L2, Drop-transfer + Mode-IMM | λ / p / α2 | 0.001 / 0.5 / 0.50 | 93.97 (± 0.32) | 0.001 / 0.5 / 0.45 | 94.12 (± 0.27)
Shuffled MNIST Experiment

Method | Explanation of Hyperparam | Untuned hyperparam | Untuned accuracy | Tuned hyperparam | Tuned accuracy
SGD [3] | epoch per dataset | 60 | 89.15 (± 2.34) | – | ≈95.5 [8]
L2-transfer [25] | λ in (10) | – | – | 1e-3 | 96.37 (± 0.62)
Drop-transfer | p in (11) | 0.5 | 94.75 (± 0.62) | 0.2 | 96.86 (± 0.21)
EWC [8] | λ in (20) | – | – | – | ≈98.2 [8]
Mean-IMM | α3 in (4) | 0.33 | 93.23 (± 1.37) | 0.55 | 95.02 (± 0.42)
Mode-IMM | α3 in (7) | 0.33 | 98.02 (± 0.05) | 0.60 | 98.08 (± 0.08)
L2-transfer + Mean-IMM | λ / α3 | 1e-4 / 0.33 | 90.38 (± 1.74) | 1e-4 / 0.65 | 95.93 (± 0.31)
L2-transfer + Mode-IMM | λ / α3 | 1e-4 / 0.33 | 98.16 (± 0.08) | 1e-4 / 0.60 | 98.30 (± 0.08)
Drop-transfer + Mean-IMM | p / α3 | 0.5 / 0.33 | 90.79 (± 1.30) | 0.5 / 0.65 | 96.49 (± 0.44)
Drop-transfer + Mode-IMM | p / α3 | 0.5 / 0.33 | 97.80 (± 0.07) | 0.5 / 0.55 | 97.95 (± 0.08)
L2, Drop-transfer + Mean-IMM | λ / p / α3 | 1e-4 / 0.5 / 0.33 | 89.51 (± 2.85) | 1e-4 / 0.5 / 0.90 | 97.36 (± 0.19)
L2, Drop-transfer + Mode-IMM | λ / p / α3 | 1e-4 / 0.5 / 0.33 | 97.83 (± 0.10) | 1e-4 / 0.5 / 0.50 | 97.92 (± 0.05)
There are studies [27, 20] that have interpreted dropout as an exponential ensemble of weak learners. In this perspective, since the marginalization of the output distribution over all the weak learners is intractable, the parameters multiplied by the inverse of the dropout rate are used in the test phase. In other words, the parameters of the weak learners are, in effect, simply averaged over the sampled learners by dropout. In the drop-transfer process in our continual learning setting, we hypothesize that the dropout process makes the averaged point of two arbitrarily sampled points using Equation 11 a good estimator.
We investigated the search space of the loss function of an MLP trained on the MNIST handwritten digit recognition dataset, with and without dropout regularization, to supplement the evidence for the described hypothesis. Dropout regularization raises the accuracy of a sampled point from the dropout distribution from 0.450 (± 0.084) to 0.950 (± 0.009), and the accuracy of the average point of two sampled parameters from 0.757 (± 0.065) to 0.974 (± 0.003). For the case of both with and without dropout, the space between two arbitrary samples is empirically convex, and fits the second-order
equation. Based on this experiment, we expect not only that the search space of the loss function
between modern neural networks can be easily nearly convex [23], but also that regularizers, such
as dropout, make the search space smooth and the point in the search space have a good accuracy in
continual learning.
5 Experimental Results
We evaluate our approach on four experiments, whose settings are intensively used in the previous
works [4, 8, 7, 12]. For more details and experimental results, see Appendix D.
Disjoint MNIST Experiment. The first experiment is the disjoint MNIST experiment [4]. In this
experiment, the MNIST dataset is divided into two datasets: the first dataset consists of only digits
{0, 1, 2, 3, 4} and the second dataset consists of the remaining digits {5, 6, 7, 8, 9}. Our task is 10-class joint categorization, unlike the setting in the previous work, which considers two independent tasks of 5-class categorization. Because the inference should decide whether a new instance comes from the first or the second task, our task is more difficult than the task of the previous work.
[Figure 3 plots: three panels titled "The disjoint MNIST experiment", "The shuffled MNIST experiment", and "The ImageNet2CUB experiment", each showing the test accuracy of the first and second task under mean-IMM and mode-IMM as a function of alpha, the weight for combining the two networks.]
Figure 3: Test accuracies of two IMM models with weight-transfer on the disjoint MNIST (Left), the shuffled MNIST (Middle), and the ImageNet2CUB experiment (Right). α is a hyperparameter that balances the information between the old and the new task.
[Figure 4 plots: two panels titled "The disjoint MNIST experiment", showing test accuracy as a function of alpha for Mean-IMM and Mode-IMM and their combinations with L2-transfer and drop-transfer.]
Figure 4: Test accuracies of IMM with various transfer techniques on the disjoint MNIST. Both L2-transfer and drop-transfer boost the performance of IMM and make the optimal value of α larger than 1/2. However, drop-transfer tends to make the accuracy curve smoother than L2-transfer does.
We evaluate the models both on the untuned setting and the tuned setting. The untuned setting refers
to the most natural hyperparameter in the equation of each algorithm. The tuned setting refers to
using heuristic hand-tuned hyperparameters. Note that a tuned hyperparameter setting is often used in previous works on continual learning, as it is difficult to define a validation set in their setting.
For example, when the model needs to learn from the new task after learning from the old task, a low
learning rate or early stopping without a validation set, or arbitrary hyperparameter for balancing is
used [3, 8]. We discover hyperparameters in the tuned setting not only to find the oracle performance
of each algorithm, but also to show that there exist some paths consisting of the point that performs
reasonably for both tasks. Hyperparam in Table 1 denotes hyperparameter mainly searched in the
tuned setting. Table 1 (Top) and Figure 3 (Left) shows the experimental results from the disjoint
MNIST experiment.
In our experimental setting, the usual SGD-based optimizers always perform less than 50%, because
the biases of the output layer for the old task are always pushed to large negative values, which
implies that our task is difficult. Figure 4 also shows that mode-IMM is robust with respect to α and that the optimal α of mean-IMM is larger than 1/2 in the disjoint MNIST experiment.
Shuffled MNIST Experiment. The second experiment is the shuffled MNIST experiment [3, 8] of three sequential tasks. The first dataset is the same as the original MNIST dataset, while in the second and the third dataset, the input pixels of all images are shuffled with a fixed, random permutation, respectively. Therefore, the difficulty of the three datasets is the same, though a different solution is required for each dataset. In previous work, EWC reaches the performance level of the batch learner, and it is argued that EWC overcomes catastrophic forgetting in some domains. The experimental details are similar to the disjoint MNIST experiment, except that all models are allowed to use dropout regularization.
Table 2: Experimental results on the Lifelog dataset among different classes (location, sub-location,
and activity) and different subjects (A, B, C). Every IMM uses weight-transfer.
Method | Location | Sub-location | Activity | A | B | C
Dual memory architecture [12] | 78.11 | 72.36 | 52.92 | 67.02 | 58.80 | 77.57
Mean-IMM | 77.60 | 73.78 | 52.74 | 67.03 | 57.73 | 79.35
Mode-IMM | 77.14 | 75.76 | 54.07 | 67.97 | 60.12 | 78.89
Table 1 (Bottom) and Figure 3 (Middle) show the experimental results from the shuffled MNIST experiment. Notice that the accuracy of drop-transfer (p = 0.2) alone is 96.86 (± 0.21) and that of L2-transfer (λ = 1e-4) + drop-transfer (p = 0.4) alone is 97.61 (± 0.15). These results are competitive with EWC without dropout, whose performance is around 97.0.
ImageNet to CUB Dataset. The third experiment is the ImageNet2CUB experiment [7], the continual learning problem from the ImageNet dataset to the Caltech-UCSD Birds-200-2011 fine-grained classification (CUB) dataset [28]. The numbers of classes in the ImageNet and CUB datasets are around 1K and 200, and the numbers of training instances are 1M and 5K, respectively. In the ImageNet2CUB experiment, the last layer is separated for the ImageNet and the CUB task. The structure of AlexNet is used for the trained model of ImageNet [29]. In our experiment, we match the moments of the last-layer fine-tuning model and the LwF model, with mean-IMM and mode-IMM.
Figure 3 (Right) shows that mean-IMM moderately balances the performance of the two tasks between the two networks. However, the balanced hyperparameter of mode-IMM is far from α = 0.5. We think this is because the scale of the Fisher matrix F differs between the ImageNet and the CUB task. Since the amount of training data differs between the two tasks, the mean of the square of the gradient, which is the definition of F, tends to differ as well. This implies that the assumption of mode-IMM does not always hold for heterogeneous tasks. See Appendix D.3 for more information, including the learning methods of IMM where a different class output layer or a different scale of dataset is used.
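A tiny sketch can make the definition of F used above concrete. The per-example gradient values below are made up for illustration; a real estimate would average squared log-likelihood gradients over sampled data.

```python
import numpy as np

def diagonal_fisher(per_example_grads):
    """Empirical diagonal Fisher: the mean of the squared
    per-example gradients, matching the definition of F above."""
    g = np.asarray(per_example_grads, dtype=float)
    return (g ** 2).mean(axis=0)

# Two parameters, two examples; the gradient values are made up.
# Tasks whose gradients live on different scales yield F on
# different scales, which skews mode-IMM's balance.
grads = [[1.0, -2.0], [3.0, 0.0]]
F = diagonal_fisher(grads)  # array([5., 2.])
```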
Our results of IMM with LwF exceed the previous state-of-the-art performance, whose model is also LwF. This is because, in the previous works, the LwF model was initialized from the last-layer fine-tuning model, not directly from the original AlexNet. In that case, not only is the performance loss on the old task decreased, but the performance gain on the new task is decreased as well. The accuracies of our mean-IMM (α = 0.5) are 56.20 and 56.73 for the ImageNet task and the CUB task, respectively. The gains compared to the previous state-of-the-art are +1.13 and -1.14. In the case of mean-IMM (α = 0.8) and mode-IMM (α = 0.99), the accuracies are 55.08 and 59.08 (+0.01, +1.12), and 55.10 and 59.12 (+0.02, +1.35), respectively.
Lifelog Dataset. Lastly, we evaluate the proposed methods on the Lifelog dataset [12]. The Lifelog dataset consists of 660,000 instances of egocentric video stream data, collected over 46 days from three participants using Google Glass [30]. Three class categories (location, sub-location, and activity) are labeled on each frame of video. In the Lifelog dataset, the class distribution changes continuously and new classes appear as the days pass. Table 2 shows that mean-IMM and mode-IMM are competitive with the dual-memory architecture, the previous state-of-the-art ensemble model, even though IMM uses a single network.
6 Discussion
A Shift of Optimal Hyperparameter of IMM. In the tuned setting, there often exists some α that makes the performance of mean-IMM close to that of mode-IMM. However, in the untuned hyperparameter setting, mean-IMM performs worse when more transfer techniques are applied. Our Bayesian interpretation of IMM assumes that the SGD training of the k-th network θ_k is mainly affected by the k-th task and is rarely affected by information from the previous tasks. However, transfer techniques break this assumption; thus the optimal α is shifted to be larger than 1/k. Fortunately, mode-IMM works more robustly than mean-IMM when transfer techniques are applied. Figure 4 illustrates the change of the test accuracy curve corresponding to the applied transfer techniques and the resulting shift of the optimal α in mean-IMM and mode-IMM.
Bayesian Approach on Continual Learning. Kirkpatrick et al. [8] interpreted the Fisher matrix F as weight importance when explaining their EWC model. In the shuffled MNIST experiment, since a large number of pixels always have a value of zero, the corresponding elements of the Fisher matrix are also zero. Therefore, EWC works by allowing the weights that are not used in the previous tasks to change. On the other hand, mode-IMM works by selectively balancing between two weights using variance information. However, these assumptions on weight importance do not always hold, especially in the disjoint MNIST experiment. The most important weights in the disjoint MNIST experiment are the bias terms in the output layer. Nevertheless, these bias parts of the Fisher matrix are neither guaranteed to have the highest values, nor can they be used to balance the class distribution between the first and second task. We believe that using only the diagonal of the covariance matrix in Bayesian neural networks is too naïve in general, and that this is why EWC failed in the disjoint MNIST experiment. We think this could be alleviated in future work by using a more complex prior, such as a matrix Gaussian distribution that considers the correlations between nodes in the network [31].
Balancing the Information of an Old and a New Task. The IMM procedure produces a neural network without a performance loss for the k-th task θ_k, which is better than the final solution θ_{1:k} in terms of the performance on the k-th task. Furthermore, IMM can easily weigh the importance of tasks in IMM models in real time. For example, α_t can easily be changed in the solution of mean-IMM, θ_{1:k} = Σ_{t=1}^{k} α_t θ_t. In actual service situations of IT companies, the importance of the old and the new task frequently changes in real time, and IMM can handle this. This property differentiates IMM from the other continual-learning methods using the regularization approach, including LwF and EWC.
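The two merging rules can be sketched on toy two-parameter networks: mean-IMM as the weighted average θ_{1:k} = Σ_t α_t θ_t, and mode-IMM as the elementwise Fisher-weighted average. The Fisher diagonals below are made-up values rather than quantities estimated from data.

```python
import numpy as np

def mean_imm(thetas, alphas):
    # theta_{1:k} = sum_t alpha_t * theta_t, with the alphas summing to 1
    return sum(a * th for a, th in zip(alphas, thetas))

def mode_imm(thetas, fishers, alphas, eps=1e-8):
    # Elementwise Fisher-weighted average with diagonal Fisher matrices:
    # theta_{1:k} = (sum_t alpha_t F_t theta_t) / (sum_t alpha_t F_t)
    num = sum(a * f * th for a, f, th in zip(alphas, fishers, thetas))
    den = sum(a * f for a, f in zip(alphas, fishers))
    return num / (den + eps)

theta1, theta2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f1, f2 = np.array([10.0, 0.1]), np.array([0.1, 10.0])  # made-up Fisher diagonals

merged_mean = mean_imm([theta1, theta2], [0.5, 0.5])           # [0.5, 0.5]
merged_mode = mode_imm([theta1, theta2], [f1, f2], [0.5, 0.5])
# mode-IMM keeps each weight close to the task that deems it important
```

Re-weighting the tasks in real time, as described above, amounts to calling the same functions with a different alpha vector.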
7 Conclusion
Our contributions are four-fold. First, we applied mean-IMM to the continual learning of modern deep neural networks. Mean-IMM achieves results competitive with comparative models and balances the information between an old and a new network. We also interpreted the success of IMM through a Bayesian framework with a Gaussian posterior. Second, we extended mean-IMM to mode-IMM, with the interpretation of mode-finding in a mixture of Gaussian posteriors. Mode-IMM outperforms mean-IMM and comparative models on various datasets. Third, we introduced drop-transfer, a novel method proposed in this paper. Experimental results showed that drop-transfer alone performs well and is similar to EWC without dropout in the domain where EWC rarely forgets. Fourth, we applied various transfer techniques in the IMM procedure to make our Gaussian-distribution assumption reasonable. We argued not only that the search space of the loss function among neural networks can easily be nearly convex, but also that regularizers, such as dropout, make the search space smooth and give points in the search space good accuracy. Experimental results showed that applying transfer techniques often boosts the performance of IMM. Overall, we achieved state-of-the-art performance on various datasets for continual learning and explored geometrical properties and a Bayesian perspective of deep neural networks.
Acknowledgments
The authors would like to thank Jiseob Kim, Min-Oh Heo, Donghyun Kwak, Insu Jeon, Christina
Baek, and Heidi Tessmer for helpful comments and editing. This work was supported by the Naver
Corp. and partly by the Korean government (IITP-R0126-16-1072-SW.StarLab, KEIT-10044009-HRI.MESSI, KEIT-10060086-RISF). Byoung-Tak Zhang is the corresponding author.
References
[1] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.
[2] Robert M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.
[3] Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
[4] Rupesh K. Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. In Advances in Neural Information Processing Systems, pages 2310–2318, 2013.
[5] Zoubin Ghahramani. Online variational Bayesian learning. In NIPS Workshop on Online Learning, 2000.
[6] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[7] Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, pages 614–629. Springer, 2016.
[8] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
[9] David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.
[10] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1613–1622, 2015.
[11] Jacob Goldberger and Sam T. Roweis. Hierarchical clustering of a mixture model. In Advances in Neural Information Processing Systems, pages 505–512, 2005.
[12] Sang-Woo Lee, Chung-Yeon Lee, Dong Hyun Kwak, Jiwon Kim, Jeonghee Kim, and Byoung-Tak Zhang. Dual-memory deep learning architectures for lifelong learning of everyday human behaviors. In Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1669–1675, 2016.
[13] Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[14] Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. PathNet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.
[15] Zhen Huang, Jinyu Li, Sabato Marco Siniscalchi, I-Fan Chen, Chao Weng, and Chin-Hui Lee. Feature space maximum a posteriori linear regression for adaptation of deep neural networks. In Fifteenth Annual Conference of the International Speech Communication Association, 2014.
[16] Zhen Huang, Sabato Marco Siniscalchi, I-Fan Chen, Jinyu Li, Jiadong Wu, and Chin-Hui Lee. Maximum a posteriori adaptation of network parameters in deep models. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[17] Abdullah Rashwan, Han Zhao, and Pascal Poupart. Online and distributed Bayesian moment matching for parameter learning in sum-product networks. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1469–1477, 2016.
[18] Kai Zhang and James T. Kwok. Simplifying mixture models through function approximation. IEEE Transactions on Neural Networks, 21(4):644–658, 2010.
[19] Manas Pathak, Shantanu Rane, and Bhiksha Raj. Multiparty differential privacy via aggregation of locally trained classifiers. In Advances in Neural Information Processing Systems, pages 1876–1884, 2010.
[20] Pierre Baldi and Peter J. Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pages 2814–2822, 2013.
[21] Surajit Ray and Bruce G. Lindsay. The topography of multivariate normal mixtures. Annals of Statistics, pages 2042–2065, 2005.
[22] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
[23] Ian J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.
[24] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014.
[25] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117. ACM, 2004.
[26] Wolf Kienzle and Kumar Chellapilla. Personalized handwriting recognition via biased regularization. In Proceedings of the 23rd International Conference on Machine Learning, pages 457–464. ACM, 2006.
[27] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[28] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. Tech. Rep. CNS-TR-2011-001, 2011.
[29] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[30] Sang-Woo Lee, Chung-Yeon Lee, Dong-Hyun Kwak, Jung-Woo Ha, Jeonghee Kim, and Byoung-Tak Zhang. Dual-memory neural networks for modeling cognitive activities of humans via wearable sensors. Neural Networks, 2017.
[31] Christos Louizos and Max Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. arXiv preprint arXiv:1603.04733, 2016.
[32] Surajit Ray and Dan Ren. On the upper bound of the number of modes of a multivariate normal mixture. Journal of Multivariate Analysis, 108:41–52, 2012.
[33] Carlos Améndola, Alexander Engström, and Christian Haase. Maximum number of modes of Gaussian mixtures. arXiv preprint arXiv:1702.05066, 2017.
[34] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
Balancing information exposure in social networks
Kiran Garimella
Aalto University & HIIT
Helsinki, Finland
[email protected]
Aristides Gionis
Aalto University & HIIT
Helsinki, Finland
[email protected]
Nikos Parotsidis
University of Rome Tor Vergata
Rome, Italy
[email protected]
Nikolaj Tatti
Aalto University & HIIT
Helsinki, Finland
[email protected]
Abstract
Social media has brought a revolution on how people are consuming news. Beyond the undoubtedly large number of advantages brought by social-media platforms, a point of criticism has been the creation of echo chambers and filter bubbles, caused by social homophily and algorithmic personalization.
In this paper we address the problem of balancing the information exposure in
a social network. We assume that two opposing campaigns (or viewpoints) are
present in the network, and that network nodes have different preferences towards
these campaigns. Our goal is to find two sets of nodes to employ in the respective campaigns, so that the overall information exposure for the two campaigns
is balanced. We formally define the problem, characterize its hardness, develop
approximation algorithms, and present experimental evaluation results.
Our model is inspired by the literature on influence maximization, but there are
significant differences from the standard model. First, balance of information exposure is modeled by a symmetric difference function, which is neither monotone
nor submodular, and thus, not amenable to existing approaches. Second, while
previous papers consider a setting with selfish agents and provide bounds on best-response strategies (i.e., move of the last player), we consider a setting with a
centralized agent and provide bounds for a global objective function.
1 Introduction
Social-media platforms have revolutionized many aspects of human culture, among others, the way
people are exposed to information. A recent survey estimates that 62% of adults in the US get
their news on social media [15]. Despite providing many desirable features, such as, searching,
personalization, and recommendations, one point of criticism is that social media amplify echo
chambers and filter bubbles: users get less exposure to conflicting viewpoints and are isolated in their
own informational bubble. This phenomenon is contributed to social homophily and algorithmic
personalization, and is more acute for controversial topics [9, 12, 14].
In this paper we address the problem of reducing the filter-bubble effect by balancing information
exposure among users. We consider social-media discussions around a topic that are characterized
by two or more conflicting viewpoints. Let us refer to these viewpoints as campaigns. Our approach
follows the popular paradigm of influence propagation [18]: we want to select a small number
of seed users for each campaign so as to maximize the number of users who are exposed to both
campaigns. In contrast to existing work on competitive viral marketing, we do not consider the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
problem of finding an optimal selfish strategy for each campaign separately. Instead we consider a
centralized agent responsible for balancing information exposure for the two campaigns. Consider the following motivating examples.
Example 1: Social-media companies have been called to act as arbiters so as to prevent ideological
isolation and polarization in the society. The motivation for companies to assume this role could
be for improving their public image or due to legislation.1 Consider a controversial topic being
discussed in social-media platform X, which has led to polarization and filter bubbles. As part of
a new filter-bubble bursting service, platform X would like to disseminate two high-quality and
thought-provoking dueling op-ed articles, one for each side, which present the arguments of the
other side in a fair manner. Assume that X is interested in following a viral-marketing approach.
Which users should X target, for each of the two articles, so that people in the network are informed
in the most balanced way?
Example 2: Government organization Y is initiating a program to help assimilate foreigners who
have newly arrived in the country. Part of the initiative focuses on bringing the communities of
foreigners and locals closer in social media. Organization Y is interested in identifying individuals
who can help spread news of one community into the other.
From the technical standpoint, we consider the following problem setting: We assume that information is propagated in the network according to the independent-cascade model [18]. We assume
that there are two opposing campaigns, and for each one there is a set of initial seed nodes, I1 and
I2 , which are not necessarily distinct. Furthermore, we assume that the users in the network are
exposed to information about campaign i via diffusion from the set of seed nodes Ii . The diffusion
in the network occurs according to some information-propagation model.
The objective is to recruit two additional sets of seed nodes, S1 and S2 , for the two campaigns, with
|S1| + |S2| ≤ k, for a given budget k, so as to maximize the expected number of balanced users,
i.e., the users who are exposed to information from both campaigns (or from none).
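For a single random realization of the two cascades, this objective reduces to a symmetric-difference computation over the reached sets; in practice the expectation would be estimated by averaging such realizations. A sketch with hypothetical node sets:

```python
def balanced_users(all_nodes, reached1, reached2):
    """Users exposed to both campaigns or to neither: |V| minus the
    size of the symmetric difference of the two reached sets."""
    return len(all_nodes) - len(reached1 ^ reached2)

V = {1, 2, 3, 4, 5}
r1 = {1, 2, 3}  # reached by campaign 1 in one realization
r2 = {3, 4}     # reached by campaign 2 in the same realization
score = balanced_users(V, r1, r2)  # node 3 (both) and node 5 (neither) -> 2
```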
We show that the problem of balancing the information exposure is NP-hard. We develop different
approximation algorithms for the different settings we consider, as well as heuristic variants of the
proposed algorithm. We experimentally evaluate our methods on several real-world datasets.
Although our approach is inspired by the large body of work on information propagation, and resembles previous problem formulations for competitive viral marketing, there are significant differences.
In particular:
- This is the first paper to address the problem of balancing information exposure and breaking filter bubbles, using the information-propagation methodology.
- The objective function that best suits our problem setting is related to the size of the symmetric difference of users exposed to the two campaigns. This is in contrast to previous settings that consider functions related to the size of the coverage of the campaigns.
- As a technical consequence of the previous point, our objective function is neither monotone nor submodular, making our problem more challenging. Yet we are able to analyze the problem structure and provide algorithms with approximation guarantees.
- While most previous papers consider selfish agents, and provide bounds on best-response strategies (i.e., move of the last player), we consider a centralized setting and provide bounds for a global objective function.
Omitted proofs, figures, and tables are provided as supplementary material. Moreover, our datasets
and implementations are publicly available.2
2 Related Work
Detecting and breaking filter bubbles. Several studies have observed that users in online social
networks prefer to associate with like-minded individuals and consume agreeable content. This
phenomenon leads to filter bubbles, echo chambers [25], and to online polarization [1, 9, 12, 22].
1 For instance, Germany is now fining Facebook for the spread of fake news.
2 https://users.ics.aalto.fi/kiran/BalanceExposure/
Once these filter bubbles are detected, the next step is to try to overcome them. One way to achieve
this is by making recommendations to individuals of opposing viewpoints. This idea has been
explored, in different ways, by a number of studies in the literature [13, 19]. However, previous
studies address the problem of breaking filter bubbles by the means of content recommendation. To
the best of our knowledge, this is the first paper that considers an information diffusion approach.
Information diffusion. Following a large body of work, we model diffusion using the independent-cascade model [18]. In the basic model a single item propagates in the network. An extension is
when multiple items propagate simultaneously. All works that study optimization problems in the
case of multiple items, consider that items compete for being adopted by users. In other words, every
user adopts at most one of the existing items and participates in at most one cascade.
Myers and Leskovec [23] argue that spreading processes may either cooperate or compete. Competing contagions decrease each other's probability of diffusion, while cooperating ones help each
other in being adopted. They propose a model that quantifies how different spreading cascades interact with each other. Carnes et al. [7] propose two models for competitive diffusion. Subsequently,
several other models have been proposed [4, 10, 11, 17, 21, 27, 28].
Most of the work on competitive information diffusion considers the problem of selecting the best
k seeds for one campaign, for a given objective, in the presence of competing campaigns [3, 6].
Bharathi et al. [3] show that, if all campaigns but one have fixed sets of seeds, the problem for
selecting the seeds for the last player is submodular, and thus, obtain an approximation algorithm
for the strategy of the last player. Game theoretic aspects of competitive cascades in social networks, including the investigation of conditions for the existence of Nash equilibrium, have also
been studied [2, 16, 26].
The work that is most related to ours, in the sense of considering a centralized authority, is the
one by Borodin et al. [5]. They study the problem where multiple campaigns wish to maximize
their influence by selecting a set of seeds with bounded cardinality. They propose a centralized
mechanism to allocate sets of seeds (possibly overlapping) to the campaigns so as to maximize the
social welfare, defined as the sum of the individuals' selfish objective functions. One can choose any objective function as long as it is submodular and non-decreasing. Under this assumption
they provide strategyproof (truthful) algorithms that offer guarantees on the social welfare. Their
framework applies for several competitive influence models. In our case, the number of balanced
users is not submodular, and so we do not have any approximation guarantees. Nevertheless, we can
use this framework as a heuristic baseline, which we do in the experimental section.
3 Problem Definition
Preliminaries: We start with a directed graph G = (V, E, p1, p2) representing a social network. We assume that there are two distinct campaigns that propagate through the network. Each edge e = (u, v) ∈ E is assigned two probabilities, p1(e) and p2(e), representing the probability that a post from vertex u will propagate (e.g., it will be reposted) to vertex v in the respective campaigns.
Cascade model: We assume that information on the two campaigns propagates in the network
following the independent-cascade model [18]. For instance, consider the first campaign (the process
for the second campaign is analogous): we assume that there exists a set of seeds I1 from which the
process begins. Propagation proceeds in rounds. At each round, there exists a set of active vertices
A1 (initially, A1 = I1), where each vertex u ∈ A1 attempts to activate each vertex v ∉ A1 such that (u, v) ∈ E, with probability p1(u, v). If the propagation attempt from a vertex u to a vertex v
is successful, we say that v propagates the first campaign. At the end of each round, A1 is set to be
the set of vertices that propagated the campaign during the current round.
Given a seed set S, we write r1(S) and r2(S) for the vertices that are reached from S using the aforementioned cascade process, for the respective campaign. Note that since this process is random, both r1(S) and r2(S) are random variables. Computing the expected number of active vertices is a #P-hard problem [8]; however, we can approximate it within an arbitrarily small factor ε, with high probability, via Monte-Carlo simulations. Due to this obstacle, all approximation algorithms that evaluate an objective function over diffusion processes reduce their approximation guarantee by an additive ε. Throughout this work we avoid repeating this fact for the sake of simplicity of the notation.
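As a concrete illustration, one run of the independent-cascade process and the Monte-Carlo estimate of the expected reach E[|r(S)|] can be sketched as follows (a minimal sketch; the graph encoding and function names are our own, not from the paper):

```python
import random

def simulate_cascade(succ, p, seeds, rng):
    """One random run of the independent-cascade process: each newly
    activated vertex u gets a single chance to activate every out-neighbour
    v, succeeding independently with probability p[(u, v)]."""
    reached = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in succ.get(u, ()):
                if v not in reached and rng.random() < p[(u, v)]:
                    reached.add(v)
                    nxt.append(v)
        frontier = nxt
    return reached  # one sample of r(S)

def expected_reach(succ, p, seeds, runs=1000, seed=0):
    """Monte-Carlo estimate of E[|r(S)|] (exact computation is #P-hard)."""
    rng = random.Random(seed)
    return sum(len(simulate_cascade(succ, p, seeds, rng))
               for _ in range(runs)) / runs

# Path a -> b -> c with sure edges: the cascade always reaches all 3 vertices.
succ = {"a": ["b"], "b": ["c"]}
p = {("a", "b"): 1.0, ("b", "c"): 1.0}
print(expected_reach(succ, p, {"a"}, runs=10))  # -> 3.0
```

With probabilities strictly between 0 and 1, increasing `runs` tightens the additive error ε mentioned above.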
Heterogeneous vs. correlated propagations: We also need to specify how the propagation processes of the two campaigns interact with each other. We consider two settings: In the first setting, we assume
that the campaign messages propagate independently of each other. Given an edge e = (u, v), the
vertex v is activated on the first campaign with probability p1 (e), given that vertex u is activated on
the first campaign. Similarly, v is activated on the second campaign with probability p2 (e), given
that u is activated on the second campaign. We refer to this setting as heterogeneous.³ In the second
setting we assume that p1 (e) = p2 (e), for each edge e. We further assume that the coin flips for
the propagation of the two campaigns are totally correlated. Namely, consider an edge e = (u, v),
where u is reached by either or both campaigns. Then, with probability p1(e), any campaign that has reached u will also reach v. We refer to this second setting as correlated.
Note that in both settings, a vertex may be activated by none, either, or both campaigns. This is in
contrast to most existing work in competitive viral marketing, where it is assumed that a vertex can
be activated by at most one campaign. The intuition is that in our setting activation means merely
passing a message or posting an article, and it does not imply full commitment to the campaign. We
also note that the heterogeneous setting is more realistic than the correlated one; however, we also study the correlated model as it is mathematically simpler.
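One way to realize the two settings in a simulation is the standard live-edge view of the independent-cascade model: flip every edge's coin up front and then compute reachability over the "live" edges. In the correlated setting both campaigns share one flip per edge; in the heterogeneous setting each campaign flips its own coins. (A sketch with our own helper names; the equivalence of the live-edge view with the round-based process is a standard property of the IC model.)

```python
import random

def _reach(succ, live, seeds):
    """Vertices reachable from the seeds over the edges whose coin came up live."""
    reached, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in succ.get(u, ()):
            if live[(u, v)] and v not in reached:
                reached.add(v)
                stack.append(v)
    return reached

def simulate_both(succ, p1, p2, I1, I2, correlated, rng):
    """One joint realization of r1(I1) and r2(I2) under either setting."""
    if correlated:  # requires p1 == p2: a single shared flip per edge
        live = {e: rng.random() < p1[e] for e in p1}
        return _reach(succ, live, I1), _reach(succ, live, I2)
    live1 = {e: rng.random() < p1[e] for e in p1}
    live2 = {e: rng.random() < p2[e] for e in p2}
    return _reach(succ, live1, I1), _reach(succ, live2, I2)

succ = {"a": ["b"], "c": ["b"]}
p = {("a", "b"): 1.0, ("c", "b"): 1.0}
r1, r2 = simulate_both(succ, p, p, {"a"}, {"c"}, True, random.Random(0))
print(r1, r2)  # vertex b is reached by both campaigns
```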
Problem definition: We are now ready to state our problem for balancing information exposure
(BALANCE). Given a directed graph, initial seed sets for both campaigns and a budget, we ask to
find additional seeds that would balance the vertices. More formally:
Problem 3.1 (BALANCE). Let G = (V, E, p1, p2) be a directed graph, and let I1 and I2 be two sets of initial seeds for the two campaigns. Assume that we are given a budget k. Find two sets S1 and S2, where |S1| + |S2| ≤ k, maximizing
Φ(S1, S2) = E[|V \ (r1(I1 ∪ S1) △ r2(I2 ∪ S2))|].
The objective function Φ(S1, S2) is the expected number of vertices that are either reached by both campaigns or remain oblivious to both campaigns. Problem 3.1 is defined for both settings, heterogeneous and correlated. When we need to make explicit the underlying setting, we refer to the respective problems as BALANCE-H and BALANCE-C. When referring to BALANCE-H, we denote the objective by ΦH. Similarly, when referring to BALANCE-C, we write ΦC. We drop the indices when we are referring to both models simultaneously.
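For any single realization of the two cascades, the quantity inside the expectation is simply the number of vertices outside the symmetric difference of the two reached sets; averaging it over Monte-Carlo runs estimates the objective. A minimal sketch (names are ours):

```python
def balanced_count(V, r1, r2):
    """Vertices reached by both campaigns or by neither, i.e. |V \ (r1 symdiff r2)|,
    for one realization of the two cascade processes."""
    return len(set(V) - (set(r1) ^ set(r2)))

V = {"a", "b", "c", "d"}
# "a" is reached by both, "b" only by campaign 1, "c"/"d" by neither.
print(balanced_count(V, {"a", "b"}, {"a"}))  # -> 3 (all vertices but "b")
```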
Computational complexity: As expected, the optimization problem BALANCE turns out to be
NP-hard for both settings, heterogeneous and correlated. A straightforward way to prove it is by
setting I2 = V , so the problems reduce to standard influence maximization. However, we provide
a stronger result. Note that instead of maximizing balanced vertices we can equivalently minimize
the imbalanced vertices. However, this turns out to be a more difficult problem.
Proposition 1. Assume a graph G = (V, E, p1, p2) with two sets I1 and I2 and a budget k. It is an NP-hard problem to decide whether there are sets S1 and S2 such that |S1| + |S2| ≤ k and E[|r1(I1 ∪ S1) △ r2(I2 ∪ S2)|] = 0.
This result holds for both models, even when p1 = p2 = 1. This result implies that the minimization
version of the problem is NP-hard, and there is no algorithm with multiplicative approximation
guarantee. It also implies that BALANCE -H and BALANCE -C are also NP-hard. However, we will
see later that we can obtain approximation guarantees for these maximization problems.
4 Greedy algorithms yielding approximation guarantees
In this section we propose three greedy algorithms. The first algorithm yields an approximation guarantee of (1 − 1/e)/2 for both models. The remaining two algorithms yield a guarantee for the
correlated model only.
Decomposing the objective: Recall that the objective function of the BALANCE problem is Φ(S1, S2). In order to show that this function admits an approximation guarantee, we decompose it into two components. To do that, assume that we are given initial seeds I1 and I2, and let us write
³ Although independent is probably a better term than heterogeneous, we adopt the latter to avoid any confusion with the independent-cascade model.
X = r1(I1) ∪ r2(I2), Y = V \ X. Here X are the vertices reached by some initial seed of the two campaigns and Y are the vertices that are not reached at all. Note that X and Y are random variables. Since X and Y partition V, we can decompose the score Φ(S1, S2) as
Φ(S1, S2) = Ψ(S1, S2) + Ω(S1, S2), where
Ψ(S1, S2) = E[|X \ (r1(I1 ∪ S1) △ r2(I2 ∪ S2))|],
Ω(S1, S2) = E[|Y \ (r1(I1 ∪ S1) △ r2(I2 ∪ S2))|].
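Since X and Y partition V on every realization, the two components split the balanced vertices exactly; this can be checked numerically on a toy realization (function and variable names are our own):

```python
def component_split(V, X, r1, r2):
    """Split the balanced vertices of one realization between X (reached by
    some initial seed) and Y = V \ X; the two parts sum to the total score."""
    balanced = set(V) - (set(r1) ^ set(r2))
    return len(balanced & X), len(balanced - X)

V = {"a", "b", "c", "d"}
X = {"a", "b"}                      # vertices reached by some initial seed
part_x, part_y = component_split(V, X, {"a", "b"}, {"a"})
print(part_x, part_y, part_x + part_y)  # 1 2 3: the parts sum to the total
```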
We first show that Ψ(S1, S2) is monotone and submodular. It is well known that for maximizing a function with these two properties under a size constraint, the greedy algorithm computes a (1 − 1/e)-approximate solution [24].
Lemma 2. Ψ(S1, S2) is monotone and submodular.
We are ready to discuss our algorithms.
Algorithm 0: ignore Ω. Our first algorithm is very simple: instead of maximizing Φ, we maximize Ψ, i.e., we ignore any vertices that are made imbalanced during the process. Since Ψ is submodular and monotone we can use the greedy algorithm. If we then compare the obtained result with the empty solution, we get the promised approximation guarantee. We refer to this algorithm as Cover.
Proposition 3. Let ⟨S1*, S2*⟩ be the optimal solution maximizing Φ. Let ⟨S1, S2⟩ be the solution obtained via the greedy algorithm maximizing Ψ. Then
max{Φ(S1, S2), Φ(∅, ∅)} ≥ ((1 − 1/e)/2) · Φ(S1*, S2*).
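Cover is then the plain greedy procedure on the monotone submodular component of the objective: at each step, add the single vertex, to either seed set, with the largest marginal gain. A generic sketch with the objective passed in as a black box (names are ours, not from the paper):

```python
def greedy_two_sets(V, k, score):
    """Greedy maximization of a monotone submodular two-set objective
    score(S1, S2) under the budget |S1| + |S2| <= k."""
    S1, S2 = set(), set()
    while len(S1) + len(S2) < k:
        options = [(score(S1 | {v}, S2), S1 | {v}, S2) for v in set(V) - S1]
        options += [(score(S1, S2 | {v}), S1, S2 | {v}) for v in set(V) - S2]
        _, S1, S2 = max(options, key=lambda t: t[0])
    return S1, S2

# Toy objective: a vertex counts twice when it lands in S2, so greedy
# should place the whole budget on the S2 side.
score = lambda S1, S2: len(S1) + 2 * len(S2)
S1, S2 = greedy_two_sets({"u", "v", "w"}, 2, score)
print(S1, len(S2))  # set() 2
```

Evaluating `score` exactly is of course not possible here; in practice each call is itself a Monte-Carlo estimate, which is where the additive ε enters.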
Algorithm 1: force common seeds. Ignoring the Ω term may prove costly, as it is possible to introduce a lot of new imbalanced vertices. The idea behind the second algorithm is to introduce no new imbalanced vertices at all. We do this by either adding the same seeds to both campaigns, or adding a seed that is covered by an opposing campaign. This algorithm has guarantees only in the correlated setting with an even budget k, but in practice we can use the algorithm also for the heterogeneous setting. We refer to this algorithm as Common and the pseudo-code is given in Algorithm 1.
Algorithm 1: Common, greedy algorithm that only adds common seeds
1: S1 ← S2 ← ∅
2: while |S1| + |S2| < k do
3:   c ← arg max_c Φ(S1 ∪ {c}, S2 ∪ {c})
4:   s1 ← arg max_{s ∈ I1} Φ(S1, S2 ∪ {s})
5:   s2 ← arg max_{s ∈ I2} Φ(S1 ∪ {s}, S2)
6:   add the best option among ⟨c, c⟩, ⟨∅, s1⟩, ⟨s2, ∅⟩ to ⟨S1, S2⟩, while respecting the budget
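A direct Python transcription of this procedure, with the objective supplied as a black-box function, might look as follows (a sketch under our own naming; ties are broken arbitrarily, and candidates that make no progress are filtered out so the loop terminates):

```python
def common(V, I1, I2, k, phi):
    """Greedy that only adds common seeds, or seeds already covered by the
    opposing campaign's initial set (Algorithm 1 of the text)."""
    S1, S2 = set(), set()
    while len(S1) + len(S2) < k:
        options = []
        cands = set(V) - (S1 & S2)
        if cands and len(S1) + len(S2) + 2 <= k:   # room for the pair <c, c>
            c = max(cands, key=lambda v: phi(S1 | {v}, S2 | {v}))
            options.append((phi(S1 | {c}, S2 | {c}), S1 | {c}, S2 | {c}))
        for s in set(I1) - S2:                     # <(), s1>: I1 seed into S2
            options.append((phi(S1, S2 | {s}), S1, S2 | {s}))
        for s in set(I2) - S1:                     # <s2, ()>: I2 seed into S1
            options.append((phi(S1 | {s}, S2), S1 | {s}, S2))
        if not options:
            break
        _, S1, S2 = max(options, key=lambda t: t[0])
    return S1, S2

# Toy objective that rewards overlap: the common pair beats two single seeds.
phi = lambda A, B: 10 * len(A & B) + len(A) + len(B)
S1, S2 = common({"x", "y"}, {"a"}, {"b"}, 2, phi)
print(S1 == S2, len(S1))  # True 1
```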
We first show in the following lemma that adding common seeds may halve the score, in the worst case. Then, we use this lemma to prove the approximation guarantee.
Lemma 4. Let ⟨S1, S2⟩ be a solution to BALANCE-C, with an even budget k. There exists a solution ⟨S1′, S2′⟩ with S1′ = S2′ such that ΦC(S1′, S2′) ≥ ΦC(S1, S2)/2.
It is easy to see that the greedy algorithm satisfies the conditions of the following proposition.
Proposition 5. Assume an iterative algorithm where at each iteration we add one or two vertices to our solution until our constraints are met. Let S1^i, S2^i be the sets after the i-th iteration, with S1^0 = S2^0 = ∅. Let Φi = ΦC(S1^i, S2^i) be the score after the i-th iteration. Assume that Φi ≥ Φi−1. Assume further that for i = 1, ..., k/2 it holds that Φi ≥ max_c ΦC(S1^{i−1} ∪ {c}, S2^{i−1} ∪ {c}). Then the algorithm yields a (1 − 1/e)/2 approximation.
Algorithm 2: common seeds as baseline. Not allowing new imbalanced vertices may prove to be too restrictive. We can relax this condition by allowing new imbalanced vertices as long as the gain is at least as good as adding a common seed. We refer to this algorithm as Hedge and the pseudo-code is given in Algorithm 2. The approximation guarantee for this algorithm, in the correlated setting and with an even budget, follows immediately from Proposition 5, as it also satisfies the conditions.
Algorithm 2: Hedge, greedy algorithm where each step is as good as adding the best common seed
1: S1 ← S2 ← ∅
2: while |S1| + |S2| < k do
3:   c ← arg max_c Φ(S1 ∪ {c}, S2 ∪ {c})
4:   s1 ← arg max_s Φ(S1, S2 ∪ {s})
5:   s2 ← arg max_s Φ(S1 ∪ {s}, S2)
6:   add the best option among ⟨c, c⟩, ⟨∅, s1⟩, ⟨s2, ∅⟩, ⟨s2, s1⟩ to ⟨S1, S2⟩, while respecting the budget
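The difference from Algorithm 1 is only in the candidate moves: the single seeds range over all vertices, and the pair ⟨s2, s1⟩ is also considered. One selection step can be sketched as follows (our own naming; `phi` is the black-box objective):

```python
def hedge_step(S1, S2, V, k, phi):
    """One iteration of the Hedge selection: the chosen move is at least as
    good as the best common-seed pair, because that pair is itself a candidate."""
    room = k - len(S1) - len(S2)
    c  = max(V, key=lambda v: phi(S1 | {v}, S2 | {v}))
    s1 = max(V, key=lambda v: phi(S1, S2 | {v}))
    s2 = max(V, key=lambda v: phi(S1 | {v}, S2))
    options = [(phi(S1, S2 | {s1}), S1, S2 | {s1}),
               (phi(S1 | {s2}, S2), S1 | {s2}, S2)]
    if room >= 2:
        options.append((phi(S1 | {c}, S2 | {c}), S1 | {c}, S2 | {c}))
        options.append((phi(S1 | {s2}, S2 | {s1}), S1 | {s2}, S2 | {s1}))
    _, S1, S2 = max(options, key=lambda t: t[0])
    return S1, S2

phi = lambda A, B: len(A) + len(B)
S1, S2 = hedge_step(set(), set(), {"x", "y"}, 4, phi)
print(len(S1) + len(S2))  # 2 -- with room in the budget, a pair move is taken
```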
5 Experimental evaluation
In this section, we evaluate the effectiveness of our algorithms on real-world datasets. We focus
on (i) analyzing the quality of the seeds picked by our algorithms in comparison to other heuristic
approaches and baselines; (ii) analyzing the efficiency and the scalability of our algorithms; and
(iii) providing anecdotal examples of the obtained results. Although we set up our experiments in order to mimic social behavior, we note that fully realistic experiments would entail the ability to intervene in the network, select seeds, and observe the resulting cascades. This, however, is well beyond our capacity and the scope of the paper.
In all experiments we set k to range between 5 and 50 with a step of 5. We report averages over 1,000 random simulations of the cascade process.
Datasets: To evaluate the effectiveness of our algorithms, we run experiments on real-world data collected from Twitter. Let G = (V, E) be the Twitter follower graph. A directed edge (u, v) ∈ E indicates that user v follows u; note that the edge direction indicates the "information flow" from a user to their followers. We define a cascade GX = (X, EX) as a graph over the set of users X ⊆ V who have retweeted at least one hashtag related to a topic (e.g., US elections). An edge (u, v) ∈ EX ⊆ E indicates that v retweeted u.
We use datasets from six topics with opposing viewpoints, covering politics (US-elections,
Brexit, ObamaCare), policy (Abortion, Fracking), and lifestyle (iPhone, focusing on iPhone
vs. Samsung). All datasets are collected by filtering the Twitter streaming API (a 1% random sample of all tweets) for a set of keywords used in previous work [20]. For each dataset, we identify two sides (indicating the two viewpoints) on the retweet graph, which has been shown to capture best the two opposing sides of a controversy [12]. Details on the statistics of the datasets can be found in the supplementary material.
After building the graphs, we need to estimate the diffusion probabilities for the heterogeneous
and correlated models. Note that the estimation of the diffusion probabilities is orthogonal to our
contribution in this paper. For the sake of concreteness we have used the approach described below.
One could use a different, more advanced, method; our methods are still applicable.
Let q1(v) and q2(v) be a priori probabilities of a user v retweeting sides 1 and 2, respectively. These are measured from the data by looking at how often a user retweets content from users and keywords that are discriminative of each side. For example, for US-elections, the discriminative users and keywords for side Hillary would be @hillaryclinton and #imwithher, and for Trump, @realdonaldtrump and #makeamericagreatagain. The probability that user v retweets user u (cascade probability) is then defined as
pi(u, v) = α · qi(v) + (1 − α) · (R(u, v) + 1)/(R(v) + 2),  i = 1, 2,
where R(u, v) is the number of times v has retweeted u, and R(v) is the total number of retweets of user v. The cascade probabilities pi capture the fact that users retweet content if they see it from their friends (the term (R(u, v) + 1)/(R(v) + 2)) or based on their own biases (the term qi(v)). The additive terms in the numerator and denominator provide additive smoothing by Laplace's rule of succession.
We set the value of α to 0.8 for the heterogeneous setting. For α = 0 the edge probabilities become equal for the two campaigns, which is our assumption for the correlated setting.
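In code, the estimate is a two-term mixture with Laplace smoothing of the retweet fraction (a sketch; the variable names are ours):

```python
def edge_prob(alpha, q_i_v, R_uv, R_v):
    """p_i(u, v) = alpha * q_i(v) + (1 - alpha) * (R(u, v) + 1) / (R(v) + 2).
    The +1/+2 smoothing (rule of succession) keeps the fraction well defined
    even for users with no recorded retweets."""
    return alpha * q_i_v + (1 - alpha) * (R_uv + 1) / (R_v + 2)

# A user with prior 0.5 for side i who retweeted u in 3 of their 8 retweets:
print(edge_prob(0.8, 0.5, 3, 8))  # -> 0.48 (up to floating-point rounding)
# With alpha = 0 the prior drops out, so p1 = p2: the correlated setting.
print(edge_prob(0.0, 0.9, 3, 8) == edge_prob(0.0, 0.1, 3, 8))  # True
```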
[Figure 1: six line plots (iPhone, ObamaCare, US-elections; one row per setting) of the expected symmetric difference as a function of the budget k, comparing Cover, Hedge, Common, and Greedy.]
Figure 1: Expected symmetric difference n − Φ as a function of the budget k. Top row: heterogeneous model, bottom row: correlated model. Low values are better.
Baselines. We use five different baselines. The first baseline, BBLO, is an adaptation of the framework by Borodin et al. [5]. This framework requires an objective function as input, and here we use our objective function Φ. The framework works as follows: The two campaigns are given a budget k/2 on the number of seeds that they can select. At each round, we select a vertex v for S1, optimizing Φ(S1 ∪ {v}, S2), and a vertex w for S2, optimizing Φ(S1, S2 ∪ {w}). We should stress that the theoretical guarantees of [5] do not apply because our objective is not submodular.
The next two heuristics add a set of common seeds to both campaigns. We run a greedy algorithm for each campaign i = 1, 2 to select a set Si′ of k vertices that optimizes the expected size of ri(Si′ ∪ Ii). We consider two heuristics: Union selects S1 and S2 to be equal to the first k/2 distinct vertices in S1′ ∪ S2′, while Intersection selects S1 and S2 to be equal to the first k/2 vertices in S1′ ∩ S2′. Here the vertices are ordered based on their discovery time.
Finally, HighDegree selects the vertices with the largest number of followers and assigns them alternately to the two cascades; and Random assigns k/2 random seeds to each campaign.
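The two common-seed heuristics can be sketched as follows, assuming S1′ and S2′ are kept as lists in greedy discovery order (the interleaving used to order the union is our own guess; the paper only states that vertices are ordered by discovery time):

```python
def first_k_distinct(ordered, k):
    """First k distinct items of a sequence, preserving order."""
    out, seen = [], set()
    for v in ordered:
        if v not in seen:
            seen.add(v)
            out.append(v)
            if len(out) == k:
                break
    return out

def union_seeds(S1p, S2p, k):
    """Union: the first k/2 distinct vertices of S1' and S2', by discovery time."""
    interleaved = [v for pair in zip(S1p, S2p) for v in pair]
    return first_k_distinct(interleaved, k // 2)

def intersection_seeds(S1p, S2p, k):
    """Intersection: the first k/2 vertices appearing in both S1' and S2'."""
    return first_k_distinct([v for v in S1p if v in set(S2p)], k // 2)

print(union_seeds(["a", "b", "c"], ["b", "d", "e"], 4))         # ['a', 'b']
print(intersection_seeds(["a", "b", "c"], ["b", "c", "e"], 4))  # ['b', 'c']
```

The returned list is then used as the common seed set S1 = S2 for both campaigns.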
In addition to the baselines, we also consider a simple greedy algorithm Greedy. The difference between Cover and Greedy is that, in each iteration, Cover adds the seed that maximizes Ψ, while Greedy adds the seed that maximizes Φ. We can only show an approximation guarantee for Cover, but Greedy is a more intuitive approach, and we use it as a heuristic.
Comparison of the algorithms. We start by evaluating the quality of the sets of seeds computed by
our algorithms, i.e., the number of equally-informed vertices.
Heterogeneous setting. We consider first the case of heterogeneous networks. The results for the
selected datasets are shown in Figure 1. Full results are shown in the supplementary material. Instead
of plotting Φ, we plot the number of the remaining unbalanced vertices, n − Φ, as it makes the results
easier to distinguish; i.e., an optimal solution achieves the value 0.
The first observation is that the approximation algorithm Cover performs, in general, worse than
the other two heuristics. This is due to the fact that Cover does not optimize directly the objective
function. Hedge performs better than Greedy, in general, since it examines additional choices to
select. The only deviation from this picture is for the US-elections dataset, where Greedy outperforms Hedge by a small factor. This may be due to the fact that, while Hedge has more options, it allocates seeds in batches of two.
Correlated setting. Next we consider correlated networks. We experiment with the three approximation algorithms Cover, Common, and Hedge, and the heuristic Greedy. The results are shown in Figure 1. Cover again performs the worst, since it is the only method that introduces new unbalanced vertices without caring about their cardinality. Its variant, Greedy, performs much better in practice even though it does not provide an approximation guarantee. The algorithms Common, Greedy, and Hedge perform very similarly to each other, without a clear winner.
[Figure 2: two bar charts (heterogeneous and correlated settings) over the datasets Abortion, Brexit, Fracking, iPhone, ObamaCare, and US-elections, comparing Hedge with the baselines BBLO, Intersection, Union, HighDegree, and Random.]
Figure 2: Expected symm. diff. n − Φ of Hedge and the baselines. k = 20. Low values are better.
Comparison with baselines. Our next step is to compare against the baselines. For simplicity, we focus on k = 20; the overall conclusions hold for other budgets. The results for Hedge versus the five baselines are shown in Figure 2.
From the results we see that BBLO is the best competitor: its scores are the closest to Hedge, and
it receives slightly better scores in 3 out of 12 cases. The competitiveness is not surprising because
we specifically set the objective function in BBLO to be ?(S1 , S2 ). The Intersection and Union
also perform well but are always worse than Hedge. Random is unpredictable but always worse
than Hedge. In the case of heterogeneous networks, Hedge selects seeds that leave fewer unbalanced vertices, by a factor of two on average, compared to the seeds selected by the HighDegree method.
For correlated networks, our method outperforms the two baselines by an order of magnitude. The
actual values of this experiment can be found in the supplementary material.
Running time. We proceed to evaluate the efficiency and the scalability of our algorithms. We
observe that all algorithms have comparable running times and good scalability. More information
can be found in the supplementary material.
Use case with Fracking. We present a qualitative case-study analysis for the seeds selected by our
algorithm. We highlight the Fracking dataset, even though we applied similar analysis to the other
datasets as well (the results are given in the supplementary material of the paper). Recall that for
each dataset we identify two sides with opposing views, and a set of initial seeds for each side (I1
and I2). We consider the users in the initial seeds I1 (the side supporting fracking), and summarize the text of all their Twitter profile descriptions in a word cloud. The result contains words that are used to emphasize the benefits of fracking (energy, oil, gas, etc.). We then draw a similar word cloud for the users identified by the Hedge algorithm as seed nodes in the sets S1 and S2 (k = 50). The result contains a more balanced set of words, which includes many words used to underline the environmental dangers of fracking. We use word clouds as a qualitative case study to complement
our quantitative results and to provide more intuition about our problem statement, rather than an
alternative quantitative measure.
6 Conclusion
We presented the first study of the problem of balancing information exposure in social networks
using techniques from the area of information diffusion. Our approach has several novel aspects. In
particular, we formulate our problem by seeking to optimize a symmetric difference function, which
is neither monotone nor submodular, and thus, not amenable to existing approaches. Additionally,
while previous studies consider a setting with selfish agents and provide bounds on best-response
strategies (i.e., move of the last player), we consider a centralized setting and provide bounds for a
global objective function.
Our work provides several directions for future work. One interesting problem is to improve the
approximation guarantee for the problem we define. Second, we would like to extend the problem
definition for more than two campaigns and design approximation algorithms for that case. Finally,
we believe that it is worth studying the BALANCE problem under complex diffusion models that
capture more realistic social behavior in the presence of multiple campaigns. One such extension
is to consider propagation probabilities on the edges that depend on the past behavior of the nodes with respect to the two campaigns; e.g., one could consider Hawkes processes [28].
Acknowledgments. This work has been supported by the Academy of Finland projects "Nestor" (286211) and "Agra" (313927), and the EC H2020 RIA project "SoBigData" (654024).
References
[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In LinkKDD, pages 36-43, 2005.
[2] N. Alon, M. Feldman, A. D. Procaccia, and M. Tennenholtz. A note on competitive diffusion through social networks. IPL, 110(6):221-225, 2010.
[3] S. Bharathi, D. Kempe, and M. Salek. Competitive influence maximization in social networks. In WINE, 2007.
[4] A. Borodin, Y. Filmus, and J. Oren. Threshold models for competitive influence in social networks. In WINE, 2010.
[5] A. Borodin, M. Braverman, B. Lucier, and J. Oren. Strategyproof mechanisms for competitive influence in networks. In WWW, pages 141-150, 2013.
[6] C. Budak, D. Agrawal, and A. El Abbadi. Limiting the spread of misinformation in social networks. In WWW, pages 665-674, 2011.
[7] T. Carnes, C. Nagarajan, S. M. Wild, and A. Van Zuylen. Maximizing influence in a competitive social network: a follower's perspective. In EC, 2007.
[8] W. Chen, C. Wang, and Y. Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In KDD, pages 1029-1038, 2010.
[9] M. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, F. Menczer, and A. Flammini. Political polarization on Twitter. In ICWSM, 2011.
[10] P. Dubey, R. Garg, and B. De Meyer. Competing for customers in a social network: The quasi-linear case. In WINE, 2006.
[11] M. Farajtabar, X. Ye, S. Harati, L. Song, and H. Zha. Multistage campaigning in social networks. In NIPS, pages 4718-4726, 2016.
[12] K. Garimella, G. De Francisci Morales, A. Gionis, and M. Mathioudakis. Quantifying controversy in social media. In WSDM, pages 33-42, 2016.
[13] K. Garimella, G. De Francisci Morales, A. Gionis, and M. Mathioudakis. Reducing controversy by connecting opposing views. In WSDM, 2017.
[14] R. K. Garrett. Echo chambers online?: Politically motivated selective exposure among Internet news users. JCMC, 14(2):265-285, 2009.
[15] J. Gottfried and E. Shearer. News use across social media platforms 2016. Pew Research Center, 2016.
[16] S. Goyal, H. Heidari, and M. Kearns. Competitive contagion in networks. Games and Economic Behavior, 2014.
[17] R. Jie, J. Qiao, G. Xu, and Y. Meng. A study on the interaction between two rumors in homogeneous complex networks under symmetric conditions. Physica A, 454:129-142, 2016.
[18] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137-146, 2003.
[19] Q. V. Liao and W.-T. Fu. Expert voices in echo chambers: effects of source expertise indicators on exposure to diverse opinions. In CHI, pages 2745-2754, 2014.
[20] H. Lu, J. Caverlee, and W. Niu. BiasWatch: A lightweight system for discovering and tracking topic-sensitive opinion bias in social media. In CIKM, pages 213-222, 2015.
[21] W. Lu, W. Chen, and L. V. Lakshmanan. From competition to complementarity: comparative influence diffusion and maximization. PVLDB, 9(2):60-71, 2015.
[22] A. Morales, J. Borondo, J. Losada, and R. Benito. Measuring political polarization: Twitter shows the two sides of Venezuela. Chaos, 25(3), 2015.
[23] S. A. Myers and J. Leskovec. Clash of the contagions: Cooperation and competition in information diffusion. In ICDM, pages 539-548, 2012.
[24] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14(1):265-294, 1978.
[25] E. Pariser. The filter bubble: What the Internet is hiding from you. Penguin UK, 2011.
[26] V. Tzoumas, C. Amanatidis, and E. Markakis. A game-theoretic analysis of a competitive diffusion process over social networks. In WINE, 2012.
[27] I. Valera and M. Gomez-Rodriguez. Modeling adoption of competing products and conventions in social media. In ICDM, 2015.
[28] A. Zarezade, A. Khodadadi, M. Farajtabar, H. R. Rabiee, and H. Zha. Correlated cascades: Compete or cooperate. In AAAI, pages 238-244, 2017.
SafetyNets: Verifiable Execution of Deep Neural
Networks on an Untrusted Cloud
Zahra Ghodsi, Tianyu Gu, Siddharth Garg
New York University
{zg451, tg1553, sg175}@nyu.edu
Abstract
Inference using deep neural networks is often outsourced to the cloud since it is
a computationally demanding task. However, this raises a fundamental issue of
trust. How can a client be sure that the cloud has performed inference correctly?
A lazy cloud provider might use a simpler but less accurate model to reduce its
own computational load, or worse, maliciously modify the inference results sent to
the client. We propose SafetyNets, a framework that enables an untrusted server
(the cloud) to provide a client with a short mathematical proof of the correctness of
inference tasks that they perform on behalf of the client. Specifically, SafetyNets
develops and implements a specialized interactive proof (IP) protocol for verifiable
execution of a class of deep neural networks, i.e., those that can be represented
as arithmetic circuits. Our empirical results on three- and four-layer deep neural
networks demonstrate the run-time costs of SafetyNets for both the client and server
are low. SafetyNets detects any incorrect computations of the neural network by
the untrusted server with high probability, while achieving state-of-the-art accuracy
on the MNIST digit recognition (99.4%) and TIMIT speech recognition tasks
(75.22%).
1 Introduction
Recent advances in deep learning have shown that multi-layer neural networks can achieve state-of-the-art performance on a wide range of machine learning tasks. However, training and performing
inference (using a trained neural network for predictions) can be computationally expensive. For this
reason, several commercial vendors have begun offering "machine learning as a service" (MLaaS)
solutions that allow clients to outsource machine learning computations, both training and inference,
to the cloud.
While promising, the MLaaS model (and outsourced computing, in general) raises immediate security
concerns, specifically relating to the integrity (or correctness) of computations performed by the
cloud and the privacy of the client's data [16]. This paper focuses on the former, i.e., the question
of integrity. Specifically, how can a client perform inference using a deep neural network on an
untrusted cloud, while obtaining strong assurance that the cloud has performed inference correctly?
Indeed, there are compelling reasons for a client to be wary of a third-party cloud's computations. For one, the cloud has a financial incentive to be "lazy." A lazy cloud might use a simpler but less accurate
model, for instance, a single-layer instead of a multi-layer neural network, to reduce its computational
costs. Further the cloud could be compromised by malware that modifies the results sent back to
the client with malicious intent. For instance, the cloud might always mis-classify a certain digit in
a digit recognition task, or allow unauthorized access to certain users in a face recognition based
authentication system.
The security risks posed by cloud computing have spurred theoretical advances in the area of verifiable
computing (VC) [21]. The idea is to enable a client to provably (and cheaply) verify that an untrusted
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
server has performed computations correctly. To do so, the server provides to the client (in addition
to the result of computation) a mathematical proof of the correctness of the result. The client rejects,
with high probability, any incorrectly computed results (or proofs) provided by the server, while
always accepting correct results (and corresponding proofs) 1 . VC techniques aim for the following
desirable properties: the size of the proof should be small, the client's verification effort must be lower than performing the computation locally, and the server's effort in generating proofs should not
be too high.
The advantage of proof-based VC is that it provides unconditional, mathematical guarantees on the
integrity of computation performed by the server. Alternative solutions for verifiable execution require
the client to make trust assumptions that are hard for the client to independently verify. Trusted
platform modules [7], for instance, require the client to place trust on the hardware manufacturer, and
assume that the hardware is tamper-proof. Audits based on the server's execution time [15] require precise knowledge of the server's hardware configuration and assume, for instance, that the server is
not over-clocked.
The work in this paper leverages powerful VC techniques referred to as indigit=4 5
teractive proof (IP) systems [5, 9, 18,
Random
challenge 1
19].
An IP system consists of two enChallenge
Compute
response 1
Response
tities, a prover (P), i.e., the untrusted
Verify
server, and a verifier (V), i.e., the
challenge n
Random
client. The framework is illustrated in
Compute
Challenge
response n
Figure 1. The verifier sends the prover
Response
Reject
an input x, say a batch of test images,
Reject
and asks the prover to compute a funcFigure 1: High-level overview of the SafetyNets IP protocol. tion y = f (x). In our setting, f (.) is
In this example, an untrusted server intentionally changes a trained multi-layer neural network
the classification output from 4 to 5.
that is known to both the verifier and
prover, and y is the neural network?s classification output for each image in the batch. The prover
performs the computation and sends the verifier a purported result y 0 (which is not equal to y if the
prover cheats). The verifier and prover then engage in n rounds of interaction. In each round, the
verifier sends the prover a randomly picked challenge, and the prover provides a response based on
the IP protocol. The verifier accepts that y 0 is indeed equal to f (x) if it is satisfied with the prover?s
response in each round, and rejects otherwise.
Client
(verifier)
Input
Image
Execute
Neural
Network
Untrusted
Server
(prover)
...
A major criticism of IP systems (and, indeed, all existing VC techniques) when used for verifying
general-purpose computations is that the prover's overheads are large, often orders of magnitude
more than just computing f (x) [21]. Recently, however, Thaler [18] showed that certain types of
computations admit IP protocols with highly efficient verifiers and provers, which lays the foundations
for the specialized IP protocols for deep neural networks that we develop in this paper.
Paper Contributions. This paper introduces SafetyNets, a new (and to the best of our knowledge,
the first) approach for verifiable execution of deep neural networks on untrusted clouds. Specifically,
SafetyNets composes a new, specialized IP protocol for the neural network's activation layers with
Thaler?s IP protocol for matrix multiplication to achieve end-to-end verifiability, dramatically reducing
the bandwidth costs versus a naive solution that verifies the execution of each layer of the neural
network separately.
SafetyNets applies to a certain class of neural networks that can be represented as arithmetic circuits
that perform computations over finite fields (i.e., integers modulo a large prime p). Our implementation of SafetyNets addresses several practical challenges in this context, including the choice of the
prime p, its relationship to accuracy of the neural network, and to the verifier and prover run-times.
Empirical evaluations on the MNIST digit recognition and TIMIT speech recognition tasks illustrate
that SafetyNets enables practical, low-cost verifiable outsourcing of deep neural network execution
without compromising classification accuracy. Specifically, the client's execution time is 8×-80×
lower than executing the network locally, the server?s overhead in generating proofs is less than 5%,
and the client/server exchange less than 8 KBytes of data during the IP protocol. SafetyNets' security
¹ Note that SafetyNets is not intended to and cannot catch any inherent mis-classifications due to the model itself, only those that result from incorrect computations of the model by the server.
guarantees ensure that a client can detect any incorrect computations performed by a malicious
server with probability vanishingly close to 1. At the same time, SafetyNets achieves state-of-the-art
classification accuracies of 99.4% and 75.22% on the MNIST and TIMIT datasets, respectively.
2 Background
In this section, we begin by reviewing necessary background on IP systems, and then describe the
restricted class of neural networks (those that can be represented as arithmetic circuits) that SafetyNets
handles.
2.1 Interactive Proof Systems
Existing IP systems proposed in literature [5, 9, 18, 19] use, at their heart, a protocol referred to as
the sum-check protocol [13] that we describe here in some detail, and then discuss its applicability in
verifying general-purpose computations expressed as arithmetic circuits.
Sum-check Protocol Consider a d-degree n-variate polynomial g(x_1, x_2, ..., x_n), where each variable x_i ∈ F_p (F_p is the set of all natural numbers between zero and p − 1, for a given prime p) and g : F_p^n → F_p. The prover P seeks to prove the following claim:

y = Σ_{x_1 ∈ {0,1}} Σ_{x_2 ∈ {0,1}} ... Σ_{x_n ∈ {0,1}} g(x_1, x_2, ..., x_n)    (1)
that is, the sum of g evaluated at 2^n points is y. P and V now engage in a sum-check protocol to verify this claim. In the first round of the protocol, P sends the following unidimensional polynomial

h(x_1) = Σ_{x_2 ∈ {0,1}} ... Σ_{x_n ∈ {0,1}} g(x_1, x_2, ..., x_n)    (2)
to V in the form of its coefficients. V checks if h(0) + h(1) = y. If yes, it proceeds, otherwise it rejects P's claim. Next, V picks a random value q_1 ∈ F_p and evaluates h(q_1) which, based on Equation 2, yields a new claim:

h(q_1) = Σ_{x_2 ∈ {0,1}} ... Σ_{x_n ∈ {0,1}} g(q_1, x_2, ..., x_n).    (3)
V now recursively calls the sum-check protocol to verify this new claim. By the final round of the sum-check protocol, P returns the value g(q_1, q_2, ..., q_n) and V checks if this value is correct by evaluating the polynomial by itself. If so, V accepts the original claim in Equation 1, otherwise it rejects the claim.
Lemma 2.1. [2] V rejects an incorrect claim by P with probability greater than (1 − ε), where ε = nd/p is referred to as the soundness error.
IPs for Verifying Arithmetic Circuits In their seminal work, Goldwasser et al. [9] demonstrated
how sum-check can be used to verify the execution of arithmetic circuits using an IP protocol now
referred to as GKR. An arithmetic circuit is a directed acyclic graph of computation over elements of
a finite field Fp in which each node can perform either addition or multiplication operations (modulo
p). While we refer the reader to [9] for further details of GKR, one important aspect of the protocol
bears mention.
GKR organizes nodes of an arithmetic circuit into layers; starting with the circuit inputs, the outputs
of one layer feed the inputs of the next. The GKR proof protocol operates backwards from the circuit
outputs to its inputs. Specifically, GKR uses sum-check to reduce the prover's assertion about the
circuit output into an assertion about the inputs of the output layer. This assertion is then reduced to
an assertion about the inputs of the penultimate layer, and so on. The protocol continues iteratively till
the verifier is left with an assertion about the circuit inputs, which it checks on its own. The layered
nature of GKR's prover aligns almost perfectly with the structure of a multi-layer neural network and
motivates the use of an IP system based on GKR for SafetyNets.
2.2 Neural Networks as Arithmetic Circuits
As mentioned before, SafetyNets applies to neural networks that can be expressed as arithmetic
circuits. This requirement places the following restrictions on the neural network layers.
Quadratic Activations The activation functions in SafetyNets must be polynomials with integer
coefficients (or, more precisely, coefficients in the field Fp ). The simplest of these is the element-wise
quadratic activation function whose output is simply the square of its input. Other commonly used
activation functions such as ReLU, sigmoid or softmax activations are precluded, except in the final
output layer. Prior work has shown that neural networks with quadratic activations have the same
representation power as networks with threshold activations and can be efficiently trained [6, 12].
Sum Pooling Pooling layers are commonly used to reduce the network size, to prevent overfitting
and provide translation invariance. SafetyNets uses sum pooling, wherein the output of the pooling
layer is the sum of activations in each local region. However, techniques such as max pooling [10]
and stochastic pooling [22] are not supported since max and divisions operations are not easily
represented as arithmetic circuits.
Finite Field Computations SafetyNets supports computations over elements of the field F_p, that is, integers in the range {−(p−1)/2, ..., 0, ..., (p−1)/2}. The inputs, weights and all intermediate values computed in the network must lie in this range. Note that due to the use of quadratic activations and sum pooling, the values in the network can become quite large. In practice, we will pick large primes to support these large values. We note that this restriction applies to the inference phase only; the network can be trained with floating point inputs and weights. The inputs and weights are then re-scaled and quantized, as explained in Section 3.3, to finite field elements.
We note that the restrictions above are shared by a recently proposed technique, CryptoNets [8], that
seeks to perform neural network based inference on encrypted inputs so as to guarantee data privacy.
However, Cryptonets does not guarantee integrity and compared to SafetyNets, incurs high costs
for both the client and server (see Section 4.3 for a comparison). Conversely, SafetyNets is targeted
towards applications where integrity is critical, but does not provide privacy.
2.3 Mathematical Model
An L layer neural network with the constraints discussed above can be modeled, without loss of generality, as follows. The input to the network is x ∈ F_p^{n_0 × b}, where n_0 is the dimension of each input and b is the batch size. Layer i ∈ [1, L] has n_i output neurons², and is specified using a weight matrix w_{i−1} ∈ F_p^{n_i × n_{i−1}}, and biases b_{i−1} ∈ F_p^{n_i}.

The output of Layer i ∈ [1, L], y_i ∈ F_p^{n_i × b}, is:

y_i = σ_quad(w_{i−1} · y_{i−1} + b_{i−1} 1^T)  ∀ i ∈ [1, L − 1];   y_L = σ_out(w_{L−1} · y_{L−1} + b_{L−1} 1^T),    (4)

where σ_quad(.) is the quadratic activation function, σ_out(.) is the activation function of the output layer, and 1 ∈ F_p^b is the vector of all ones. We will typically use softmax activations in the output layer. We will also find it convenient to introduce the variable z_i ∈ F_p^{n_{i+1} × b} defined as

z_i = w_i · y_i + b_i 1^T  ∀ i ∈ [0, L − 1].    (5)

The model captures both fully connected and convolutional layers; in the latter case the weight matrix is sparse. Further, without loss of generality, all successive linear transformations in a layer, for instance sum pooling followed by convolutions, are represented using a single weight matrix.

With this model in place, the goal of SafetyNets is to enable the client to verify that y_L was correctly computed by the server. We note that as in prior work [19], SafetyNets amortizes the prover and verifier costs over batches of inputs. If the server incorrectly computes the output corresponding to any input in a batch, the verifier rejects the entire batch of computations.
² The 0th layer is defined to be the input layer and thus y_0 = x.
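For concreteness, Equations 4 and 5 can be evaluated with exact integer arithmetic modulo p. The sketch below is our illustration, not the authors' code; it uses object-dtype NumPy arrays so that values near p ≈ 2^61 do not overflow machine integers:

```python
import numpy as np

P = 2**61 - 1  # Mersenne prime used by SafetyNets

def forward(x, weights, biases):
    """Evaluate the L-layer network of Equations 4-5 over F_p.

    x: (n0, b) array of field elements; weights[i]: (n_{i+1}, n_i);
    biases[i]: (n_{i+1},). dtype=object keeps exact big-int arithmetic.
    """
    y = x % P
    L = len(weights)
    for i in range(L):
        z = (weights[i] @ y + biases[i][:, None]) % P   # Equation 5
        if i < L - 1:
            y = (z * z) % P   # element-wise quadratic activation (Equation 4)
        else:
            y = z             # final layer: the prover later sends z_{L-1};
                              # the client applies the output activation itself
    return y
```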
3 SafetyNets
We now describe the design and implementation of our end-to-end IP protocol for verifying execution of deep networks. The SafetyNets protocol is a specialized form of the IP protocols developed by Thaler [18] for verifying "regular" arithmetic circuits, that themselves specialize and refine prior work [5]. The starting point for the protocol is a polynomial representation of the network's inputs and parameters, referred to as a multilinear extension.
Multilinear Extensions Consider a matrix w ∈ F_p^{n×n}. Each row and column of w can be referenced using m = log2(n) bits, and consequently one can represent w as a function W : {0,1}^m × {0,1}^m → F_p. That is, given Boolean vectors t, u ∈ {0,1}^m, the function W(t, u) returns the element of w at the row and column specified by Boolean vectors t and u, respectively.

A multilinear extension of W is a polynomial function W̃ : F_p^m × F_p^m → F_p that has the following two properties: (1) W̃(t, u) = W(t, u) for all points on the unit hyper-cube, that is, for all t, u ∈ {0,1}^m; and (2) W̃ has degree 1 in each of its variables. In the remainder of this discussion, we will use X̃, Ỹ_i, Z̃_i and W̃_i to refer to multilinear extensions of x, y_i, z_i, and w_i, respectively, for i ∈ [1, L]. We will also assume, for clarity of exposition, that the biases b_i are zero for all layers. The supplementary draft describes how biases are incorporated. Consistent with the IP literature, the description of our protocol refers to the client as the verifier and the server as the prover.
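A multilinear extension can be evaluated directly from its Lagrange form, W̃(t, u) = Σ_{i,j} w[i][j] · χ_i(t) · χ_j(u), where χ_i is the multilinear indicator polynomial of the Boolean point i. The brute-force sketch below is our illustration (real IP implementations evaluate this in linear time); it makes the two defining properties concrete:

```python
P = 2**61 - 1

def chi(bits, t):
    """Multilinear indicator of the Boolean point `bits`, evaluated at
    field point t: product over k of (t_k if bit_k else 1 - t_k)."""
    out = 1
    for b, tk in zip(bits, t):
        out = out * ((tk if b else (1 - tk)) % P) % P
    return out

def mle(mat, t, u):
    """Evaluate the multilinear extension of a 2^m x 2^m matrix at
    (t, u) in F_p^m x F_p^m."""
    m = len(t)
    total = 0
    for i, row in enumerate(mat):
        bi = [(i >> k) & 1 for k in range(m)]
        ci = chi(bi, t)
        for j, val in enumerate(row):
            bj = [(j >> k) & 1 for k in range(m)]
            total = (total + val * ci * chi(bj, u)) % P
    return total
```

On Boolean arguments, mle reproduces the matrix entry; at non-Boolean field points it gives the unique extension of degree 1 in each variable.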
Protocol Overview The verifier seeks to check the result y_L provided by the prover corresponding to input x. Note that y_L is the output of the final activation layer which, as discussed in Section 2.2, is the only layer that does not use quadratic activations, and is hence not amenable to an IP. Instead, in SafetyNets, the prover computes and sends z_{L−1} (the input of the final activation layer) as a result to the verifier. z_{L−1} has the same dimensions as y_L and therefore this refinement has no impact on the server-to-client bandwidth. Furthermore, the verifier can easily compute y_L = σ_out(z_{L−1}) locally.

Now, the verifier needs to check whether the prover computed z_{L−1} correctly. As noted by Vu et al. [19], this check can be replaced by a check on whether the multilinear extension of z_{L−1} is correctly computed at a randomly picked point in the field, with minimal impact on the soundness error. That is, the verifier picks two vectors, q_{L−1} ∈ F_p^{log(n_L)} and r_{L−1} ∈ F_p^{log(b)} at random, evaluates Z̃_{L−1}(q_{L−1}, r_{L−1}), and checks whether it was correctly computed using a specialized sum-check protocol for matrix multiplication due to Thaler [18] (described in Section 3.1).

Since z_{L−1} depends on w_{L−1} and y_{L−1}, sum-check yields assertions on the values of W̃_{L−1}(q_{L−1}, s_{L−1}) and Ỹ_{L−1}(s_{L−1}, r_{L−1}), where s_{L−1} ∈ F_p^{log(n_{L−1})} is another random vector picked by the verifier during sum-check.

W̃_{L−1}(q_{L−1}, s_{L−1}) is an assertion about the weight of the final layer. This is checked by the verifier locally since the weights are known to both the prover and verifier. Finally, the verifier uses our specialized sum-check protocol for activation layers (described in Section 3.2) to reduce the assertion on Ỹ_{L−1}(s_{L−1}, r_{L−1}) to an assertion on Z̃_{L−2}(q_{L−2}, s_{L−2}). The protocol repeats till it reaches the input layer and produces an assertion on X̃(s_0, r_0), the multilinear extension of the input x. The verifier checks this locally. If at any point in the protocol, the verifier's checks fail, it rejects the prover's computation. Next, we describe the sum-check protocols for matrix multiplication and activation that SafetyNets uses.
3.1 Sum-check for Matrix Multiplication
Since z_i = w_i · y_i (recall we assumed zero biases for clarity), we can check an assertion about the multilinear extension of z_i evaluated at randomly picked points q_i and r_i by expressing Z̃_i(q_i, r_i) as [18]:

Z̃_i(q_i, r_i) = Σ_{j ∈ {0,1}^{log(n_i)}} W̃_i(q_i, j) · Ỹ_i(j, r_i)    (6)
Note that Equation 6 has the same form as the sum-check problem in Equation 1. Consequently the sum-check protocol described in Section 2.1 can be used to verify this assertion. At the end of the sum-check rounds, the verifier will have assertions on W̃_i, which it checks locally, and Ỹ_i, which is checked using the sum-check protocol for quadratic activations described in Section 3.2.

The prover run-time for running the sum-check protocol in layer i is O(n_i(n_{i−1} + b)), the verifier's run-time is O(n_i n_{i−1}) and the prover/verifier exchange 4 log(n_i) field elements.
3.2 Sum-check for Quadratic Activation
In this step, we check an assertion about the output of quadratic activation layer i, Ỹ_i(s_i, r_i), by writing it in terms of the input of the activation layer as follows:

Ỹ_i(s_i, r_i) = Σ_{j ∈ {0,1}^{log(n_i)}, k ∈ {0,1}^{log(b)}} Ĩ(s_i, j) Ĩ(r_i, k) Z̃_{i−1}²(j, k),    (7)

where Ĩ(., .) is the multilinear extension of the identity matrix. Equation 7 can also be verified using the sum-check protocol, and yields an assertion about Z̃_{i−1}, i.e., the inputs to the activation layer. This assertion is in turn checked using the protocol described in Section 3.1.
The prover run-time for running the sum-check protocol in layer i is O(b n_i), the verifier's run-time is O(log(b n_i)) and the prover/verifier exchange 5 log(b n_i) field elements. This completes the theoretical description of the SafetyNets specialized IP protocol.

Lemma 3.1. The SafetyNets verifier rejects incorrect computations with probability greater than (1 − ε), where ε = (3b Σ_{i=0}^{L} n_i) / p is the soundness error.

In practice, with p = 2^61 − 1 the soundness error ε < 1/2^30 for practical network parameters and batch sizes.

3.3 Implementation
The fact that SafetyNets operates only on elements in a finite field F_p during inference imposes a practical challenge: how do we convert floating point inputs and weights from training into field elements, and how do we select the size of the field p?

Let w_i' ∈ R^{n_{i−1} × n_i} and b_i' ∈ R^{n_i} be the floating point parameters obtained from training for each layer i ∈ [1, L]. We convert the weights to integers by multiplying with a constant β > 1 and rounding, i.e., w_i = ⌊β w_i'⌉. We do the same for inputs with a scaling factor α, i.e., x = ⌊α x'⌉. Then, to ensure that all values in the network scale isotropically, we must set b_i = ⌊α^{2^{i−1}} β^{(2^{i−1}+1)} b_i'⌉.

While larger α and β values imply lower quantization errors, they also result in large values in the network, especially in the layers closer to the output. Similar empirical observations were made by the CryptoNets work [8]. To ensure accuracy the values in the network must lie in the range [−(p−1)/2, (p−1)/2]; this influences the choice of the prime p. On the other hand, we note that large primes increase the verifier and prover run-time because of the higher cost of performing modular additions and multiplications.

As in prior works [5, 18, 19], we restrict our choice of p to Mersenne primes since they afford efficient modular arithmetic implementations, and specifically to the primes p = 2^61 − 1 and p = 2^127 − 1. For a given p, we explore different values of α and β and use the validation dataset to pick the ones that maximize accuracy while ensuring that the values in the network lie within [−(p−1)/2, (p−1)/2].
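The weight and input conversion is a scale-round-reduce step. A minimal sketch (our illustration, not the authors' code; α and β here are the values picked for MNIST-Back-Rand in Section 4.2, and the depth-dependent bias scaling described above is omitted):

```python
P = 2**61 - 1
ALPHA, BETA = 16, 16   # scaling factors chosen on validation data

def to_field(v):
    """Map a signed integer into F_p; negatives land in the upper half,
    representing the symmetric range [-(p-1)/2, (p-1)/2]."""
    return v % P

def quantize_weights(w_float, beta=BETA):
    """w_i = round(beta * w_i'), mapped into F_p."""
    return [[to_field(round(beta * v)) for v in row] for row in w_float]

def quantize_inputs(x_float, alpha=ALPHA):
    """x = round(alpha * x'), mapped into F_p."""
    return [to_field(round(alpha * v)) for v in x_float]
```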
4 Empirical Evaluation
In this section, we present empirical evidence to support our claim that SafetyNets enables low-cost
verifiable execution of deep neural networks on untrusted clouds without compromising classification
accuracy.
6
10
CNN-2-ReLU Train
CNN-2-ReLU Test
CNN-2-Quad Train
CNN-2-Quad Test
1.5
1
80
CNN-2-ReLU Train
CNN-2-ReLU Test
CNN-2-Quad Train
CNN-2-Quad Test
8
Error (%)
Error (%)
2
6
4
FcNN-3-ReLU Train
FcNN-3-ReLU Test
FcNN-3-Quad Train
FcNN-3-Quad Test
70
60
Error (%)
2.5
50
40
30
0.5
0
2
200
400
600 800
Time (s)
(a) MNIST
1000 1200
0
20
0
200
400
600 800
Time (s)
1000 1200
(b) MNIST-Back-Rand
10
10000
20000
Time (s)
30000
40000
(c) TIMIT
Figure 2: Evolution of training and test error for the MNIST, MNIST-Back-Rand and TIMIT tasks.
4.1 Setup
Datasets We evaluated SafetyNets on three classifications tasks. (1) Handwritten digit recognition
on the MNIST dataset, using 50,000 training, 10,000 validation and 10,000 test images. (2) A
more challenging version of digit recognition, MNIST-Back-Rand, an artificial dataset generated
by inserting a random background into MNIST image [1]. The dataset has 10,000 training, 2,000
validation and 50,000 test images. ZCA whitening is applied to the raw dataset before training and
testing [4]. (3) Speech recognition on the TIMIT dataset, split into a training set with 462 speakers,
a validation set with 144 speakers and a testing set with 24 speakers. The raw audio samples are
pre-processed as described by [3]. Each example includes its preceding and succeeding 7 frames,
resulting in a 1845-dimensional input in total. During testing, all labels are mapped to 39 classes [11]
for evaluation.
Neural Networks For the two MNIST tasks, we used a convolutional neural network same as [23] with 2 convolutional layers with 5 × 5 filters, a stride of 1 and a mapcount of 16 and 32 for the first and second layer respectively. Each convolutional layer is followed by quadratic activations and 2 × 2 sum pooling with a stride of 2. The fully connected layer uses softmax activation. We refer to this network as CNN-2-Quad. For TIMIT, we use a four-layer network described by [3] with 3 hidden, fully connected layers with 2000 neurons and quadratic activations. The output layer is fully connected with 183 output neurons and softmax activation. We refer to this network as FcNN-3-Quad. Since quadratic activations are not commonly used, we compare the performance of CNN-2-Quad and FcNN-3-Quad with baseline versions in which the quadratic activations are replaced by ReLUs. The baseline networks are CNN-2-ReLU and FcNN-3-ReLU.
The hyper-parameters for training are selected based on the validation datasets. The Adam Optimizer
is used for CNNs with learning rate 0.001, exponential decay and dropout probability 0.75. The
AdaGrad optimizer is used for FcNNs with a learning rate of 0.01 and dropout probability 0.5. We
found that norm gradient clipping was required for training the CNN-2-Quad and FcNN-3-Quad
networks, since the gradient values for quadratic activations can become large.
Our implementation of SafetyNets uses Thaler's code for the IP protocol for matrix multiplication
[18] and our own implementation of the IP for quadratic activations. We use an Intel Core i7-4600U
CPU running at 2.10 GHz for benchmarking.
4.2 Classification Accuracy of SafetyNets
SafetyNets places certain restrictions on the activation function (quadratic) and requires weights
and inputs to be integers (in field Fp ). We begin by analyzing how (and if) these restrictions impact
classification accuracy/error. Figure 2 compares training and test error of CNN-2-Quad/FcNN-3-Quad
versus CNN-2-ReLU/FcNN-3-ReLU. For all three tasks, the networks with quadratic activations are
competitive with networks that use ReLU activations. Further, we observe that the networks with
quadratic activations appear to converge faster during training, possibly because their gradients are
larger despite gradient clipping.
Next, we used the scaling and rounding strategy proposed in Section 3.3 to convert weights and inputs to integers. Table 1 shows the impact of scaling factors α and β on the classification error and maximum values observed in the network during inference for MNIST-Back-Rand. The validation
Table 1: Validation error (Err) and maximum value observed in the network (Max) for MNIST-Rand-Back and different values of scaling parameters, α and β. Values of α and β for which the maximum value exceeds that allowed by prime p = 2^61 − 1 (i.e., 1.35 × 10^18) are infeasible.

| β  | α = 4 Err | α = 4 Max | α = 8 Err | α = 8 Max | α = 16 Err | α = 16 Max | α = 32 Err | α = 32 Max | α = 64 Err | α = 64 Max |
|----|-----------|-----------|-----------|-----------|------------|------------|------------|------------|------------|------------|
| 4  | 0.188 | 4.0×10^8  | 0.073 | 4.0×10^10 | 0.042 | 5.5×10^12 | 0.039 | 6.6×10^14 | 0.040 | 8.8×10^16 |
| 8  | 0.194 | 6.1×10^9  | 0.072 | 6.9×10^11 | 0.039 | 8.3×10^13 | 0.038 | 1.0×10^16 | 0.037 | 1.3×10^18 |
| 16 | 0.188 | 9.4×10^10 | 0.072 | 1.1×10^13 | 0.036 | 1.3×10^15 | 0.037 | 1.6×10^17 | 0.035 | 2.1×10^19 |
| 32 | 0.186 | 1.5×10^12 | 0.073 | 1.7×10^14 | 0.038 | 2.1×10^16 | 0.037 | 2.6×10^18 | 0.036 | 3.5×10^20 |
| 64 | 0.185 | 2.5×10^13 | 0.073 | 2.8×10^15 | 0.038 | 3.4×10^17 | 0.037 | 4.2×10^19 | 0.036 | 5.6×10^21 |
error drops as α and β are increased. On the other hand, for p = 2^61 − 1, the largest value allowed is 1.35 × 10^18; this rules out α and β greater than 64, as shown in the table. For MNIST-Back-Rand, we pick α = β = 16 based on validation data, and obtain a test error of 4.67%. Following a similar methodology, we obtain a test error of 0.63% for MNIST (p = 2^61 − 1) and 25.7% for TIMIT (p = 2^127 − 1). We note that SafetyNets does not support techniques such as Maxout [10] that have demonstrated lower error on MNIST (0.45%). Ba et al. [3] report an error of 18.5% for TIMIT using an ensemble of nine deep neural networks, which SafetyNets might be able to support by verifying each network individually and performing ensemble averaging at the client-side.
4.3 Verifier and Prover Run-times
The relevant performance metrics for SafetyNets are (1) the client's (or verifier's) run-time, (2) the server's run-time, which includes the baseline time to execute the neural network and the overhead of generating proofs, and (3) the bandwidth required by the IP protocol. Ideally, these quantities should be small, and importantly, the client's run-time should be smaller than the case in which it executes the network by itself. Figure 3 plots run-time data over input batch sizes ranging from 256 to 2048 for FcNN-Quad-3.

Figure 3: Run-time of verifier, prover and baseline execution time for the arithmetic circuit representation of FcNN-Quad-3 versus input batch size (2^8 to 2^12).

For FcNN-Quad-3, the client's time for verifying proofs is 8× to 82× faster than the baseline in which it executes FcNN-Quad-3 itself, and decreases with batch size. The increase in the server's execution time due to the overhead of generating proofs is only 5% over the baseline unverified execution of FcNN-Quad-3. The prover and
verifier exchange less than 8 KBytes of data during the IP protocol for a batch size of 2048, which is negligible (less than 2%) compared to the bandwidth required to communicate inputs and outputs back and forth. In all settings, the soundness error ε, i.e., the chance that the verifier fails to detect incorrect computations by the server, is less than 1/2^30, a negligible value. We note SafetyNets has significantly lower bandwidth costs compared to an approach that separately verifies the execution of each layer using only the IP protocol for matrix multiplication.
A closely related technique, CryptoNets [8], uses homomorphic encryption to provide privacy, but not
integrity, for neural networks executing in the cloud. Since SafetyNets and CryptoNets target different
security goals, a direct comparison is not entirely meaningful. However, from the data presented in the CryptoNets paper, we note that the client's run-time for MNIST using a CNN similar to ours and
an input batch size b = 4096 is about 600 seconds, primarily because of the high cost of encryptions.
For the same batch size, the client-side run-time of SafetyNets is less than 10 seconds. Recent work
has also looked at how neural networks can be trained in the cloud without compromising the user's
training data [14], but the proposed techniques do not guarantee integrity. We expect that SafetyNets
can be extended to address the verifiable neural network training problem as well.
5 Conclusion
In this paper, we have presented SafetyNets, a new framework that allows a client to provably verify
the correctness of deep neural network based inference running on an untrusted cloud. Building
upon the rich literature on interactive proof systems for verifying general-purpose and specialized
computations, we designed and implemented a specialized IP protocol tailored for a certain class
of deep neural networks, i.e., those that can be represented as arithmetic circuits. We showed that
placing these restrictions did not impact the accuracy of the networks on real-world classification
tasks like digit and speech recognition, while enabling a client to verifiably outsource inference
to the cloud at low-cost. For our future work, we will apply SafetyNets to deeper networks and
extend it to address both integrity and privacy. There are VC techniques [17] that guarantee both, but
typically come at higher costs. Further, building on prior work on the use of IPs to build verifiable
hardware [20], we intend to deploy the SafetyNets protocol in the design of a verifiable hardware
accelerator for neural network inference.
References
[1] Variations on the MNIST digits. http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/MnistVariations.
[2] S. Arora and B. Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
[3] J. Ba and R. Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pages 2654–2662, 2014.
[4] A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pages 215–223, 2011.
[5] G. Cormode, J. Thaler, and K. Yi. Verifying computations with streaming interactive proofs. Proceedings of the VLDB Endowment, pages 25–36, 2011.
[6] A. Gautier, Q. N. Nguyen, and M. Hein. Globally optimal training of generalized polynomial neural networks with nonlinear spectral methods. In Advances in Neural Information Processing Systems, pages 1687–1695, 2016.
[7] R. Gennaro, C. Gentry, and B. Parno. Non-interactive verifiable computing: Outsourcing computation to untrusted workers. Annual Cryptology Conference, pages 465–482, 2010.
[8] R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning, pages 201–210, 2016.
[9] S. Goldwasser, Y. T. Kalai, and G. N. Rothblum. Delegating computation: interactive proofs for muggles. Symposium on Theory of Computing, pages 113–122, 2008.
[10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
[11] K. Lee and H. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, pages 1641–1648, 1989.
[12] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014.
[13] C. Lund, L. Fortnow, H. Karloff, and N. Nisan. Algebraic methods for interactive proof systems. Journal of the ACM, pages 859–868, 1992.
[14] P. Mohassel and Y. Zhang. SecureML: A system for scalable privacy-preserving machine learning. IACR Cryptology ePrint Archive, 2017.
[15] F. Monrose, P. Wyckoff, and A. D. Rubin. Distributed execution with remote audit. In Network and Distributed System Security Symposium, pages 3–5, 1999.
[16] N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
[17] B. Parno, J. Howell, C. Gentry, and M. Raykova. Pinocchio: Nearly practical verifiable computation. In Symposium on Security and Privacy, pages 238–252, 2013.
[18] J. Thaler. Time-optimal interactive proofs for circuit evaluation. In International Cryptology Conference, pages 71–89, 2013.
[19] V. Vu, S. Setty, A. J. Blumberg, and M. Walfish. A hybrid architecture for interactive verifiable computation. In Symposium on Security and Privacy, pages 223–237, 2013.
[20] R. S. Wahby, M. Howald, S. Garg, A. Shelat, and M. Walfish. Verifiable ASICs. In Symposium on Security and Privacy, pages 759–778, 2016.
[21] M. Walfish and A. J. Blumberg. Verifying computations without reexecuting them. Communications of the ACM, pages 74–84, 2015.
[22] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. arXiv preprint arXiv:1301.3557, 2013.
[23] Y. Zhang, P. Liang, and M. J. Wainwright. Convexified convolutional neural networks. arXiv preprint arXiv:1609.01000, 2016.
Proof of Lemma 3.1

Lemma 3.1. The SafetyNets verifier rejects incorrect computations with probability greater than $(1 - \epsilon)$, where $\epsilon = \frac{3b \sum_{i=0}^{L} n_i}{p}$ is the soundness error.

Proof. Verifying a multi-linear extension of the output sampled at a random point, instead of each value, adds a soundness error of $\epsilon = \frac{b n_L}{p}$. Each instance of the sum-check protocol adds to the soundness error [19]. The IP protocol for matrix multiplication adds a soundness error of $\epsilon = \frac{2 n_{i-1}}{p}$ in Layer $i$ [18]. Finally, the IP protocol for quadratic activations adds a soundness error of $\epsilon = \frac{3 b n_i}{p}$ in Layer $i$ [18]. Summing together we get a total soundness error of $\frac{2\sum_{i=0}^{L-1} n_i + 3\sum_{i=1}^{L-1} b n_i + b n_L}{p}$. The final result is an upper bound on this value.
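To see how small the accumulated error is in practice, the terms in the proof can be summed directly. The sketch below is illustrative: the layer widths, batch size, and choice of prime are assumptions, not values taken from the paper.

```python
from fractions import Fraction

def soundness_error(layer_sizes, b, p):
    """Total soundness error from the proof of Lemma 3.1.

    layer_sizes: [n_0, ..., n_L] neurons per layer; b: batch size; p: field prime.
    Terms: 2*n_{i-1}/p per matrix-multiplication IP (layers 1..L),
           3*b*n_i/p per quadratic-activation IP (layers 1..L-1),
           b*n_L/p for checking the multilinear extension of the output.
    """
    L = len(layer_sizes) - 1
    return (Fraction(2 * sum(layer_sizes[:L]), p)
            + Fraction(3 * b * sum(layer_sizes[1:L]), p)
            + Fraction(b * layer_sizes[L], p))

def lemma_bound(layer_sizes, b, p):
    # Closed-form upper bound stated in Lemma 3.1: (3*b*sum_{i=0}^{L} n_i) / p.
    return Fraction(3 * b * sum(layer_sizes), p)

p = 2**61 - 1               # a Mersenne prime; illustrative field choice
sizes = [784, 128, 32, 10]  # hypothetical layer widths n_0..n_L
b = 2048                    # batch size
eps = soundness_error(sizes, b, p)

assert 0 < eps <= lemma_bound(sizes, b, p)  # the lemma's bound dominates
print(float(eps))           # vanishingly small
```

Using exact rational arithmetic via `Fraction` avoids any floating-point round-off when comparing against the lemma's bound.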
Handling Bias Variables

We assumed that the bias variables were zero, allowing us to write $z_i = w_i.y_i$, while it should be $z_i = w_i.y_i + b_i 1^T$. Let $z'_i = w_i.y_i$. We seek to convert an assertion on $\tilde{Z}_i(q_i, r_i)$ to an assertion on $\tilde{Z}'_i$. We can do so by noting that:

$$\tilde{Z}_i(q_i, r_i) = \sum_{j \in \{0,1\}^{\log(n_i)}} \tilde{I}(j, q_i)\left(\tilde{Z}'_i(j, r_i) + \tilde{B}_i(j)\right) \qquad (8)$$

which can be reduced to sum-check and thus yields an assertion on $\tilde{B}_i$, which the verifier checks locally, and one on $\tilde{Z}'_i$, which is passed to the IP protocol for matrix multiplication.
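Equation (8) can be sanity-checked numerically on a toy instance: the multilinear extension of Z = Z' + b·1^T, evaluated at a random point, equals the row-wise combination of the extensions of Z' and the bias vector. The sketch below, with an illustrative field size and 4×4 matrices, assumes nothing beyond the identity itself.

```python
import itertools
import random

random.seed(0)
P = 2**31 - 1  # field prime; illustrative choice

def chi(bits, x):
    # Multilinear Lagrange basis (the I~ term in Eq. (8)): equals 1 iff x == bits
    # on the Boolean cube, extended multilinearly to F_P.
    out = 1
    for b, xi in zip(bits, x):
        out = out * ((b * xi + (1 - b) * (1 - xi)) % P) % P
    return out

def mle(values, m, x):
    # Multilinear extension of values: {0,1}^m -> F_P, evaluated at x in F_P^m.
    return sum(values[w] * chi(w, x) for w in itertools.product((0, 1), repeat=m)) % P

m_rows, m_cols = 2, 2
rows = list(itertools.product((0, 1), repeat=m_rows))
cols = list(itertools.product((0, 1), repeat=m_cols))

zprime = {(j, t): random.randrange(P) for j in rows for t in cols}      # Z' = W.Y
bias = {j: random.randrange(P) for j in rows}                           # bias per row
z = {(j, t): (zprime[j, t] + bias[j]) % P for j in rows for t in cols}  # Z = Z' + b 1^T

q = [random.randrange(P) for _ in range(m_rows)]  # random row point
r = [random.randrange(P) for _ in range(m_cols)]  # random column point

lhs = mle({j + t: z[j, t] for j in rows for t in cols}, m_rows + m_cols, q + r)

rhs = 0
for j in rows:
    z_row = mle({t: zprime[j, t] for t in cols}, m_cols, r)   # Z'~(j, r)
    rhs = (rhs + chi(j, q) * ((z_row + bias[j]) % P)) % P     # I~(j,q)(Z'~(j,r)+B~(j))

assert lhs == rhs  # Eq. (8) holds
```

The identity works because the all-ones vector's multilinear extension is the constant 1, so the bias contributes $\tilde{B}_i(j)$ unchanged regardless of the column point $r_i$.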
Query Complexity of Clustering with
Side Information
Arya Mazumdar and Barna Saha
College of Information and Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
{arya,barna}@cs.umass.edu
Abstract
Suppose we are given a set of n elements to be clustered into k (unknown) clusters,
and an oracle/expert labeler that can interactively answer pair-wise queries of the
form, ?do two elements u and v belong to the same cluster??. The goal is to recover
the optimum clustering by asking the minimum number of queries. In this paper,
we provide a rigorous theoretical study of this basic problem of query complexity
of interactive clustering, and give strong information theoretic lower bounds, as
well as nearly matching upper bounds. Most clustering problems come with a
similarity matrix, which is used by an automated process to cluster similar points
together. However, obtaining an ideal similarity function is extremely challenging
due to ambiguity in data representation, poor data quality etc., and this is one of
the primary reasons that makes clustering hard. To improve accuracy of clustering,
a fruitful approach in recent years has been to ask a domain expert or crowd to
obtain labeled data interactively. Many heuristics have been proposed, and all of
these use a similarity function to come up with a querying strategy. Even so, there
is a lack systematic theoretical study. Our main contribution in this paper is to
show the dramatic power of side information aka similarity matrix on reducing
the query complexity of clustering. A similarity matrix represents noisy pair-wise
relationships such as one computed by some function on attributes of the elements.
A natural noisy model is where similarity values are drawn independently from
some arbitrary probability distribution f+ when the underlying pair of elements
belong to the same cluster, and from some f− otherwise. We show that given
such a similarity matrix, the query complexity reduces drastically from Θ(nk)
(no similarity matrix) to O(k² log n / H²(f+||f−)), where H² denotes the squared Hellinger
divergence. Moreover, this is also information-theoretically optimal within an O(log n)
factor. Our algorithms are all efficient, and parameter free, i.e., they work without
any knowledge of k, f+ and f−, and depend only logarithmically on n. Our
lower bounds could be of independent interest, and provide a general framework
for proving lower bounds for classification problems in the interactive setting.
Along the way, our work also reveals intriguing connection to popular community
detection models such as the stochastic block model and opens up many avenues
for interesting future research.
1 Introduction
Clustering is one of the most fundamental and popular methods for data classification. In this paper
we provide a rigorous theoretical study of clustering with the help of an oracle, a model that saw a
recent surge of popular heuristic algorithms.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Suppose we are given a set of n points, that need to be clustered into k clusters where k is unknown
to us. Suppose there is an oracle that either knows the true underlying clustering or can compute the
best clustering under some optimization constraints. We are allowed to query the oracle whether
any two points belong to the same cluster or not. How many such queries are needed to be asked at
minimum to perform the clustering exactly? The motivation to this problem lies at the heart of modern
machine learning applications where the goal is to facilitate more accurate learning from less data by
interactively asking for labeled data, e.g., active learning and crowdsourcing. Specifically, automated
clustering algorithms that rely just on a similarity matrix often return inaccurate results. Whereas,
obtaining few labeled data adaptively can help in significantly improving its accuracy. Coupled with
this observation, clustering with an oracle has generated tremendous interest in the last few years with
increasing number of heuristics developed for this purpose [22, 40, 13, 42, 43, 18, 39, 12, 21, 29].
The number of queries is a natural measure of "efficiency" here, as it directly relates to the amount of
labeled data or the cost of using crowd workers; however, theoretical guarantees on query complexity
are lacking in the literature.
On the theoretical side, query complexity or the decision tree complexity is a classical model of
computation that has been extensively studied for different problems [16, 4, 8]. For the clustering
problem, one can obtain an upper bound of O(nk) on the query complexity easily and it is achievable
even when k is unknown [40, 13]: to cluster an element at any stage of the algorithm, ask one query
per existing cluster with this element (this is sufficient due to transitivity), and start a new cluster if all
queries are negative. It turns out that Ω(nk) is also a lower bound, even for randomized algorithms
(see, e.g., [13]). In contrast, the heuristics developed in practice often ask significantly less queries
than nk. What could be a possible reason for this deviation between the theory and practice?
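The O(nk) baseline described above (one query per existing cluster, relying on transitivity) can be sketched as follows; the simulated oracle and ground-truth labels are illustrative.

```python
def query_cluster_no_side_info(elements, oracle):
    """Cluster n elements with at most one oracle query per (element, cluster) pair.

    `oracle(u, v)` answers True iff u and v belong to the same cluster.
    By transitivity, one query against a representative of each existing
    cluster suffices, so at most n*k queries are made in total.
    """
    clusters, queries = [], 0
    for u in elements:
        for c in clusters:
            queries += 1
            if oracle(u, c[0]):   # compare against the cluster's representative
                c.append(u)
                break
        else:                     # all queries negative: start a new cluster
            clusters.append([u])
    return clusters, queries

# Simulated oracle built from a hidden ground-truth labeling (illustrative).
truth = {0: 0, 1: 1, 2: 0, 3: 2, 4: 1, 5: 0}
oracle = lambda u, v: truth[u] == truth[v]
clusters, q = query_cluster_no_side_info(range(6), oracle)
print(clusters, q)  # recovers the 3 hidden clusters with at most 6*3 = 18 queries
```

Note the algorithm never needs to know k in advance, matching the observation in the text that the O(nk) bound is achievable even when k is unknown.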
Before delving into this question, let us look at a motivating application that drives this work.
A Motivating Application: Entity Resolution. Entity resolution (ER, also known as record linkage)
is a fundamental problem in data mining and has been studied since 1969 [17]. The goal of ER is to
identify and link/group different manifestations of the same real world object, e.g., different ways of
addressing (names, email address, Facebook accounts) the same person, Web pages with different
descriptions of the same business, different photos of the same object etc. (see the excellent survey
by Getoor and Machanavajjhala [20]). However, lack of an ideal similarity function to compare
objects makes ER an extremely challenging task. For example, DBLP, the popular computer science
bibliography dataset is filled with ER errors [30]. It is common for DBLP to merge publication records
of different persons if they share similar attributes (e.g. same name), or split the publication record of
a single person due to slight difference in representation (e.g. Marcus Weldon vs Marcus K. Weldon).
In recent years, a popular trend to improve ER accuracy has been to incorporate human wisdom. The
works of [42, 43, 40] (and many subsequent works) use a computer-generated similarity matrix to
come up with a collection of pair-wise questions that are asked interactively to a crowd. The goal is
to minimize the number of queries to the crowd while maximizing the accuracy. This is analogous
to our interactive clustering framework. But intriguingly, as shown by extensive experiments on
various real datasets, these heuristics use far fewer queries than nk [42, 43, 40], barring the Ω(nk)
theoretical lower bound. On a close scrutiny, we find that all of these heuristics use some computer
generated similarity matrix to guide in selecting the queries. Could these similarity matrices, aka side
information, be the reason behind the deviation and significant reduction in query complexity?
Let us call this clustering using side information, where the clustering algorithm has access to a
similarity matrix. This can be generated directly from the raw data (e.g., by applying Jaccard similarity
on the attributes), or using a crude classifier which is trained on a very small set of labelled samples.
Let us assume the following generative model of side information: a noisy weighted upper-triangular
similarity matrix W = {wi,j }, 1 ? i < j ? n, where wi,j is drawn from a probability distribution
f+ if i, j, i 6= j, belong to the same cluster, and else from f? . However, the algorithm designer is
given only the similarity matrix without any information on f+ and f? . In this work, one of our
major contributions is to show the separation in query complexity of clustering with and without such
side information. Indeed the recent works of [18, 33] analyze popular heuristic algorithms of [40, 43]
where the probability distributions are obtained from real datasets which show that these heuristics
are significantly suboptimal even for very simple distributions. To the best of our knowledge, before
this work, there existed no algorithm that works for arbitrary unknown distributions f+ and f− with
near-optimal performances. We develop a generic framework for proving information theoretic lower
bounds for interactive clustering using side information, and design efficient algorithms for arbitrary
2
f+ and f? that nearly match the lower bound. Moreover, our algorithms are parameter free, that is
they work without any knowledge of f+ , f? or k.
Connection to popular community detection models. The model of side information considered
in this paper is a direct and significant generalization of the planted partition model, also known as
the stochastic block model (SBM) [28, 15, 14, 2, 1, 25, 24, 11, 36]. The stochastic block model is
an extremely well-studied model of random graphs which is used for modeling communities in real
world, and is a special case of a similarity matrix we consider. In SBM, two vertices within the same
community share an edge with probability p, and two vertices in different communities share an edge
with probability q, that is f+ is Bernoulli(p) and f− is Bernoulli(q). It is often assumed that k, the
number of communities, is a constant (e.g. k = 2 is known as the planted bisection model and is
studied extensively [1, 36, 15] or a slowly growing function of n (e.g. k = o(log n)). The points are
assigned to clusters according to a probability distribution indicating the relative sizes of the clusters.
In contrast, not only in our model f+ and f− can be arbitrary probability mass functions (pmfs),
we do not have to make any assumption on k or the cluster size distribution, and can allow for any
partitioning of the set of elements (i.e., adversarial setting). Moreover, f+ and f− are unknown. For
SBM, parameter free algorithms are known relatively recently for constant number of linear sized
clusters [3, 24].
There are extensive literature that characterize the threshold phenomenon in SBM in terms of p and
q for exact and approximate recovery of clusters when relative cluster sizes are known and nearly
balanced (e.g., see [2] and therein for many references). For k = 2 and equal sized clusters, sharp
thresholds are derived in [1, 36] for a specific sparse region of p and q¹. In a more general setting, the
vertices in the ith and the jth communities are connected with probability qij and threshold results
for the sparse region has been derived in [2] - our model can be allowed to have this as a special case
when we have pmfs fi,j s denoting the distributions of the corresponding random variables. If an
oracle gives us some of the pairwise binary relations between elements (whether they belong to the
same cluster or not), the threshold of SBM must also change. But by what amount? This connection
to SBM could be of independent interest to study query complexity of interactive clustering with side
information, and our work opens up many possibilities for future direction.
Developing lower bounds in the interactive setting appears to be significantly challenging, as algorithms may choose to get any deterministic information adaptively by querying, and standard
lower bounding techniques based on Fano-type inequalities [9, 31] do not apply. One of our major
contributions in this paper is to provide a general framework for proving information-theoretic lower
bound for interactive clustering algorithms which holds even for randomized algorithms, and even
with the full knowledge of f+ , f? and k. In contrast, our algorithms are computationally efficient and
are parameter free (works without knowing f+, f− and k). The technique that we introduce for our
upper bounds could be useful for designing further parameter free algorithms which are extremely
important in practice.
Other Related works. The interactive framework of clustering model has been studied before where
the oracle is given the entire clustering and the oracle can answer whether a cluster needs to be split
or two clusters must be merged [7, 6]. Here we contain our attention to pair-wise queries, as in all
practical applications that motivate this work [42, 43, 22, 40]. In most cases, an expert human or
crowd serves as an oracle. Due to the scale of the data, it is often not possible for such an oracle to
answer queries on large number of input data. Only recently, some heuristic algorithms with k-wise
queries for small values of k but k > 2 have been proposed in [39], and a non-interactive algorithm
that selects random triangle queries have been analyzed in [41]. Also recently, the stochastic block
model with active label-queries have been studied in [19]. Perhaps conceptually closest to us is a
recent work by [5] where they consider pair-wise queries for clustering. However, their setting is very
different. They consider the specific NP-hard k-means objective with distance matrix which must be
a metric and must satisfy a deterministic separation property. Their lower bounds are computational
and not information theoretic; moreover their algorithm must know the parameters. There exists a
significant gap between their lower and upper bounds: log k vs k², and it would be interesting if
our techniques can be applied to improve this.
Here we have assumed the oracle always returns the correct answer. To deal with the possibility that
the crowdsourced oracle may give wrong answers, there are simple majority voting mechanisms or
more complicated techniques [39, 12, 21, 29, 10, 41] to handle such errors. Our main objective is to
¹ Most recent works consider the region of interest as p = (a log n)/n and q = (b log n)/n for some a > b > 0.
study the power of side information, and we do not consider the more complex scenarios of handling
erroneous oracle answers. The related problem of clustering with noisy queries is studied by us in a
companion work [34]. Most of the results of the two papers are available online in a more extensive
version [32].
Contributions. Formally the problem we study in this paper can be described as follows.
Problem 1 (Query-Cluster with an Oracle). Consider a set of elements V ≡ [n] with k latent
clusters V_i, i = 1, . . . , k, where k is unknown. There is an oracle O : V × V → {±1}, that when
queried with a pair of elements u, v ∈ V × V, returns +1 iff u and v belong to the same cluster,
and −1 iff u and v belong to different clusters. The queries Q ⊆ V × V can be done adaptively.
Consider the side information W = {w_{u,v} : 1 ≤ u < v ≤ n}, where the (u,v)th entry of W, w_{u,v}, is
a random variable drawn from a discrete probability distribution f+ if u, v belong to the same cluster,
and is drawn from a discrete² probability distribution f−³ if u, v belong to different clusters. The
parameters k, f+ and f− are unknown. Given V and W, find Q ⊆ V × V such that |Q| is minimum,
and from the oracle answers and W it is possible to recover V_i, i = 1, 2, ..., k.
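A toy instance of the side-information matrix W in Problem 1 can be generated as follows; the particular support and pmfs chosen for f+ and f−, and the latent clustering, are illustrative assumptions.

```python
import random

random.seed(1)

def sample(pmf):
    # Draw one value from a pmf given as {value: probability}.
    x, acc = random.random(), 0.0
    for v, p in pmf.items():
        acc += p
        if x < acc:
            return v
    return v  # guard against floating-point round-off

# Hypothetical pmfs on a support of size 3: f+ skews high, f- skews low.
f_plus = {0: 0.1, 1: 0.3, 2: 0.6}
f_minus = {0: 0.6, 1: 0.3, 2: 0.1}

n = 6
cluster_of = {0: 0, 1: 1, 2: 0, 3: 2, 4: 1, 5: 0}  # latent clusters V_1, ..., V_k

# Upper-triangular noisy similarity matrix: w_{u,v} ~ f+ when u, v share a
# cluster and w_{u,v} ~ f- otherwise, exactly as in Problem 1.
W = {(u, v): sample(f_plus if cluster_of[u] == cluster_of[v] else f_minus)
     for u in range(n) for v in range(u + 1, n)}
print(len(W))  # n*(n-1)/2 entries
```

An algorithm for Problem 1 receives only V and W; the ground-truth labels `cluster_of`, as well as f+ and f−, stay hidden and are accessible only through oracle queries.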
Without side information, as noted earlier, it is easy to see an algorithm with query complexity O(nk)
for Query-Cluster. When no side information is available, it is also not difficult to have a lower
bound of ?(nk) on the query complexity. Our main contributions are to develop strong information
theoretic lower bounds as well as nearly matching upper bounds when side information is available,
and characterize the effect of side information on query complexity precisely.
Upper Bound (Algorithms). We show that with side information W , a drastic reduction in query
complexity of clustering is possible, even with unknown parameters f+, f−, and k. We propose a
Monte Carlo randomized algorithm that reduces the number of queries from O(nk) to O(k² log n / H²(f+||f−)),
where H(f||g) is the Hellinger divergence between the probability distributions f and g, and recovers
the clusters accurately with high probability (with success probability 1 − 1/n) without knowing f+,
f− or k (see Theorem 1). Depending on the value of k, this could be highly sublinear in n. Note that
the squared Hellinger divergence between two pmfs f and g is defined to be

$$H^2(f\|g) = \frac{1}{2}\sum_i \left(\sqrt{f(i)} - \sqrt{g(i)}\right)^2.$$
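The squared Hellinger divergence is straightforward to compute; for instance, for SBM-style side information where f+ = Bernoulli(p) and f− = Bernoulli(q) (the values of p and q below are illustrative):

```python
from math import sqrt

def squared_hellinger(f, g):
    # H^2(f||g) = (1/2) * sum_i (sqrt(f(i)) - sqrt(g(i)))^2, for pmfs f, g
    # given as dicts over a common (finite) support.
    support = set(f) | set(g)
    return 0.5 * sum((sqrt(f.get(i, 0.0)) - sqrt(g.get(i, 0.0))) ** 2
                     for i in support)

# SBM-style side information: f+ = Bernoulli(p), f- = Bernoulli(q).
p, q = 0.7, 0.2  # illustrative values
f_plus = {1: p, 0: 1 - p}
f_minus = {1: q, 0: 1 - q}
h2 = squared_hellinger(f_plus, f_minus)

assert squared_hellinger(f_plus, f_plus) == 0.0  # identical pmfs give 0
assert 0.0 < h2 < 1.0                            # H^2 always lies in [0, 1]
print(h2)
```

The larger H²(f+||f−), the more the side information separates intra- and inter-cluster pairs, which is exactly why it appears in the denominator of the query-complexity bound.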
We also develop a Las Vegas algorithm, that is one which recovers the clusters with probability 1 (and
not just with high probability), with query complexity O(n log n + k² log n / H²(f+||f−)). Since f+ and f−
can be arbitrary, not knowing the distributions provides a major challenge, and we believe, our recipe
could be fruitful for designing further parameter-free algorithms. We note that all our algorithms are
computationally efficient - in fact, the time required is bounded by the size of the side information
matrix, i.e., O(n²).
Theorem 1. Let the number of clusters k be unknown and f+ and f− be unknown discrete distributions with fixed cardinality of support. There exists an efficient (polynomial-time) Monte Carlo
algorithm for Query-Cluster that has query complexity O(min(nk, k² log n / H²(f+||f−))) and recovers all
the clusters accurately with probability 1 − o(1/n). Moreover there exists an efficient Las Vegas
algorithm that with probability 1 − o(1/n) has query complexity O(n log n + min(nk, k² log n / H²(f+||f−))).
Lower Bound. Our main lower bound result is information theoretic, and can be summarized in the
following theorem. Note especially that, for the lower bound we can assume the knowledge of k, f+, f−,
in contrast to upper bounds, which makes the results stronger. In addition, f+ and f− can be discrete
or continuous distributions. Note that when H²(f+||f−) is close to 1, e.g., when the side information
is perfect, no queries are required. However, that is not the case in practice, and we are interested in
the region where f+ and f− are "close", that is H²(f+||f−) is small.

Theorem 2. Assume H²(f+||f−) ≤ 1/18. Any (possibly randomized) algorithm with the knowledge
of f+, f−, and the number of clusters k, that does not perform Ω(min{nk, k²/H²(f+||f−)}) expected
number of queries, will be unable to return the correct clustering with probability at least 1/6 −
O(1/√k). And to recover the clusters with probability 1, the number of queries must be Ω(n +
min{nk, k²/H²(f+||f−)}).

² Our lower bound holds for continuous distributions as well.
³ For simplicity of expression, we treat the sample space to be of constant size. However, all our results
extend to any finite sample space scaling linearly with its size.
The lower bound therefore matches the query complexity upper bound within a logarithmic factor.
Note that when no querying is allowed, this turns out to be exactly the setting of the stochastic block
model, though with much more general distributions. We have analyzed this case in Appendix C. To see how
the probability of error must scale, we have used a generalized version of Fano's inequality (e.g., [23]).
However, when the number of queries is greater than zero, and moreover when queries can be adaptive, any
such standard technique fails. Hence, significant effort has to be put forth to construct a setting where
information theoretic minimax bounds can be applied. This lower bound could be of independent
interest, and provides a general framework for deriving lower bounds for fundamental problems of
classification, hypothesis testing, distribution testing, etc., in the interactive learning setting. It
may also lead to new lower bound proving techniques in the related multi-round communication
complexity model, where information again gets revealed adaptively.
Organization. The proof of the lower bound is provided in Section 2. The Monte Carlo algorithm is
given in Section 3. The detailed proof of the Monte Carlo algorithm, as well as the Las Vegas algorithm
and its proof, are given in Appendix A and Appendix B respectively in the supplementary material, due to
space constraints.
2 Lower Bound (Proof of Theorem 2)
In this section, we develop our information theoretic lower bounds. We prove a more general result
from which Theorem 2 follows.

Lemma 1. Consider the case when we have k equally sized clusters of size a each (that is, the total
number of elements is n = ka). Suppose we are allowed to make at most Q adaptive queries to the
oracle. The probability of error for any algorithm for Query-Cluster is at least

    1 - (2/k) (1 + sqrt(4Q/(ak)))^2 - 4Q/(ak(k-1)) - 2 sqrt(a) H(f_+||f_-).
The main high-level technique to prove Lemma 1 is the following. Suppose a node is to be assigned
to a cluster. This situation is akin to a k-ary hypothesis testing problem, and we want to use
a lower bound on the probability of error. The side information and the query answers constitute a
random vector whose distributions (among the k possible ones) must be far apart for us to successfully
identify the clustering. But the main challenge comes from the interactive nature of the algorithm,
since it reveals deterministic information, and from characterizing the set of elements that are not
queried much by the algorithm.
Proof of Lemma 1. Since the total number of queries is Q, the average number of queries per element
is at most 2Q/(ak). Therefore there exist at least ak/2 elements that are queried at most T = 4Q/(ak) times. Let x
be one such element. We just consider the problem of assignment of x to a cluster (all other elements
have been correctly assigned already), and show that any algorithm will make a wrong assignment with
positive probability.
Step 1: Setting up the hypotheses. Note that the side information matrix W = (w_{i,j}) is provided,
where the w_{i,j}'s are independent random variables. Now assume the scenario where we use an
algorithm ALG to assign x to one of the k clusters V_u, u = 1, ..., k. Therefore, given x, ALG takes
as input the random variables w_{i,x}, i in union_t V_t, makes some queries involving x, and outputs
a cluster index, which is an assignment for x. Based on the observations w_{i,x}, the task of ALG
is thus a multi-hypothesis test among k hypotheses. Let H_u, u = 1, ..., k denote the k different
hypotheses H_u : x in V_u, and let P_u, u = 1, ..., k denote the joint probability distributions of the
random matrix W when x in V_u. In short, for any event A, P_u(A) = Pr(A | H_u). Going forward, the
subscript of probabilities or expectations will denote the appropriate conditional distribution.
Step 2: Finding "weak" clusters. There must exist t in {1, ..., k} such that

    sum_{v=1}^{k} P_t{a query made by ALG involving cluster V_v} <= E_t{number of queries made by ALG} <= T.
We now find a subset of clusters that are "weak", i.e., not queried enough if H_t were true. Consider
the set J' = {v in {1, ..., k} : P_t{a query made by ALG involving cluster V_v} < 2T/(k(1-delta))}, where
delta = 1/(1 + sqrt(4Q/(ak))). We must have (k - |J'|) * 2T/(k(1-delta)) <= T, which implies |J'| >= (1+delta)k/2.

Now, to output a cluster without using the side information, ALG has to either make a query to the
actual cluster the element is from, or query at least k-1 times. In any other case, ALG must use
the side information (in addition to using queries) to output a cluster. Let E^u denote the event that
ALG outputs cluster V_u by using the side information. Let J'' = {u in {1, ..., k} : P_t(E^u) <= 2/(delta k)}.
Since sum_{u=1}^{k} P_t(E^u) <= 1, we must have (k - |J''|) * 2/(delta k) < 1, or |J''| > k - delta k/2 = (2-delta)k/2.

We have |J' intersect J''| > (1+delta)k/2 + (2-delta)k/2 - k = k/2. This means {V_u : u in J' intersect J''} contains more
than ak/2 elements. Since there are ak/2 elements that are queried at most T times, these two sets must
have nonzero intersection. Hence, we can assume that x in V_l for some l in J' intersect J'', i.e., let H_l be
the true hypothesis. Now we characterize the error events of the algorithm ALG in the assignment of x.
Step 3: Characterizing error events for x. We now consider the following two events: E1 =
{a query made by ALG involving cluster V_l}; E2 = {k-1 or more queries were made by ALG}.
Note that if the algorithm ALG can correctly assign x to a cluster without using the side information,
then either E1 or E2 must happen. Recall that E^l denotes the event that ALG outputs cluster V_l
using the side information. Now consider the event E = E^l union E1 union E2. The probability of correct
assignment is at most P_l(E). We now bound this probability of correct recovery from above.
Step 4: Bounding the probability of correct recovery via Hellinger distance. We have

    P_l(E) <= P_t(E) + |P_l(E) - P_t(E)| <= P_t(E) + ||P_l - P_t||_TV <= P_t(E) + sqrt(2) H(P_l || P_t),

where ||P - Q||_TV = sup_A |P(A) - Q(A)| denotes the total variation distance between two
probability distributions P and Q, and in the last step we have used the relationship between total
variation distance and the Hellinger divergence (see, for example, [38, Eq. (3)]). Now recall that
P_l and P_t are the joint distributions of the independent random variables w_{i,x}, i in union_u V_u. We
use the fact that the squared Hellinger divergence between product distributions of independent random
variables is at most the sum of the squared Hellinger divergences between the individual distributions.
We also note that the divergence between identical random variables is 0. We obtain

    sqrt(2 H^2(P_l || P_t)) <= sqrt(2 * 2a H^2(f_+||f_-)) = 2 sqrt(a) H(f_+||f_-).

This is true because the only times when w_{i,x} differs under P_t and under P_l is when x in V_t or
x in V_l. As a result we have P_l(E) <= P_t(E) + 2 sqrt(a) H(f_+||f_-). Now, using Markov's inequality,
P_t(E2) <= T/(k-1) = 4Q/(ak(k-1)). Therefore,

    P_t(E) <= P_t(E^l) + P_t(E1) + P_t(E2) <= 2/(delta k) + 8Q/(a k^2 (1-delta)) + 4Q/(ak(k-1)).

Therefore, putting in the value of delta, we get P_l(E) <= (2/k)(1 + sqrt(4Q/(ak)))^2 + 4Q/(ak(k-1)) + 2 sqrt(a) H(f_+||f_-),
which proves the lemma.
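Two distribution-distance facts drive this step: total variation is at most sqrt(2) times the Hellinger distance, and squared Hellinger divergence is subadditive over products of independent coordinates. Both can be checked numerically; a small Python sketch (ours, with illustrative distributions, not from the paper):

```python
import itertools
import math

def hellinger_sq(p, q):
    # squared Hellinger divergence between two pmfs on the same support
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def tv(p, q):
    # total variation distance between two pmfs
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def product_pmf(p, q):
    # joint pmf of two independent coordinates, flattened into one vector
    return [a * b for a, b in itertools.product(p, q)]
```

Subadditivity follows from the identity H^2 = 1 - BC (Bhattacharyya coefficient) together with BC multiplying over independent coordinates, so 1 - BC^2 <= 2(1 - BC).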
Proof of Theorem 2. Consider two cases. In the first case, suppose nk < k^2/(9 H^2(f_+||f_-)). Now consider
the situation of Lemma 1 with a = n/k. The probability of error of any algorithm must be at least

    1 - (2/k)(1 + sqrt(4Q/(ak)))^2 - 4Q/(ak(k-1)) - 2/3 >= 1/6 - O(1/sqrt(k)),

if the number of queries Q <= nk/72.

In the second case, suppose nk >= k^2/(9 H^2(f_+||f_-)). Assume a = floor(1/(9 H^2(f_+||f_-))). Then a >= 2, since
H^2(f_+||f_-) <= 1/18. We have nk >= k^2 a. Consider the situation when we are already given a complete
cluster V_k with n - (k-1)a elements, the remaining k-1 clusters each have 1 element, and the rest
(a-1)(k-1) elements are evenly distributed (but yet to be assigned) to the k-1 clusters. Now we
are exactly in the situation of Lemma 1 with k-1 playing the role of k. If we have Q < ak^2/72, the
probability of error is at least 1 - o_k(1) - 1/6 - 2/3 = 1/6 - O(1/sqrt(k)). Therefore Q must be Omega(k^2/H^2(f_+||f_-)).
Note that in this proof we have not tried to optimize the constants.

If we want to recover the clusters with probability 1, then Omega(n) is a trivial lower bound. Hence,
coupled with the above, we get a lower bound of Omega(n + min{nk, k^2/H^2(f_+||f_-)}) in that case.
3 Algorithms

We propose two algorithms (Monte Carlo and Las Vegas), both of which are completely parameter
free, that is, they work without any knowledge of k, f_+ and f_-, and meet the respective lower bounds
within an O(log n) factor. Here we present the Monte Carlo algorithm, which drastically reduces
the number of queries from O(nk) (no side information) to O(k^2 log n / H^2(f_+||f_-)) and recovers the clusters
exactly with probability at least 1 - o_n(1). The detailed proof of it, as well as the Las Vegas algorithm,
are presented in Appendix A and Appendix B respectively in the supplementary material.
Our algorithm uses a subroutine called Membership that takes as input an element v in V and
a subset of elements C contained in V \ {v}. Assume that f_+, f_- are discrete distributions over a fixed set
of q points a_1, a_2, ..., a_q; that is, w_{i,j} takes value in the set {a_1, a_2, ..., a_q}. Define the empirical
"inter" distribution p_{v,C}: for i = 1, ..., q, p_{v,C}(i) = |{u in C : w_{u,v} = a_i}| / |C|. Also compute the "intra"
distribution p_C: for i = 1, ..., q, p_C(i) = |{(u,v) in C x C : u != v, w_{u,v} = a_i}| / (|C|(|C|-1)). Then we use
Membership(v, C) = -H^2(p_{v,C} || p_C) as the affinity of vertex v to C, where H(p_{v,C} || p_C) denotes the Hellinger divergence
between the distributions. Note that since the membership is always negative, a higher membership
implies that the "inter" and "intra" distributions are closer in terms of Hellinger distance.
Designing a parameter free Monte Carlo algorithm seems to be highly challenging, as here the
number of queries depends only logarithmically on n. Intuitively, if an element v has the highest
membership in some cluster C, then v should be queried with C first. Also, an estimation from side
information is reliable only when the cluster already has enough members. Unfortunately, we know neither
whether the current cluster size is reliable, nor are we allowed to make even one query per element.
To overcome this bottleneck, we propose an iterative-update algorithm which we believe will find
more uses in developing parameter free algorithms. We start by querying a few points so that there is
at least one cluster with Theta(log n) points. Now, based on these queried memberships, we learn two
empirical distributions: p^1_+ from intra-cluster similarity values, and p^1_- from inter-cluster similarity
values. Given an element v which has not been clustered yet, and a cluster C with the highest number
of current members, we would like to consider the submatrix of side information pertaining to v
and all u in C and determine whether that side information is generated from f_+ or f_-. We know that
if the statistical distance between f_+ and f_- is small, then we need more members in C to
carry out this test successfully. Since we do not know f_+ and f_-, we compute the squared Hellinger
divergence between p^1_+ and p^1_-, and use that to compute a threshold tau_1 on the size of C. If C crosses
this size threshold, we just use the side information to determine whether v should belong to C. Otherwise,
we query further until there is one cluster with size tau_1, and re-estimate the empirical distributions p^2_+
and p^2_-. Again, we recompute a threshold tau_2, and stop if the cluster under consideration crosses this
new threshold. If not, we continue. Interestingly, we can show that when the process converges, we have a
very good estimate of H(f_+||f_-); moreover, it converges fast.
Algorithm. Phase 1: Initialization. We initialize the algorithm by selecting any element v and
creating a singleton cluster {v}. We then keep selecting new, not-yet-clustered elements uniformly at
random, and query the oracle with each by choosing exactly one element from each
of the clusters formed so far. If the oracle returns +1 to any of these queries, then we include the
element in the corresponding cluster; else we create a new singleton cluster with it. We continue this
process until one cluster has grown to a size of ceil(C log n), where C is a constant.
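Phase 1 can be sketched as follows; the function and parameter names are hypothetical, and the oracle is modeled as a perfect same-cluster predicate:

```python
import math
import random

def phase1(elements, oracle, C_const=2):
    """Phase 1 sketch: query random new elements against one representative of
    each existing cluster until some cluster reaches ceil(C * log n) members.

    oracle(u, v) returns True iff u and v belong to the same true cluster."""
    n = len(elements)
    target = math.ceil(C_const * math.log(n))
    pool = list(elements)
    random.shuffle(pool)
    clusters = [[pool.pop()]]
    while pool and max(len(c) for c in clusters) < target:
        v = pool.pop()
        for c in clusters:
            if oracle(c[0], v):      # one query per existing cluster
                c.append(v)
                break
        else:
            clusters.append([v])     # all answers were "no": new singleton
    return clusters
```

With a perfect oracle every formed cluster is pure; the point of the later phases is to avoid paying this per-element query cost once the side information becomes usable.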
Phase 2: Iterative Update. Let C_1, C_2, ..., C_{l_x} be the set of clusters formed after the x-th iteration, for
some l_x <= k, where we consider Phase 1 as the 0-th iteration. We estimate

    p_{+,x}(a_i) = |{u, v in C_j, j in [1, l_x] : u != v, w_{u,v} = a_i}| / sum_{j=1}^{l_x} |C_j|(|C_j| - 1),
    p_{-,x}(a_i) = |{u in C_j, v in C_{j'}, j < j', j, j' in [1, l_x] : w_{u,v} = a_i}| / sum_{j < j'} |C_j||C_{j'}|.

Define M_x^E = C log n / H(p_{+,x} || p_{-,x})^2. If there is no cluster of size at least M_x^E formed so far, we select a new
element yet to be clustered and query it exactly once with the existing clusters (that is, by selecting
one arbitrary point from every cluster and querying the oracle with it and the new element), and
include it in an existing cluster or create a new cluster with it based on the query answers. We then set
x = x + 1 and move to the next iteration to get updated estimates of p_{+,x}, p_{-,x}, M_x^E and l_x.
Else, if there is a cluster of size at least M_x^E, we stop and move to the next phase.
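The Phase 2 estimates p_{+,x}, p_{-,x} and the resulting size threshold M_x^E can be computed as below; this is an illustrative sketch with our own function names (and it assumes at least two clusters, each with at least two members), not the paper's implementation:

```python
import math
from collections import Counter

def phase2_threshold(clusters, w, symbols, C_const=2, n=None):
    """Estimate the intra- and inter-cluster empirical distributions from the
    already-queried clusters, and return M_E = C * log n / H(p_+ || p_-)^2."""
    n = n or sum(len(c) for c in clusters)
    intra, inter = Counter(), Counter()
    for c in clusters:                       # intra-cluster similarity values
        for i, u in enumerate(c):
            for v in c[i + 1:]:
                intra[w[u][v]] += 1
    for a in range(len(clusters)):           # inter-cluster similarity values
        for b in range(a + 1, len(clusters)):
            for u in clusters[a]:
                for v in clusters[b]:
                    inter[w[u][v]] += 1
    ni, nj = sum(intra.values()), sum(inter.values())
    h2 = 0.5 * sum(
        (math.sqrt(intra[s] / ni) - math.sqrt(inter[s] / nj)) ** 2 for s in symbols
    )
    return C_const * math.log(n) / h2  # grows as the two distributions get closer
```

The threshold encodes the core trade-off: the harder f_+ and f_- are to distinguish, the larger a cluster must be before side information alone is trusted.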
Phase 3: Processing the grown clusters. Once Phase 2 has converged, let p_+, p_-, H(p_+ || p_-), M^E
and l be the final estimates. For every cluster C of size |C| >= M^E, call it grown, and do the following.

(3A.) For every unclustered element v, if

    Membership(v, C) >= -( H(p_+||p_-)^2 / 2 - 4 H(p_+||p_-) / sqrt(C log n) ),

then we include v in C without querying.

(3B.) We create a new list Waiting(C), initially empty. If

    -( H(p_+||p_-)^2 / 2 - 4 H(p_+||p_-) / sqrt(C log n) ) > Membership(v, C) >= -( H(p_+||p_-)^2 / 2 + 4 H(p_+||p_-) / sqrt(C log n) ),

then we include v in Waiting(C). For every
element in Waiting(C), we query the oracle with it by choosing exactly one element from each of the
clusters formed so far, starting with C. If the oracle returns "yes" to any of these queries, then we
include the element in that cluster; else we create a new singleton cluster with it. We continue this
until Waiting(C) is exhausted.

We then call C completely grown, remove it from further consideration, and move to the next grown
cluster. If there is no other grown cluster, then we move back to Phase 2.
Analysis. The main steps of the analysis are as follows (for the full analysis see Appendix A).

1. First, Lemma 3 shows that with high probability H(p_+||p_-) lies in [H(f_+||f_-) - 4H(p_+||p_-)/sqrt(B log n), H(f_+||f_-) + 4H(p_+||p_-)/sqrt(B log n)] for a
suitable constant B that depends on C. Using it, we can show that the process converges whenever
a cluster has grown to a size of 4C log n / H^2(f_+||f_-). The proof relies on adapting Sanov's Theorem
(see Lemma 2) from information theory. We measure the distance between distributions via the
Hellinger distance, as opposed to the KL divergence (which would have been a natural choice because
of its presence in the rate function of Sanov's theorem), because the Hellinger distance is a metric,
which proves to be crucial in our analysis.

2. Lemma 5 and Corollary 1 show that, with high probability, every element that is included in C in Phase (3A) truly
belongs to C, and elements that are not in Waiting(C) cannot be in C. Once
Phase 2 has converged, if the condition of (3A) is satisfied, the element must belong to C. There
is a small gray region of the confidence interval (3B) such that if an element falls there, we cannot
be sure either way; but if an element satisfies neither (3A) nor (3B), it cannot be part of C.

3. Lemma 6 shows that the size of Waiting(C) is constant, via an anti-concentration property. This,
coupled with the fact that the process converges when a cluster reaches size 4C log n / H^2(f_+||f_-), gives the
desired query complexity bound in Lemma 7.
4 Experimental Results
In this section, we report experimental results on a popular bibliographic dataset, cora [35], consisting
of 1879 nodes, 191 clusters and 1699612 edges, out of which 62891 are intra-cluster edges. We
remove any singleton node from the dataset; the final number of vertices that we classify is 1812,
with 124 clusters. We use the similarity function computation of [18] to compute f_+ and f_-.
The two distributions are shown in Figure 1 on the left. The Hellinger square divergence between the
two distributions is 0.6. In order to observe the dependency of the algorithm's performance on the learnt
distributions, we perturb the exact distributions to obtain two approximate distributions, as shown
in Figure 1 (middle), with Hellinger square divergence 0.4587. We consider three strategies.
Suppose the cluster in which a node v must be included has already been initialized and exists in the
current solution. Moreover, suppose the algorithm decides to use queries to find the membership of v.
Then, in the best strategy, only one query is needed to identify the cluster to which v belongs. In the
worst strategy, the algorithm finds the correct cluster after querying all the existing clusters whose
current membership is not enough to take a decision using side information. In the greedy strategy,
the algorithm queries the clusters in non-decreasing order of the Hellinger square divergence between f_+
(or an approximate version of it) and the distribution estimated from the side information between v and
each existing cluster. Note that, in practice, we would follow the greedy strategy. Figure 2 shows
the performance of each strategy. We plot the number of queries vs. the F1 score, which is the
harmonic mean of precision and recall. We observe that the performance of the greedy strategy is very
close to that of the best. With just 1136 queries, greedy achieves 80% precision and close to 90%
recall. The best strategy would need 962 queries to achieve that performance. The performance of
our algorithm on the exact and approximate distributions is also very close, which indicates that it is
enough to learn a distribution that is close to the exact one. For example, using the approximate distributions,
to achieve similar precision and recall, the greedy strategy uses just 1148 queries, that is, 12 queries
more than when the distributions are known.

Figure 1: (left) Exact distributions of similarity values; (middle) approximate distributions of similarity values; (right) number of queries vs. F1 score for both distributions.
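For reference, precision, recall, and F1 for a clustering are often computed over element pairs; the paper does not spell out the exact variant used in its experiments, so the following Python sketch is one common convention, with hypothetical names:

```python
from itertools import combinations

def pairwise_f1(pred, truth):
    """Pairwise precision/recall/F1 for a clustering.

    pred and truth map each element to a cluster id; a pair counts as
    positive when both elements share a cluster id."""
    def same_pairs(labels):
        return {(u, v) for u, v in combinations(sorted(labels), 2)
                if labels[u] == labels[v]}
    p, t = same_pairs(pred), same_pairs(truth)
    precision = len(p & t) / len(p) if p else 1.0
    recall = len(p & t) / len(t) if t else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Under this convention, over-merging clusters hurts precision while over-splitting hurts recall, matching the trade-off the query strategies navigate.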
Figure 2: Number of Queries vs F1 Score using three strategies: best, greedy, worst.
Discussion. This is the first rigorous theoretical study of interactive clustering with side information,
and it unveils many interesting directions for future study, of both theoretical and practical significance
(see Appendix D for more details). Allowing arbitrary f_+, f_- is a generalization of the SBM. It also raises
an important question about how the SBM recovery threshold changes with queries. For the sparse region of
the SBM, where f_+ is Bernoulli(a' log n / n) and f_- is Bernoulli(b' log n / n), a' > b', Lemma 1 is not
yet tight. However, it shows the following trend. Let us set a = n/k in Lemma 1 with the above f_+, f_-.
We conjecture, ignoring the lower order terms and a sqrt(log n) factor, that with Q queries the sharp
recovery threshold of sparse SBM changes from

    sqrt(a') - sqrt(b') >= sqrt(k)    to    sqrt(a') - sqrt(b') >= sqrt(k) * sqrt(1 - Q/(nk)).

Proving this bound remains an exciting open question.

We propose two computationally efficient algorithms that match the query complexity lower bound
within a log n factor and are completely parameter free. In particular, our iterative-update method for
designing the Monte Carlo algorithm provides a general recipe for developing parameter-free algorithms,
which are of extreme practical importance. The convergence result is established by extending
Sanov's theorem from large deviation theory, which gives a bound only in terms of the KL-divergence.
Due to the generality of the distributions, the only tool we could use is Sanov's theorem. However,
the Hellinger distance turns out to be the right measure for both the lower and upper bounds. If f_+ and
f_- are common distributions like Gaussian, Bernoulli, etc., then concentration results stronger
than Sanov's may be applied to improve the constants and a logarithmic factor, showing the trade-off
between queries and thresholds as in the sparse SBM. While some of our results apply to general f_{i,j}'s,
a full picture with arbitrary f_{i,j}'s and closing the log n gap between the lower and upper bounds
remain important future directions.
Acknowledgement. This work is supported in part by NSF awards CCF 1642658, CCF 1642550,
CCF 1464310, CCF 1652303, a Yahoo ACE Award and a Google Faculty Research Award. We are
particularly thankful to an anonymous reviewer whose comments led to notable improvement of the
presentation of the paper.
References
[1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE Trans. Information Theory, 62(1):471-487, 2016.
[2] E. Abbe and C. Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 670-688, 2015.
[3] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. In Advances in Neural Information Processing Systems, pages 676-684, 2015.
[4] M. Ajtai, J. Komlos, W. L. Steiger, and E. Szemerédi. Deterministic selection in O(log log n) parallel time. In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, pages 188-195. ACM, 1986.
[5] H. Ashtiani, S. Kushagra, and S. Ben-David. Clustering with same-cluster queries. NIPS, 2016.
[6] P. Awasthi, M.-F. Balcan, and K. Voevodski. Local algorithms for interactive clustering. In ICML, pages 550-558, 2014.
[7] M.-F. Balcan and A. Blum. Clustering with interactive feedback. In International Conference on Algorithmic Learning Theory, pages 316-328. Springer, 2008.
[8] B. Bollobás and G. Brightwell. Parallel selection with high probability. SIAM Journal on Discrete Mathematics, 3(1):21-31, 1990.
[9] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. In COLT, pages 35.1, 2012.
[10] Y. Chen, G. Kamath, C. Suh, and D. Tse. Community recovery in graphs with locality. In Proceedings of The 33rd International Conference on Machine Learning, pages 689-698, 2016.
[11] P. Chin, A. Rao, and V. Vu. Stochastic block model and community detection in sparse graphs: A spectral algorithm with optimal rate of recovery. arXiv preprint arXiv:1501.05021, 2015.
[12] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In WWW, pages 285-294, 2013.
[13] S. B. Davidson, S. Khanna, T. Milo, and S. Roy. Top-k and clustering with noisy comparisons. ACM Trans. Database Syst., 39(4):35:1-35:39, 2014.
[14] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E, 84(6):066106, 2011.
[15] M. E. Dyer and A. M. Frieze. The solution of some random NP-hard problems in polynomial expected time. Journal of Algorithms, 10(4):451-489, 1989.
[16] U. Feige, P. Raghavan, D. Peleg, and E. Upfal. Computing with noisy information. SIAM Journal on Computing, 23(5):1001-1018, 1994.
[17] I. P. Fellegi and A. B. Sunter. A theory for record linkage. Journal of the American Statistical Association, 64(328):1183-1210, 1969.
[18] D. Firmani, B. Saha, and D. Srivastava. Online entity resolution using an oracle. PVLDB, 9(5):384-395, 2016.
[19] A. Gadde, E. E. Gad, S. Avestimehr, and A. Ortega. Active learning for community detection in stochastic block models. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 1889-1893. IEEE, 2016.
[20] L. Getoor and A. Machanavajjhala. Entity resolution: theory, practice & open challenges. PVLDB, 5(12):2018-2019, 2012.
[21] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In EC, pages 167-176, 2011.
[22] C. Gokhale, S. Das, A. Doan, J. F. Naughton, N. Rampalli, J. Shavlik, and X. Zhu. Corleone: Hands-off crowdsourcing for entity matching. In SIGMOD Conference, pages 601-612, 2014.
[23] A. Guntuboyina. Lower bounds for the minimax risk using f-divergences, and applications. IEEE Transactions on Information Theory, 57(4):2386-2399, 2011.
[24] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming. IEEE Transactions on Information Theory, 62(5):2788-2797, 2016.
[25] B. E. Hajek, Y. Wu, and J. Xu. Computational lower bounds for community detection on random graphs. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 899-928, 2015.
[26] T. S. Han and S. Verdu. Generalizing the Fano inequality. IEEE Transactions on Information Theory, 40(4):1247-1251, 1994.
[27] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13-30, 1963.
[28] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[29] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In NIPS, pages 1953-1961, 2011.
[30] H. Köpcke, A. Thor, and E. Rahm. Evaluation of entity resolution approaches on real-world match problems. Proceedings of the VLDB Endowment, 3(1-2):484-493, 2010.
[31] S. H. Lim, Y. Chen, and H. Xu. Clustering from labels and time-varying graphs. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1188-1196. Curran Associates, Inc., 2014.
[32] A. Mazumdar and B. Saha. Clustering via crowdsourcing. arXiv preprint arXiv:1604.01839, 2016.
[33] A. Mazumdar and B. Saha. A theoretical analysis of first heuristics of crowdsourced entity resolution. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017.
[34] A. Mazumdar and B. Saha. Clustering with noisy queries. In Advances in Neural Information Processing Systems (NIPS) 31, 2017.
[35] A. McCallum, 2004. http://www.cs.umass.edu/~mcallum/data/cora-refs.tar.gz.
[36] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages 69-75. ACM, 2015.
[37] Y. Polyanskiy and S. Verdú. Arimoto channel coding converse and Rényi divergence. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 1327-1333. IEEE, 2010.
[38] I. Sason and S. Verdú. f-divergence inequalities. IEEE Transactions on Information Theory, 62(11):5973-6006, 2016.
[39] V. Verroios and H. Garcia-Molina. Entity resolution with crowd errors. In 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015, pages 219-230, 2015.
[40] N. Vesdapunt, K. Bellare, and N. Dalvi. Crowdsourcing algorithms for entity resolution. PVLDB, 7(12):1071-1082, 2014.
[41] R. K. Vinayak and B. Hassibi. Crowdsourced clustering: Querying edges vs triangles. In Advances in Neural Information Processing Systems, pages 1316-1324, 2016.
[42] J. Wang, T. Kraska, M. J. Franklin, and J. Feng. CrowdER: Crowdsourcing entity resolution. PVLDB, 5(11):1483-1494, 2012.
[43] J. Wang, G. Li, T. Kraska, M. J. Franklin, and J. Feng. Leveraging transitive relations for crowdsourced joins. In SIGMOD Conference, pages 229-240, 2013.
lacking:1 sublinear:1 interesting:3 querying:8 h2:14 foundation:1 upfal:1 degree:1 sufficient:1 doan:1 exciting:1 editor:1 playing:1 share:3 endowment:1 supported:1 last:2 free:11 jth:1 dis:1 drastically:2 side:34 guide:1 allow:1 vv:2 shavlik:1 characterizing:2 sparse:6 distributed:1 overcome:1 feedback:1 world:3 computes:1 forward:1 collection:1 adaptive:2 made:5 far:5 ec:1 social:1 transaction:4 welling:1 approximate:6 keep:1 thor:1 active:3 reveals:2 decides:1 assumed:2 davidson:1 continuous:2 latent:1 iterative:4 suh:1 channel:1 nature:1 learn:2 delving:1 ca:2 ignoring:1 obtaining:2 improving:1 alg:14 excellent:1 complex:1 kraska:2 domain:1 da:1 pk:1 main:7 significance:1 linearly:1 blockmodels:1 motivation:1 bounding:2 n2:1 brightwell:1 allowed:5 ref:1 xu:3 gad:1 join:1 precision:3 fails:1 hassibi:1 pv:4 lie:1 crude:1 loglog:1 companion:1 theorem:10 erroneous:1 specific:2 showing:1 er:5 list:1 cortes:1 exists:4 importance:1 ci:5 exhausted:1 nk:21 dblp:2 gap:2 chen:2 locality:1 intersection:1 logarithmic:1 led:1 generalizing:1 gadde:1 garcia:1 h2k:6 holland:1 srivastava:1 springer:1 relies:1 acm:5 ma:1 conditional:1 goal:4 sized:3 presentation:1 labelled:1 content:1 hard:3 change:3 included:2 specifically:1 reducing:1 uniformly:1 lemma:13 total:4 called:1 experimental:2 la:5 indicating:1 formally:1 college:1 select:1 support:1 seoul:1 incorporate:1 phenomenon:1 crowdsourcing:7 |
QMDP-Net: Deep Learning for Planning under Partial Observability
Peter Karkus1,2, David Hsu1,2, Wee Sun Lee2
1 NUS Graduate School for Integrative Sciences and Engineering
2 School of Computing
National University of Singapore
{karkus, dyhsu, leews}@comp.nus.edu.sg
Abstract
This paper introduces the QMDP-net, a neural network architecture for planning under
partial observability. The QMDP-net combines the strengths of model-free learning and
model-based planning. It is a recurrent policy network, but it represents a policy for a
parameterized set of tasks by connecting a model with a planning algorithm that solves the
model, thus embedding the solution structure of planning in a network learning architecture.
The QMDP-net is fully differentiable and allows for end-to-end training. We train a QMDP-net on different tasks so that it can generalize to new ones in the parameterized task set
and "transfer" to other similar tasks beyond the set. In preliminary experiments, QMDP-net
showed strong performance on several robotic tasks in simulation. Interestingly, while
QMDP-net encodes the QMDP algorithm, it sometimes outperforms the QMDP algorithm
in the experiments, as a result of end-to-end learning.
1 Introduction
Decision-making under uncertainty is of fundamental importance, but it is computationally hard,
especially under partial observability [24]. In a partially observable world, the agent cannot determine
the state exactly based on the current observation; to plan optimal actions, it must integrate information
over the past history of actions and observations. See Fig. 1 for an example. In the model-based
approach, we may formulate the problem as a partially observable Markov decision process (POMDP).
Solving POMDPs exactly is computationally intractable in the worst case [24]. Approximate POMDP
algorithms have made dramatic progress on solving large-scale POMDPs [17, 25, 29, 32, 37]; however,
manually constructing POMDP models or learning them from data remains difficult. In the model-free
approach, we directly search for an optimal solution within a policy class. If we do not restrict the
policy class, the difficulty is data and computational efficiency. We may choose a parameterized
policy class. The effectiveness of policy search is then constrained by this a priori choice.
Deep neural networks have brought unprecedented success in many domains [16, 21, 30] and
provide a distinct new approach to decision-making under uncertainty. The deep Q-network (DQN),
which consists of a convolutional neural network (CNN) together with a fully connected layer,
has successfully tackled many Atari games with complex visual input [21]. Replacing the post-convolutional fully connected layer of DQN by a recurrent LSTM layer allows it to deal with partial observability [10]. However, compared with planning, this approach fails to exploit the underlying
sequential nature of decision-making.
We introduce QMDP-net, a neural network architecture for planning under partial observability.
QMDP-net combines the strengths of model-free learning and model-based planning. A QMDP-net
is a recurrent policy network, but it represents a policy by connecting a POMDP model with an
algorithm that solves the model, thus embedding the solution structure of planning in a network
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Fig. 1: A robot learning to navigate in partially observable grid worlds. (a) The robot has a map. It
has a belief over the initial state, but does not know the exact initial state. (b) Local observations
are ambiguous and are insufficient to determine the exact state. (c, d) A policy trained on expert
demonstrations in a set of randomly generated environments generalizes to a new environment. It
also "transfers" to a much larger real-life environment, represented as a LIDAR map [12].
learning architecture. Specifically, our network uses QMDP [18], a simple, but fast approximate
POMDP algorithm, though other more sophisticated POMDP algorithms could be used as well.
A QMDP-net consists of two main network modules (Fig. 2). One represents a Bayesian filter, which
integrates the history of an agent's actions and observations into a belief, i.e. a probabilistic estimate
of the agent's state. The other represents the QMDP algorithm, which chooses the action given the
current belief. Both modules are differentiable, allowing the entire network to be trained end-to-end.
We train a QMDP-net on expert demonstrations in a set of randomly generated environments. The
trained policy generalizes to new environments and also "transfers" to more complex environments
(Fig. 1c?d). Preliminary experiments show that QMDP-net outperformed state-of-the-art network
architectures on several robotic tasks in simulation. It successfully solved difficult POMDPs that
require reasoning over many time steps, such as the well-known Hallway2 domain [18]. Interestingly,
while QMDP-net encodes the QMDP algorithm, it sometimes outperformed the QMDP algorithm in
our experiments, as a result of end-to-end learning.
2 Background
2.1 Planning under Uncertainty
A POMDP is formally defined as a tuple (S, A, O, T, Z, R), where S, A and O are the state, action,
and observation space, respectively. The state-transition function T(s, a, s') = P(s' | s, a) defines the probability of the agent being in state s' after taking action a in state s. The observation function
Z(s, a, o) = p(o|s, a) defines the probability of receiving observation o after taking action a in
state s. The reward function R(s, a) defines the immediate reward for taking action a in state s.
In a partially observable world, the agent does not know its exact state. It maintains a belief, which is
a probability distribution over S. The agent starts with an initial belief b_0 and updates the belief b_t at
each time step t with a Bayesian filter:

    b_t(s') = τ(b_{t-1}, a_t, o_t) = η Z(s', a_t, o_t) Σ_{s∈S} T(s, a_t, s') b_{t-1}(s),    (1)

where η is a normalizing constant. The belief b_t recursively integrates information from the entire past history (a_1, o_1, a_2, o_2, . . . , a_t, o_t) for decision making. POMDP planning seeks a policy π that maximizes the value, i.e., the expected total discounted reward:

    V_π(b_0) = E[ Σ_{t=0}^{∞} γ^t R(s_t, a_{t+1}) | b_0, π ],    (2)

where s_t is the state at time t, a_{t+1} = π(b_t) is the action that the policy π chooses at time t, and γ ∈ (0, 1) is a discount factor.
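As a concrete sketch of the belief update in Eq. (1), the Bayesian filter for a discrete POMDP can be written in a few lines of NumPy. The tensor layout, function name, and toy numbers below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One step of the Bayesian filter in Eq. (1).

    b : (|S|,) belief over states at time t-1
    T : (|S|, |A|, |S|) transitions, T[s, a, s2] = P(s2 | s, a)
    Z : (|S|, |A|, |O|) observations, Z[s2, a, o] = P(o | s2, a)
    """
    predicted = b @ T[:, a, :]             # sum_s T(s, a, s') b(s)
    unnormalized = Z[:, a, o] * predicted  # weight by observation likelihood
    return unnormalized / unnormalized.sum()  # eta normalizes to a distribution

# Toy 2-state, 1-action, 2-observation example (numbers are arbitrary).
T = np.array([[[0.9, 0.1]], [[0.2, 0.8]]])  # shape (2, 1, 2)
Z = np.array([[[0.7, 0.3]], [[0.4, 0.6]]])  # shape (2, 1, 2)
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, o=1, T=T, Z=Z)
```

The returned posterior is again a proper distribution over states, which is exactly the invariant the recurrent belief state in the QMDP-net must maintain.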
2.2 Related Work
To learn policies for decision making in partially observable domains, one approach is to learn models
[6, 19, 26] and solve the models through planning. An alternative is to learn policies directly [2, 5].
Model learning is usually not end-to-end. While policy learning can be end-to-end, it does not exploit
model information for effective generalization. Our proposed approach combines model-based and
2
model-free learning by embedding a model and a planning algorithm in a recurrent neural network
(RNN) that represents a policy and then training the network end-to-end.
RNNs have been used earlier for learning in partially observable domains [4, 10, 11]. In particular,
Hausknecht and Stone extended DQN [21], a convolutional neural network (CNN), by replacing
its post-convolutional fully connected layer with a recurrent LSTM layer [10]. Similarly, Mirowski
et al. [20] considered learning to navigate in partially observable 3-D mazes. The learned policy
generalizes over different goals, but in a fixed environment. Instead of using the generic LSTM,
our approach embeds algorithmic structure specific to sequential decision making in the network
architecture and aims to learn a policy that generalizes to new environments.
The idea of embedding specific computation structures in the neural network architecture has been
gaining attention recently. Tamar et al. implemented value iteration in a neural network, called
Value Iteration Network (VIN), to solve Markov decision processes (MDPs) in fully observable
domains, where an agent knows its exact state and does not require filtering [34]. Okada et al.
addressed a related problem of path integral optimal control, which allows for continuous states
and actions [23]. Neither addresses the issue of partial observability, which drastically increases
the computational complexity of decision making [24]. Haarnoja et al. [9] and Jonschkowski and
Brock [15] developed end-to-end trainable Bayesian filters for probabilistic state estimation. Silver
et al. introduced Predictron for value estimation in Markov reward processes [31]. They do not
deal with decision making or planning. Both Shankar et al. [28] and Gupta et al. [8] addressed
planning under partial observability. The former focuses on learning a model rather than a policy.
The learned model is trained on a fixed environment and does not generalize to new ones. The
latter proposes a network learning approach to robot navigation in an unknown environment, with a
focus on mapping. Its network architecture contains a hierarchical extension of VIN for planning
and thus does not deal with partial observability during planning. The QMDP-net extends the prior
work on network architectures for MDP planning and for Bayesian filtering. It imposes the POMDP
model and computation structure priors on the entire network architecture for planning under partial
observability.
3 Overview
We want to learn a policy that enables an agent to act effectively in a diverse set of partially
observable stochastic environments. Consider, for example, the robot navigation domain in Fig. 1.
The environments may correspond to different buildings. The robot agent does not observe its own
location directly, but estimates it based on noisy readings from a laser range finder. It has access
to building maps, but does not have models of its own dynamics and sensors. While the buildings
may differ significantly in their layouts, the underlying reasoning required for effective navigation is
similar in all buildings. After training the robot in a few buildings, we want to place the robot in a
new building and have it navigate effectively to a specified goal.
Formally, the agent learns a policy for a parameterized set of tasks in partially observable stochastic environments: W_Θ = {W(θ) | θ ∈ Θ}, where Θ is the set of all parameter values. The parameter value θ captures a wide variety of task characteristics that vary within the set, including environments, goals, and agents. In our robot navigation example, θ encodes a map of the environment, a goal, and a belief over the robot's initial state. We assume that all tasks in W_Θ share the same state space, action space, and observation space. The agent does not have prior models of its own dynamics, sensors, or task objectives. After training on tasks for some subset of values in Θ, the agent learns a policy that solves W(θ) for any given θ ∈ Θ.
A key issue is a general representation of a policy for W_Θ, without knowing the specifics of W_Θ or its parametrization. We introduce the QMDP-net, a recurrent policy network. A QMDP-net represents a policy by connecting a parameterized POMDP model with an approximate POMDP algorithm and embedding both in a single, differentiable neural network. Embedding the model allows the policy to generalize over W_Θ effectively. Embedding the algorithm allows us to train the entire network end-to-end and learn a model that compensates for the limitations of the approximate algorithm.

Let M(θ) = (S, A, O, f_T(·|θ), f_Z(·|θ), f_R(·|θ)) be the embedded POMDP model, where S, A and O are the shared state space, action space, and observation space designed manually for all tasks in W_Θ and f_T(·|θ), f_Z(·|θ), f_R(·|θ) are the state-transition, observation, and reward functions to be learned from data. It may appear that a perfect answer to our learning problem would have
Fig. 2: QMDP-net architecture. (a) A policy maps a history of actions and observations to a new
action. (b) A QMDP-net is an RNN that imposes structure priors for sequential decision making
under partial observability. It embeds a Bayesian filter and the QMDP algorithm in the network. The
hidden state of the RNN encodes the belief for POMDP planning. (c) A QMDP-net unfolded in time.
f_T(·|θ), f_Z(·|θ), and f_R(·|θ) represent the "true" underlying models of dynamics, observation, and reward for the task W(θ). This is true only if the embedded POMDP algorithm is exact, but not true in general. The agent may learn an alternative model to mitigate an approximate algorithm's limitations and obtain an overall better policy. In this sense, while QMDP-net embeds a POMDP model in the network architecture, it aims to learn a good policy rather than a "correct" model.
A QMDP-net consists of two modules (Fig. 2). One encodes a Bayesian filter, which performs
state estimation by integrating the past history of agent actions and observations into a belief. The
other encodes QMDP, a simple, but fast approximate POMDP planner [18]. QMDP chooses the
agent?s actions by solving the corresponding fully observable Markov decision process (MDP) and
performing one-step look-ahead search on the MDP values weighted by the belief.
We evaluate the proposed network architecture in an imitation learning setting. We train on a
set of expert trajectories with randomly chosen task parameter values in Θ and test with new parameter values. An expert trajectory consists of a sequence of demonstrated actions and observations (a_1, o_1, a_2, o_2, . . .) for some θ ∈ Θ. The agent does not access the ground-truth states or beliefs
along the trajectory during the training. We define loss as the cross entropy between predicted and
demonstrated action sequences and use RMSProp [35] for training. See Appendix C.7 for details. Our
implementation in Tensorflow [1] is available online at http://github.com/AdaCompNUS/qmdp-net.
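The training loss described above, cross entropy between predicted and demonstrated action sequences, can be sketched as follows. The function name and shapes are illustrative assumptions, not the paper's TensorFlow code:

```python
import numpy as np

def imitation_loss(logits, demo_actions):
    """Mean cross entropy between predicted action distributions and
    the expert's demonstrated actions.

    logits       : (T, A) unnormalized action scores, one row per time step
    demo_actions : (T,) integer expert actions
    """
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(demo_actions)), demo_actions].mean()

# Two time steps, two actions; the network already favors the expert actions.
logits = np.array([[2.0, 0.0],
                   [0.0, 3.0]])
loss = imitation_loss(logits, np.array([0, 1]))
```

Minimizing this loss over demonstrated trajectories is what RMSProp is applied to during end-to-end training.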
4 QMDP-Net
We assume that all tasks in a parameterized set W_Θ share the same underlying state space, action space, and observation space. We want to learn a QMDP-net policy for W_Θ, conditioned on the parameters θ ∈ Θ. A QMDP-net is a recurrent policy network. The inputs to a QMDP-net are the action a_t and the observation o_t at time step t, as well as the task parameter θ ∈ Θ. The output is the action a_{t+1} for time step t + 1.

A QMDP-net encodes a parameterized POMDP model M(θ) = (S, A, O, T = f_T(·|θ), Z = f_Z(·|θ), R = f_R(·|θ)) and the QMDP algorithm, which selects actions by solving the model approximately. We choose S, A, and O of M(θ) manually, based on prior knowledge on W_Θ, specifically, prior knowledge on the task's own state, action, and observation spaces. In general, the model spaces need not coincide with the task spaces: the model states, actions, and observations may be abstractions of their real-world counterparts in the task. In our robot navigation example (Fig. 1), while the robot moves in a continuous space, we choose S to be a grid of finite size. We can do the same for A and O, in order to reduce representational and computational complexity. The transition function T, observation function Z, and reward function R of M(θ) are conditioned on θ, and are learned from data through end-to-end training. In this work, we assume that T is the same for all tasks in W_Θ to simplify the network architecture. In other words, T does not depend on θ.
End-to-end training is feasible, because a QMDP-net encodes both a model and the associated
algorithm in a single, fully differentiable neural network. The main idea for embedding the algorithm
in a neural network is to represent linear operations, such as matrix multiplication and summation, by
convolutional layers and represent maximum operations by max-pooling layers. Below we provide
some details on the QMDP-net's architecture, which consists of two modules, a filter and a planner.
Fig. 3: A QMDP-net consists of two modules. (a) The Bayesian filter module incorporates the current action a_t and observation o_t into the belief. (b) The QMDP planner module selects the action according to the current belief b_t.
Filter module. The filter module (Fig. 3a) implements a Bayesian filter. It maps from a belief, action, and observation to a next belief, b_{t+1} = f(b_t | a_t, o_t). The belief is updated in two steps. The first accounts for actions, the second for observations:

    b'_t(s) = Σ_{s'∈S} T(s, a_t, s') b_t(s'),    (3)
    b_{t+1}(s) = η Z(s, o_t) b'_t(s),    (4)

where o_t ∈ O is the observation received after taking action a_t ∈ A and η is a normalization factor.
We implement the Bayesian filter by transforming Eq. (3) and Eq. (4) to layers of a neural network. For ease of discussion consider our N×N grid navigation task (Fig. 1a–c). The agent does not know its own state and only observes neighboring cells. It has access to the task parameter θ that encodes the obstacles, goal, and a belief over initial states. Given the task, we choose M(θ) to have a N×N state space. The belief, b_t(s), is now an N×N tensor.

Eq. (3) is implemented as a convolutional layer with |A| convolutional filters. We denote the convolutional layer by f_T. The kernel weights of f_T encode the transition function T in M(θ). The output of the convolutional layer, b'_t(s, a), is a N×N×|A| tensor.
b'_t(s, a) encodes the updated belief after taking each of the actions, a ∈ A. We need to select the belief corresponding to the last action taken by the agent, a_t. We could directly index b'_t(s, a) by a_t if the model action space coincided with the task's; in general it does not, so we cannot use simple indexing. Instead, we will use "soft indexing". First we encode task actions into model actions in A through a learned function f_A. f_A maps from a_t to an indexing vector w^a_t, a distribution over actions in A. We then weight b'_t(s, a) by w^a_t along the appropriate dimension, i.e.

    b'_t(s) = Σ_{a∈A} b'_t(s, a) w^a_t.    (5)
Eq. (4) incorporates observations through an observation model Z(s, o). Now Z(s, o) is a N×N×|O| tensor that represents the probability of receiving observation o ∈ O in state s ∈ S. In our grid navigation task observations depend on the obstacle locations. We condition Z on the task parameter, Z(s, o) = f_Z(s, o|θ) for θ ∈ Θ. The function f_Z is a neural network, mapping from θ to Z(s, o). In this paper f_Z is a CNN.

Z(s, o) encodes observation probabilities for each of the observations, o ∈ O. We need the observation probabilities for the last observation o_t. In general the model and task observation spaces differ, so we cannot index Z(s, o) directly. Instead, we will use soft indexing again. We encode task observations into observations in O through f_O. f_O is a function mapping from o_t to an indexing vector, w^o_t, a distribution over O. We then weight Z(s, o) by w^o_t, i.e.

    Z(s) = Σ_{o∈O} Z(s, o) w^o_t.    (6)

Finally, we obtain the updated belief, b_{t+1}(s), by multiplying b'_t(s) and Z(s) element-wise, and normalizing over states. In our setting the initial belief for the task W(θ) is encoded in θ. We initialize the belief in QMDP-net through an additional encoding function, b_0 = f_B(θ).
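Putting Eqs. (3)–(6) together, the filter module can be sketched densely in NumPy. The function name, dense matrices, and toy sizes are illustrative assumptions; the actual network implements Eq. (3) as a convolution and is trained end-to-end in TensorFlow:

```python
import numpy as np

def filter_module(b, w_a, w_o, T, Z):
    """Soft-indexed belief update of Eqs. (3)-(6).

    b   : (S,) flattened belief over the grid
    w_a : (A,) soft action index produced by f_A
    w_o : (O,) soft observation index produced by f_O
    T   : (A, S, S) one transition matrix per abstract action
    Z   : (S, O) observation model f_Z(s, o | theta)
    """
    b_a = np.einsum('s,ast->at', b, T)  # Eq. (3): one updated belief per action
    b_pred = w_a @ b_a                  # Eq. (5): soft-index over actions
    z = Z @ w_o                         # Eq. (6): soft-index over observations
    b_new = z * b_pred                  # Eq. (4): weight by observation model
    return b_new / b_new.sum()          # normalize over states

# Tiny example: 4 states, 2 abstract actions, 3 abstract observations.
rng = np.random.default_rng(7)
T = rng.random((2, 4, 4)); T /= T.sum(axis=2, keepdims=True)
Z = rng.random((4, 3));    Z /= Z.sum(axis=1, keepdims=True)
b = np.full(4, 0.25)
w_a = np.array([0.8, 0.2])       # mostly the first abstract action
w_o = np.array([0.1, 0.6, 0.3])  # soft encoding of the last observation
b_next = filter_module(b, w_a, w_o, T, Z)
```

Because w^a_t and w^o_t are distributions rather than hard indices, every operation stays differentiable, which is what makes end-to-end training of f_A and f_O possible.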
Planner module. The QMDP planner (Fig. 3b) performs value iteration at its core. Q values are computed by iteratively applying Bellman updates,

    Q_{k+1}(s, a) = R(s, a) + γ Σ_{s'∈S} T(s, a, s') V_k(s'),    (7)
    V_k(s) = max_a Q_k(s, a).    (8)

Actions are then selected by weighting the Q values with the belief.

We can implement value iteration using convolutional and max pooling layers [28, 34]. In our grid navigation task Q(s, a) is a N×N×|A| tensor. Eq. (8) is expressed by a max pooling layer, where Q_k(s, a) is the input and V_k(s) is the output. Eq. (7) is a N×N convolution with |A| convolutional filters, followed by an addition operation with R(s, a), the reward tensor. We denote the convolutional layer by f'_T. The kernel weights of f'_T encode the transition function T, similarly to f_T in the filter. Rewards for a navigation task depend on the goal and obstacles. We condition rewards on the task parameter, R(s, a) = f_R(s, a|θ). f_R maps from θ to R(s, a). In this paper f_R is a CNN.

We implement K iterations of Bellman updates by stacking the layers representing Eq. (7) and Eq. (8) K times with tied weights. After K iterations we get Q_K(s, a), the approximate Q values for each state-action pair. We weight the Q values by the belief to obtain action values,

    q(a) = Σ_{s∈S} Q_K(s, a) b_t(s).    (9)

Finally, we choose the output action through a low-level policy function, f_π, mapping from q(a) to the action output, a_{t+1}.
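The planner computation, Eqs. (7)–(9), amounts to K Bellman backups followed by a belief-weighted sum. This dense NumPy stand-in replaces the convolutional layers with explicit transition matrices; the function name and shapes are assumptions for illustration only:

```python
import numpy as np

def qmdp_planner(b, T, R, gamma=0.9, K=50):
    """K Bellman updates (Eqs. 7-8), then belief weighting (Eq. 9).

    b : (S,) current belief
    T : (S, A, S) transitions, R : (S, A) rewards
    Returns q(a) = sum_s Q_K(s, a) b(s).
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(K):
        Q = R + gamma * np.einsum('sat,t->sa', T, V)  # Eq. (7)
        V = Q.max(axis=1)                             # Eq. (8)
    return b @ Q                                      # Eq. (9)

# 2-state toy MDP: action 1 always yields reward 1, action 0 yields 0;
# both actions leave the state unchanged.
T = np.tile(np.eye(2)[:, None, :], (1, 2, 1))  # (S, A, S)
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])
q = qmdp_planner(np.array([0.5, 0.5]), T, R)
```

Running this on the toy MDP, q(1) exceeds q(0) by exactly the one-step reward, so a low-level policy f_π that picks arg max_a q(a) selects the rewarding action.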
QMDP-net naturally extends to higher dimensional discrete state spaces (e.g. our maze navigation task) where n-dimensional convolutions can be used [14]. While M(θ) is restricted to a discrete space, we can handle continuous tasks W_Θ by simultaneously learning a discrete M(θ) for planning, and f_A, f_O, f_B, f_π to map between states, actions and observations in W_Θ and M(θ).
5 Experiments
The main objective of the experiments is to understand the benefits of structure priors on learning
neural-network policies. We create several alternative network architectures by gradually relaxing
the structure priors and evaluate the architectures on simulated robot navigation and manipulation
tasks. While these tasks are simpler than, for example, Atari games, in terms of visual perception,
they are in fact very challenging, because of the sophisticated long-term reasoning required to handle
partial observability and distant future rewards. Since the exact state of the robot is unknown, a
successful policy must reason over many steps to gather information and improve state estimation
through partial and noisy observations. It also must reason about the trade-off between the cost of
information gathering and the reward in the distant future.
5.1 Experimental Setup
We compare the QMDP-net with a number of related alternative architectures. Two are QMDP-net
variants. Untied QMDP-net relaxes the constraints on the planning module by untying the weights
representing the state-transition function over the different CNN layers. LSTM QMDP-net replaces
the filter module with a generic LSTM module. The other two architectures do not embed POMDP
structure priors at all. CNN+LSTM is a state-of-the-art deep CNN connected to an LSTM. It is similar
to the DRQN architecture proposed for reinforcement learning under partial observability [10].
RNN is a basic recurrent neural network with a single fully-connected hidden layer. RNN contains no
structure specific to planning under partial observability.
Each experimental domain contains a parameterized set of tasks W_Θ. The parameters θ encode an environment, a goal, and a belief over the robot's initial state. To train a policy for W_Θ, we generate
random environments, goals, and initial beliefs. We construct ground-truth POMDP models for the
generated data and apply the QMDP algorithm. If the QMDP algorithm successfully reaches the
goal, we then retain the resulting sequence of actions and observations (a_1, o_1, a_2, o_2, . . .) as an expert
trajectory, together with the corresponding environment, goal, and initial belief. It is important to note
that the ground-truth POMDPs are used only for generating expert trajectories and not for learning
the QMDP-net.
For fair comparison, we train all networks using the same set of expert trajectories in each domain. We
perform basic search over training parameters, the number of layers, and the number of hidden units
for each network architecture. Below we briefly describe the experimental domains. See Appendix C
for implementation details.
Grid-world navigation. A robot navigates in an unknown building given a floor map and a goal.
The robot is uncertain of its own location. It is equipped with a LIDAR that detects obstacles in its
direct neighborhood. The world is uncertain: the robot may fail to execute desired actions, possibly
because of wheel slippage, and the LIDAR may produce false readings. We implemented a simplified
version of this task in a discrete n×n grid world (Fig. 1c). The task parameter θ is represented as an n×n image with three channels. The first channel encodes the obstacles in the environment, the
second channel encodes the goal, and the last channel encodes the belief over the robot?s initial state.
The robot's state represents its position in the grid. It has five actions: moving in each of the four
canonical directions or staying put. The LIDAR observations are compressed into four binary values
corresponding to obstacles in the four neighboring cells. We consider both a deterministic and a
stochastic variant of the domain. The stochastic variant adds action and observation uncertainties.
The robot fails to execute the specified move action and stays in place with probability 0.2. The
observations are faulty with probability 0.1 independently in each direction. We trained a policy
using expert trajectories from 10,000 random environments, 5 trajectories from each environment.
We then tested on a separate set of 500 random environments.
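As an illustration of this encoding, the task parameter for a small grid can be assembled as an n×n image with three channels. The helper below is hypothetical (not from the paper's code); it simply follows the channel layout described above:

```python
import numpy as np

def make_task_param(n, obstacles, goal, b0):
    """Build the n x n x 3 task parameter image theta.

    obstacles : list of (row, col) obstacle cells  -> channel 0
    goal      : (row, col) goal cell               -> channel 1
    b0        : (n, n) belief over initial states  -> channel 2
    """
    theta = np.zeros((n, n, 3), dtype=np.float32)
    for r, c in obstacles:
        theta[r, c, 0] = 1.0
    theta[goal[0], goal[1], 1] = 1.0
    theta[:, :, 2] = b0
    return theta

b0 = np.full((4, 4), 1.0 / 16)  # uniform belief over the robot's initial cell
theta = make_task_param(4, obstacles=[(0, 1), (2, 2)], goal=(3, 3), b0=b0)
```

Encoding the environment, goal, and initial belief as image channels is what lets the CNN components f_Z and f_R consume θ directly.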
Maze navigation. A differential-drive robot navigates
in a maze with the help of a map, but it does not know its
pose (Fig. 1d). This domain is similar to the grid-world navigation, but it is significantly more challenging. The
robot?s state contains both its position and orientation.
The robot cannot move freely because of kinematic constraints. It has four actions: move forward, turn left, turn
right and stay put. The observations are relative to the
robot's current orientation, and the increased ambiguity
makes it more difficult to localize the robot, especially
when the initial state is highly uncertain. Finally, successful trajectories in mazes are typically much longer
than those in randomly-generated grid worlds. Again we trained on expert trajectories in 10,000 randomly generated mazes and tested in 500 new ones.
Fig. 4: Highly ambiguous observations in a maze. The four observations (in red) are the same, even though the robot states are all different.
Fig. 5: Object grasping using touch sensing. (a) An example [3]. (b) Simplified 2-D object grasping. Objects from the training set (top) and the test set (bottom).

2-D object grasping. A robot gripper picks up novel objects from a table using a two-finger hand with noisy touch sensors at the finger tips. The gripper uses the fingers to perform compliant motions while maintaining contact with the object or to grasp the object. It knows the shape of the object to be grasped, maybe from an object database. However, it does not know its own pose relative to the object and relies on the touch sensors to localize itself. We implemented a simplified 2-D variant of this task, modeled as a POMDP [13]. The task parameter θ
is an image with three channels encoding the object shape, the grasp point, and a belief over the
gripper's initial pose. The gripper has four actions, each moving in a canonical direction unless it
touches the object or the environment boundary. Each finger has 3 binary touch sensors at the tip,
resulting in 64 distinct observations. We trained on expert demonstration on 20 different objects with
500 randomly sampled poses for each object. We then tested on 10 previously unseen objects in
random poses.
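Since each of the two fingers carries 3 binary touch sensors, an observation is a 6-bit reading, which yields the 64 distinct observations mentioned above. A minimal sketch (ours, not the paper's implementation; the paper does not specify its encoding) of one natural way to index such readings:

```python
def encode_touch(readings):
    """Map 6 binary touch readings (2 fingers x 3 tip sensors) to one of
    2**6 = 64 discrete observation ids by treating them as a bit string."""
    assert len(readings) == 6 and all(r in (0, 1) for r in readings)
    obs = 0
    for bit in readings:
        obs = (obs << 1) | bit
    return obs

print(encode_touch([0, 0, 0, 0, 0, 0]))  # 0: no contact anywhere
print(encode_touch([1, 1, 1, 1, 1, 1]))  # 63: every sensor firing
```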
5.2 Choosing QMDP-Net Components for a Task
Given a new task Wθ, we need to choose an appropriate neural network representation for
M(θ). More specifically, we need to choose S, A and O, and a representation for the functions
fR, fT, fT′, fZ, fO, fA, fB, fπ. This provides an opportunity to incorporate domain knowledge in a
principled way. For example, if Wθ has a local and spatially invariant connectivity structure, we can
choose convolutions with small kernels to represent fT, fR and fZ.
In our experiments we use S = N×N for N×N grid navigation, and S = N×N×4 for N×N maze
navigation where the robot has 4 possible orientations. We use |A| = |A| and |O| = |O| for all tasks
except for the object grasping task, where |O| = 64 and |O| = 16. We represent fT, fR and fZ by
CNN components with 3×3 and 5×5 kernels depending on the task. We enforce that fT and fZ
are proper probability distributions by using softmax and sigmoid activations on the convolutional
kernels, respectively. Finally, fO is a small fully connected component, fA is a one-hot encoding
function, fπ is a single softmax layer, and fB is the identity function.
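To illustrate the kernel normalization, here is a minimal NumPy sketch (ours, not the authors' TensorFlow code) of a transition-style convolution whose 3×3 kernel is pushed through a softmax, so that convolving a belief image with it preserves total probability mass:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def transition_conv(belief, kernel_logits):
    """Convolve a 2-D belief with a 3x3 kernel that is softmax-normalized,
    so the kernel is a proper distribution over displacements and the
    output belief still sums to 1."""
    k = softmax(kernel_logits.ravel()).reshape(3, 3)
    out = np.zeros_like(belief)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # shift the belief by (dy, dx), weighted by the kernel entry
            out += k[dy + 1, dx + 1] * np.roll(np.roll(belief, dy, axis=0), dx, axis=1)
    return out

b = np.zeros((5, 5))
b[2, 2] = 1.0                              # belief concentrated on one state
b2 = transition_conv(b, np.zeros((3, 3)))  # zero logits -> uniform 3x3 kernel
# b2 spreads the mass uniformly over the 3x3 neighborhood and still sums to 1
```

Note that `np.roll` wraps at the borders; a real network would instead handle boundaries with padding, but for interior cells the effect is the same.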
We can adjust the amount of planning in a QMDP-net by setting K. A large K allows propagating
information to more distant states without affecting the number of parameters to learn. However, it
results in deeper networks that are computationally expensive to evaluate and more difficult to train.
We used K = 20 . . . 116 depending on the problem size. We were able to transfer policies to larger
environments by increasing K up to 450 when executing the policy.
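The role of K can be seen in an ordinary value-iteration recurrence: each iteration propagates value information one cell farther from the goal, so a planner unrolled for a small K cannot "see" distant goals, while increasing K at execution time extends its horizon. A toy NumPy sketch of this effect (ours, using a deterministic 4-connected grid and an assumed discount of 0.99):

```python
import numpy as np

def k_step_values(reward, K, gamma=0.99):
    """Run K synchronous value-iteration updates on a grid MDP with
    deterministic moves to the 4 neighbors: V <- reward + gamma * max V[next].
    Value information travels at most one cell per iteration."""
    V = np.zeros_like(reward)
    for _ in range(K):
        P = np.pad(V, 1)                    # zero padding outside the grid
        neighbors = np.stack([P[2:, 1:-1],  # value of the cell below
                              P[:-2, 1:-1], # above
                              P[1:-1, 2:],  # right
                              P[1:-1, :-2]])# left
        V = reward + gamma * neighbors.max(axis=0)
    return V

r = np.zeros((9, 9))
r[8, 8] = 1.0                       # goal reward in one corner
V_short = k_step_values(r, K=3)     # horizon too short for this grid
V_long = k_step_values(r, K=30)     # horizon covers the whole grid
# the opposite corner is 16 steps away: K=3 leaves it with zero value,
# while K=30 assigns it a positive value the policy can act on
```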
In our experiments the representation of the task parameter θ is isomorphic to the chosen state
space S. While the architecture is not restricted to this setting, we rely on it to represent fT, fZ, fR by
convolutions with small kernels. Experiments with a more general class of problems is an interesting
direction for future work.
5.3 Results and Discussion
The main results are reported in Table 1. Some additional results are reported in Appendix A. For
each domain, we report the task success rate and the average number of time steps for task completion.
Comparing the completion time is meaningful only when the success rates are similar.
QMDP-net successfully learns policies that generalize to new environments. When evaluated
on new environments, the QMDP-net has higher success rate and faster completion time than the
alternatives in nearly all domains. To understand better the performance difference, we specifically
compared the architectures in a fixed environment for navigation. Here only the initial state and the
goal vary across the task instances, while the environment remains the same. See the results in the
last row of Table 1. The QMDP-net and the alternatives have comparable performance. Even RNN
performs very well. Why? In a fixed environment, a network may learn the features of an optimal
policy directly, e.g., going straight towards the goal. In contrast, the QMDP-net learns a model for
planning, i.e., generating a near-optimal policy for a given arbitrary environment.
POMDP structure priors improve the performance of learning complex policies. Moving
across Table 1 from left to right, we gradually relax the POMDP structure priors on the network
architecture. As the structure priors weaken, so does the overall performance. However, strong priors
sometimes over-constrain the network and result in degraded performance. For example, we found
that tying the weights of fT in the filter and fT′ in the planner may lead to worse policies. While both
fT and fT′ represent the same underlying transition dynamics, using different weights allows each
to choose its own approximation and thus greater flexibility. We shed some light on this issue and
visualize the learned POMDP model in Appendix B.
QMDP-net learns "incorrect", but useful models. Planning under partial observability is intractable in general, and we must rely on approximation algorithms. A QMDP-net encodes both a
POMDP model and QMDP, an approximate POMDP algorithm that solves the model. We then train
the network end-to-end. This provides the opportunity to learn an "incorrect", but useful model that
compensates the limitation of the approximation algorithm, in a way similar to reward shaping in
reinforcement learning [22]. Indeed, our results show that the QMDP-net achieves higher success
rate than QMDP in nearly all tasks. In particular, QMDP-net performs well on the well-known
Hallway2 domain, which is designed to expose the weakness of QMDP resulting from its myopic
planning horizon. The planning algorithm is the same for both the QMDP-net and QMDP, but the
QMDP-net learns a more effective model from expert demonstrations. This is true even though
QMDP generates the expert data for training. We note that the expert data contain only successful
QMDP demonstrations. When both successful and unsuccessful QMDP demonstrations were used
for training, the QMDP-net did not perform better than QMDP, as one would expect.
QMDP-net policies learned in small environments transfer directly to larger environments.
Learning a policy for large environments from scratch is often difficult. A more scalable approach
Table 1: Performance comparison of QMDP-net and alternative architectures for recurrent policy
networks. SR is the success rate in percentage. Time is the average number of time steps for task
completion. D-n and S-n denote deterministic and stochastic variants of a domain with environment
size n×n.

Domain     | QMDP        | QMDP-net    | Untied      | LSTM        | CNN+LSTM    | RNN
           |             |             | QMDP-net    | QMDP-net    |             |
           | SR    Time  | SR    Time  | SR    Time  | SR    Time  | SR    Time  | SR    Time
Grid D-10  | 99.8   8.8  | 99.6   8.2  | 98.6   8.3  | 84.4  12.8  | 90.0  13.4  | 87.8  13.4
Grid D-18  | 99.0  15.5  | 99.0  14.6  | 98.8  14.8  | 43.8  27.9  | 57.8  33.7  | 35.8  24.5
Grid D-30  | 97.6  24.6  | 98.6  25.0  | 98.8  23.9  | 22.2  51.1  | 19.4  45.2  | 16.4  39.3
Grid S-18  | 98.1  23.9  | 98.8  23.9  | 95.9  24.0  | 23.8  55.6  | 41.4  65.9  | 34.0  64.1
Maze D-29  | 63.2  54.1  | 98.0  56.5  | 95.4  62.5  |  9.8  57.2  |  9.2  41.4  |  9.8  47.0
Maze S-19  | 63.1  50.5  | 93.9  60.4  | 98.7  57.1  | 18.9  79.0  | 19.2  80.8  | 19.6  82.1
Hallway2   | 37.3  28.2  | 82.9  64.4  | 69.6 104.4  | 82.8  89.7  | 77.8  99.5  | 68.0 108.8
Grasp      | 98.3  14.6  | 99.6  18.2  | 98.9  20.4  | 91.4  26.4  | 92.8  22.1  | 94.1  19.9
Intel Lab  | 90.2  85.4  | 94.4 107.7  | 20.0  55.3  |  -     -    |  -     -    |  -     -
Freiburg   | 88.4  66.9  | 93.2  81.1  | 37.4  51.7  |  -     -    |  -     -    |  -     -
Fixed grid | 98.8  17.4  | 98.6  17.0  | 99.8  17.6  | 97.0  19.7  | 98.4  25.7  | 98.0  19.8
would be to learn a policy in small environments and transfer it to large environments by repeating
the reasoning process. To transfer a learned QMDP-net policy, we simply expand its planning module
by adding more recurrent layers. Specifically, we trained a policy in randomly generated 30×30
grid worlds with K = 90. We then set K = 450 and applied the learned policy to several real-life
environments, including Intel Lab (100×101) and Freiburg (139×57), using their LIDAR maps
(Fig. 1c) from the Robotics Data Set Repository [12]. See the results for these two environments in
Table 1. Additional results with different K settings and other buildings are available in Appendix A.
6 Conclusion
A QMDP-net is a deep recurrent policy network that embeds POMDP structure priors for planning
under partial observability. While generic neural networks learn a direct mapping from inputs to
outputs, QMDP-net learns how to model and solve a planning task. The network is fully differentiable
and allows for end-to-end training.
Experiments on several simulated robotic tasks show that learned QMDP-net policies successfully
generalize to new environments and transfer to larger environments as well. The POMDP structure
priors and end-to-end training substantially improve the performance of learned policies. Interestingly,
while a QMDP-net encodes the QMDP algorithm for planning, learned QMDP-net policies sometimes
outperform QMDP.
There are many exciting directions for future exploration. First, a major limitation of our current
approach is the state space representation. The value iteration algorithm used in QMDP iterates
through the entire state space and is well known to suffer from the "curse of dimensionality". To
alleviate this difficulty, the QMDP-net, through end-to-end training, may learn a much smaller
abstract state space representation for planning. One may also incorporate hierarchical planning [8].
Second, QMDP makes strong approximations in order to reduce computational complexity. We
want to explore the possibility of embedding more sophisticated POMDP algorithms in the network
architecture. While these algorithms provide stronger planning performance, their algorithmic
sophistication increases the difficulty of learning. Finally, we have so far restricted the work to
imitation learning. It would be exciting to extend it to reinforcement learning. Based on earlier
work [28, 34], this is indeed promising.
Acknowledgments We thank Leslie Kaelbling and Tomás Lozano-Pérez for insightful discussions that
helped to improve our understanding of the problem. The work is supported in part by Singapore Ministry of
Education AcRF grant MOE2016-T2-2-068 and National University of Singapore AcRF grant R-252-000-587112.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/.
[2] J. A. Bagnell, S. Kakade, A. Y. Ng, and J. G. Schneider. Policy search by dynamic programming. In Advances in Neural Information Processing Systems, pages 831–838, 2003.
[3] H. Bai, D. Hsu, W. S. Lee, and V. A. Ngo. Monte Carlo value iteration for continuous-state POMDPs. In Algorithmic Foundations of Robotics IX, pages 175–191, 2010.
[4] B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In International Conference on Intelligent Robots and Systems, pages 430–435, 2003.
[5] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
[6] B. Boots, S. M. Siddiqi, and G. J. Gordon. Closing the learning-planning loop with predictive state representations. The International Journal of Robotics Research, 30(7):954–966, 2011.
[7] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[8] S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Malik. Cognitive mapping and planning for visual navigation. arXiv preprint arXiv:1702.03920, 2017.
[9] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. Backprop KF: Learning discriminative deterministic state estimators. In Advances in Neural Information Processing Systems, pages 4376–4384, 2016.
[10] M. J. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint, 2015. URL http://arxiv.org/abs/1507.06527.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] A. Howard and N. Roy. The robotics data set repository (radish), 2003. URL http://radish.sourceforge.net/.
[13] K. Hsiao, L. P. Kaelbling, and T. Lozano-Pérez. Grasping POMDPs. In International Conference on Robotics and Automation, pages 4685–4692, 2007.
[14] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.
[15] R. Jonschkowski and O. Brock. End-to-end learnable histogram filters. In Workshop on Deep Learning for Action and Interaction at NIPS, 2016. URL http://www.robotics.tu-berlin.de/fileadmin/fg170/Publikationen_pdf/Jonschkowski-16-NIPS-WS.pdf.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[17] H. Kurniawati, D. Hsu, and W. S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, 2008.
[18] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In International Conference on Machine Learning, pages 362–370, 1995.
[19] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information Processing Systems, pages 1555–1562, 2002.
[20] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In International Conference on Machine Learning, pages 278–287, 1999.
[23] M. Okada, L. Rigazio, and T. Aoshima. Path integral networks: End-to-end differentiable optimal control. arXiv preprint arXiv:1706.09597, 2017.
[24] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441–450, 1987.
[25] J. Pineau, G. J. Gordon, and S. Thrun. Applying metric-trees to belief-point POMDPs. In Advances in Neural Information Processing Systems, 2003.
[26] G. Shani, R. I. Brafman, and S. E. Shimony. Model-based online learning of POMDPs. In European Conference on Machine Learning, pages 353–364, 2005.
[27] G. Shani, J. Pineau, and R. Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-agent Systems, 27(1):1–51, 2013.
[28] T. Shankar, S. K. Dwivedy, and P. Guha. Reinforcement learning via recurrent convolutional neural networks. In International Conference on Pattern Recognition, pages 2592–2597, 2016.
[29] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems, pages 2164–2172, 2010.
[30] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[31] D. Silver, H. van Hasselt, M. Hessel, T. Schaul, A. Guez, T. Harley, G. Dulac-Arnold, D. Reichert, N. Rabinowitz, A. Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint, 2016. URL https://arxiv.org/abs/1612.08810.
[32] M. T. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195–220, 2005.
[33] C. Stachniss. Robotics 2D-laser dataset. URL http://www.ipb.uni-bonn.de/datasets/.
[34] A. Tamar, S. Levine, P. Abbeel, Y. Wu, and G. Thomas. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2146–2154, 2016.
[35] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, pages 26–31, 2012.
[36] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pages 802–810, 2015.
[37] N. Ye, A. Somani, D. Hsu, and W. S. Lee. DESPOT: Online POMDP planning with regularization. Journal of Artificial Intelligence Research, 58:231–266, 2017.
Robust Optimization for Non-Convex Objectives
Robert Chen
Computer Science
Harvard University
Brendan Lucier
Microsoft Research
New England
Yaron Singer
Computer Science
Harvard University
Vasilis Syrgkanis
Microsoft Research
New England
Abstract
We consider robust optimization problems, where the goal is to optimize in the
worst case over a class of objective functions. We develop a reduction from
robust improper optimization to stochastic optimization: given an oracle that
returns α-approximate solutions for distributions over objectives, we compute a
distribution over solutions that is α-approximate in the worst case. We show that
derandomizing this solution is NP-hard in general, but can be done for a broad
class of statistical learning tasks. We apply our results to robust neural network
training and submodular optimization. We evaluate our approach experimentally on
corrupted character classification, and robust influence maximization in networks.
1 Introduction
In many learning tasks we face uncertainty about the loss we aim to optimize. Consider, for example,
a classification task such as character recognition, required to perform well under various types of
distortion. In some environments, such as recognizing characters in photos, the classifier must handle
rotation and patterned backgrounds. In a different environment, such as low-resolution images, it
is more likely to encounter noisy pixelation artifacts. Instead of training a separate classifier for
each possible scenario, one seeks to optimize performance in the worst case over different forms of
corruption (or combinations thereof) made available to the trainer as black-boxes.
More generally, our goal is to find a minimax solution that optimizes in the worst case over a given
family of functions. Even if each individual function can be optimized effectively, it is not clear such
solutions would perform well in the worst case. In many cases of interest, individual objectives are
non-convex and hence state-of-the-art methods are only approximate. In stochastic optimization,
where one must optimize a distribution over loss functions, approximate stochastic optimization is
often straightforward, since loss functions are commonly closed under convex combination. Can
approximately optimal stochastic solutions yield an approximately optimal robust solution?
In this paper we develop a reduction from robust optimization to stochastic optimization. Given an α-approximate oracle for stochastic optimization we show how to implement an α-approximate solution
for robust optimization under a necessary extension, and illustrate its effectiveness in applications.
Main Results. Given an α-approximate stochastic oracle for distributions over (potentially nonconvex) loss functions, we show how to solve α-approximate robust optimization in a convexified
solution space. This outcome is "improper" in the sense that it may lie outside the original solution
space, if the space is non-convex. This can be interpreted as computing a distribution over solutions.
We show that the relaxation to improper learning is necessary in general: It is NP-hard to achieve
robust optimization with respect to the original outcome space, even if stochastic optimization can be
solved exactly, and even if there are only polynomially many loss functions. We complement this
by showing that in any statistical learning scenario where loss is convex in the predicted dependent
variable, we can find a single (deterministic) solution with matching performance guarantees.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Technical overview. Our approach employs an execution of no-regret dynamics on a zero-sum
game, played between a learner equipped with an α-approximate stochastic oracle, and an adversary
who aims to find a distribution over loss functions that maximizes the learner's loss. This game
converges to an approximately robust solution, in which the learner and adversary settle upon an α-approximate minimax solution. This convergence is subject to an additive regret term that converges
at a rate of T^{-1/2} over T rounds of the learning dynamics.
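The dynamics just described can be sketched in a few lines. The following is a simplified illustration (ours; the variable names and the toy oracle are ours, and the oracle here happens to be exact, i.e. α = 1): the adversary maintains multiplicative weights over the m loss functions, the learner best-responds to the adversary's mixture through the oracle, and the output is the uniform distribution over the learner's iterates.

```python
import numpy as np

def improper_robust_opt(losses, oracle, T=200, eta=0.1):
    """No-regret dynamics for improper robust optimization: an adversary
    runs multiplicative weights over the loss functions, the learner
    best-responds to the adversary's mixture via the stochastic oracle,
    and the returned list of iterates is played uniformly at random."""
    w = np.ones(len(losses))
    iterates = []
    for _ in range(T):
        p = w / w.sum()           # adversary's distribution over losses
        x = oracle(p)             # learner: (approx.) minimize sum_i p_i * L_i(x)
        iterates.append(x)
        # up-weight the losses on which the learner did poorly
        w *= np.exp(eta * np.array([L(x) for L in losses]))
    return iterates

# Toy instance on X = [0, 1] with L1(x) = x and L2(x) = 1 - x; tau = 1/2.
L1, L2 = (lambda x: x), (lambda x: 1.0 - x)
grid = np.linspace(0.0, 1.0, 101)

def oracle(p):  # exact minimizer of the mixture over a grid of candidates
    return min(grid, key=lambda x: p[0] * L1(x) + p[1] * L2(x))

sols = improper_robust_opt([L1, L2], oracle)
worst = max(np.mean([L(x) for x in sols]) for L in (L1, L2))
# worst-case expected loss of the uniform mixture approaches tau = 0.5
```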
Applications. We illustrate the power of our reduction through two main examples. We first
consider statistical learning via neural networks. Given an arbitrary training method, our reduction
generates a net that optimizes robustly over a given class of loss functions. We evaluate our method
experimentally on a character recognition task, where the loss functions correspond to different
corruption models made available to the learner as black boxes. We verify experimentally that our
approach significantly outperforms various baselines, including optimizing for average performance
and optimizing for each loss separately. We also apply our reduction to influence maximization,
where the goal is to maximize a concave function (the independent cascade model of influence
[9]) over a non-convex space (subsets of vertices in a network). Previous work has studied robust
influence maximization directly [7, 3, 12], focusing on particular, natural classes of functions (e.g.,
edge weights chosen within a given range) and establishing hardness and approximation results.
In comparison, our method is agnostic to the particular class of functions, and achieves a strong
approximation result by returning a distribution over solutions. We evaluate our method on real and
synthetic datasets, with the goal of robustly optimizing a suite of random influence instantiations. We
verify experimentally that our approach significantly outperforms natural baselines.
Related work. There has recently been a great deal of interest in robust optimization in machine
learning [16, 2, 13, 17]. For continuous optimization, the work that is closest to ours is perhaps that
by Shalev-Shwartz and Wexler [16] and Namkoong and Duchi [13] that use robust optimization to
train against convex loss functions. The main difference is that we assume a more general setting
in which the loss functions are non-convex and one is only given access to the stochastic oracle.
Hence, the proof techniques and general results from these papers do not apply to our setting. We
note that our result generalizes these works, as they can be considered as the special case in which
we have a distributional oracle whose approximation is optimal. In particular, [16, Theorem 1]
applies to the realizable statistical learning setting where the oracle has small mistake bound C. Our
applications require a more general framing that hold for any optimization setting with access to
an approximate oracle, and approximation is in the multiplicative sense with respect to the optimal
value. In submodular optimization there has been a great deal of interest in robust optimization
as well [10, 8, 4]. The work closest to ours is that by He and Kempe [8] who consider a slightly
different objective than ours. He and Kempe's results apply to influence but do not extend to general
submodular functions. Finally, we note that unlike recent work on non-convex optimization [5, 1, 6]
our goal in this paper is not to optimize a non-convex function. Rather, we abstract the non-convex
guarantees via the approximate stochastic oracle.
2  Robust Optimization with Approximate Stochastic Oracles
We consider the following model of optimization that is robust to objective uncertainty. There is a
space X over which to optimize, and a finite set of loss functions¹ L = {L_1, ..., L_m} where each
L_i ∈ L is a function from X to [0, 1]. Intuitively, our goal is to find some x ∈ X that achieves low
loss in the worst case over loss functions in L. For x ∈ X, write g(x) = max_{i∈[m]} L_i(x) for the
worst-case loss of x. The minimax optimum τ is given by

    τ = min_{x∈X} g(x) = min_{x∈X} max_{i∈[m]} L_i(x).    (1)

The goal of α-approximate robust optimization is to find x such that g(x) ≤ ατ.
Given a distribution P over solutions X, write g(P) = max_{i∈[m]} E_{x∼P}[L_i(x)] for the worst-case
expected loss of a solution drawn from P. A weaker version of robust approximation is improper
robust optimization: find a distribution P over X such that g(P) ≤ ατ.
¹ We describe an extension to infinite sets of loss functions in the full version of the paper. Our results also
extend naturally to the goal of maximizing the minimum of a class of reward functions.
Algorithm 1 Oracle Efficient Improper Robust Optimization
Input: Objectives L = {L_1, ..., L_m}, Apx stochastic oracle M, parameters T, η
for each time step t ∈ [T] do
    Set w_t[i] ∝ exp{ η ∑_{τ=1}^{t−1} L_i(x_τ) }    (3)
    Set x_t = M(w_t)
end for
Output: the uniform distribution over {x_1, ..., x_T}
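As a concrete illustration, the loop of Algorithm 1 can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code: the brute-force oracle below (an exact, α = 1 oracle over a tiny finite solution space) is a stand-in for whatever α-approximate stochastic oracle is available in a given application.

```python
import math

def robust_opt(losses, oracle, T):
    """Algorithm 1 (sketch): a multiplicative-weights adversary over the m
    loss functions, with the learner responding via the stochastic oracle.
    Returns [x_1, ..., x_T]; the algorithm's output is the uniform
    distribution over this list."""
    m = len(losses)
    eta = math.sqrt(math.log(m) / (2 * T))
    cum = [0.0] * m                      # cumulative loss of each L_i so far
    xs = []
    for _ in range(T):
        w = [math.exp(eta * c) for c in cum]
        total = sum(w)
        w = [wi / total for wi in w]     # adversary's distribution w_t
        x = oracle(w)                    # learner's response x_t = M(w_t)
        xs.append(x)
        for i, L in enumerate(losses):
            cum[i] += L(x)
    return xs

def brute_force_oracle(solutions, losses):
    """Exact (alpha = 1) oracle for a small finite solution space."""
    def M(w):
        return min(solutions,
                   key=lambda x: sum(wi * L(x) for wi, L in zip(w, losses)))
    return M
```

On the toy instance X = {0, 1} with L_1(x) = x and L_2(x) = 1 − x, every single point has worst-case loss 1, yet the uniform distribution over the iterates alternates between the two points and drives the worst-case expected loss toward 1/2, which is why the improper (randomized) guarantee can beat any single solution.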
Our results take the form of reductions to an approximate stochastic oracle, which finds a solution
x ∈ X that approximately minimizes a given distribution over loss functions.²

Definition 1 (α-Approximate Stochastic Oracle). Given a distribution D over L, an α-approximate
stochastic oracle M(D) computes x* ∈ X such that

    E_{L∼D}[L(x*)] ≤ α · min_{x∈X} E_{L∼D}[L(x)].    (2)

2.1  Improper Robust Optimization with Oracles
We first show that, given access to an α-approximate stochastic oracle, it is possible to efficiently
implement improper α-approximate robust optimization, subject to a vanishing additive loss term.

Theorem 1. Given access to an α-approximate stochastic oracle, Algorithm 1 with η = √(log(m)/2T)
computes a distribution P over solutions, defined as a uniform distribution over a set {x_1, ..., x_T},
so that

    max_{i∈[m]} E_{x∼P}[L_i(x)] ≤ ατ + √(2 log(m)/T).    (4)

Moreover, for any η the distribution P computed by Algorithm 1 satisfies:

    max_{i∈[m]} E_{x∼P}[L_i(x)] ≤ α(1 + η)τ + 2 log(m)/(ηT).    (5)
Proof. We give the proof of the first result and defer the second result to the full version of the paper.
We can interpret Algorithm 1 in the following way. We define a zero-sum game between a learner
and an adversary. The learner's action set is equal to X and the adversary's action set is equal to [m].
The loss of the learner when he picks x ∈ X and the adversary picks i ∈ [m] is defined as L_i(x).
The corresponding payoff of the adversary is L_i(x).

We will run no-regret dynamics on this zero-sum game, where at every iteration t = 1, ..., T, the
adversary will pick a distribution over functions and subsequently the learner picks a solution x_t.
For simpler notation we will denote with w_t the probability density function on [m] associated with
the distribution of the adversary. That is, w_t[i] is the probability of picking function L_i ∈ L. The
adversary picks a distribution w_t based on some arbitrary no-regret learning algorithm on the m
actions in L. For concreteness consider the case where the adversary picks a distribution based on the
multiplicative weight updates algorithm, i.e.,

    w_t[i] ∝ exp{ √(log(m)/2T) ∑_{τ=1}^{t−1} L_i(x_τ) }.    (6)

Subsequently the learner picks a solution x_t that is the output of the α-approximate stochastic oracle
on the distribution selected by the adversary at time-step t. That is,

    x_t = M(w_t).    (7)
² All our results easily extend to the case where the oracle computes a solution that is approximately optimal
up to an additive error, rather than a multiplicative one. For simplicity of exposition we present the multiplicative
error case as it is more in line with the literature on approximation algorithms.
Write (T ) =
that
q
2 log(m)
.
T
By the guarantees of the no-regret algorithm for the adversary, we have
T
T
1X
1X
EI?wt [LI (xt )] ? max
Li (xt ) ? (T ).
T t=1
i?[m] T
t=1
(8)
Combining the above with the guarantee of the stochastic oracle we have
? = min max Li (x) ? min
x?X i?[m]
x?X
?
T
T
1X
1X
EI?wt [LI (x)] ?
min EI?wt [LI (x)]
T t=1
T t=1 x?X
T
1X1
? EI?wt [LI (xt )]
T t=1 ?
1
?
?
?
!
T
1X
Li (xt ) ? (T ) .
max
i?[m] T
t=1
(By oracle guarantee for each t)
(By no-regret of adversary)
Thus, if we define with P to be the uniform distribution over {x1 , . . . , xT }, then we have derived
max Ex?P [Li (x)] ? ?? + (T )
i?[m]
(9)
as required.
A corollary of Theorem 1 is that if the solution space X is convex and the objective functions L_i ∈ L
are all convex functions, then we can compute a single solution x* that is approximately minimax
optimal. Of course, in this setting one can calculate and optimize the maximum loss directly in time
proportional to |L|; this result therefore has the most bite when the set of functions is large.

Corollary 2. If the space X is a convex space and each loss function L_i ∈ L is a convex function,
then the point x* = (1/T) ∑_{t=1}^T x_t ∈ X, where {x_1, ..., x_T} are the output of Algorithm 1, satisfies:

    max_{i∈[m]} L_i(x*) ≤ ατ + √(2 log(m)/T).    (10)
Proof. By Theorem 1, we get that if P is the uniform distribution over {x_1, ..., x_T} then

    max_{i∈[m]} E_{x∼P}[L_i(x)] ≤ ατ + √(2 log(m)/T).

Since X is convex, the solution x* = E_{x∼P}[x] is also part of X. Moreover, since each L_i ∈ L is
convex, we have that E_{x∼P}[L_i(x)] ≥ L_i(E_{x∼P}[x]) = L_i(x*). We therefore conclude

    max_{i∈[m]} L_i(x*) ≤ max_{i∈[m]} E_{x∼P}[L_i(x)] ≤ ατ + √(2 log(m)/T)

as required.
2.2  Robust Statistical Learning
Next we apply our main theorem to statistical learning. Consider regression or classification settings
where data points are pairs (z, y), z ∈ Z is a vector of features, and y ∈ Y is the dependent variable.
The solution space X is then a space of hypotheses H, with each h ∈ H a function from Z to Y. We
also assume that Y is a convex subset of a finite-dimensional vector space.

We are given a set of loss functions L = {L_1, ..., L_m}, where each L_i ∈ L is a functional
L_i : H → [0, 1]. Theorem 1 implies that, given an α-approximate stochastic optimization oracle,
we can compute a distribution over T hypotheses from H that achieves an α-approximate minimax
guarantee. If the loss functionals are convex over hypotheses, then we can compute a single ensemble
hypothesis h* (possibly from a larger space of hypotheses, if H is non-convex) that achieves this
guarantee.
Theorem 3. Suppose that L = {L_1, ..., L_m} are convex functionals. Then the ensemble hypothesis
h* = (1/T) ∑_{t=1}^T h_t, where {h_1, ..., h_T} are the hypotheses output by Algorithm 1 given an
α-approximate stochastic oracle, satisfies

    max_{i∈[m]} L_i(h*) ≤ α · min_{h∈H} max_{i∈[m]} L_i(h) + √(2 log(m)/T).    (11)

Proof. The proof is similar to the proof of Corollary 2.
We emphasize that the convexity condition in Theorem 3 is over the class of hypotheses, rather than
over features or any natural parameterization of H (such as weights in a neural network). This is a
mild condition that applies to many examples in statistical learning theory. For instance, consider the
case where each loss L_i(h) is the expected value of some ex-post loss function ℓ_i(h(z), y) given a
distribution D_i over Z × Y:

    L_i(h) = E_{(z,y)∼D_i}[ℓ_i(h(z), y)].    (12)

In this case, it is enough for the function ℓ_i(·, ·) to be convex with respect to its first argument
(i.e., the predicted dependent variable). This is satisfied by most loss functions used in machine
learning, such as the multinomial logistic loss (cross-entropy loss) ℓ(ŷ, y) = −∑_{c∈[k]} y_c log(ŷ_c) from
multi-class classification, the hinge loss, or the squared loss ℓ(ŷ, y) = ‖ŷ − y‖² as used in
regression. For all these settings, Theorem 3 provides a tool for improper robust learning, where the
final hypothesis h* is an ensemble of T base hypotheses from H. Again, the underlying optimization
problem can be arbitrarily non-convex in the natural parameters of the hypothesis space; in Section 3.1
we will show how to apply this approach to robust training of neural networks, where the stochastic
oracle is simply a standard network training method. For neural networks, the fact that we achieve
improper learning (as opposed to standard learning) corresponds to training a neural network with a
single extra layer relative to the networks generated by the oracle.
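The convexity-in-the-prediction condition is easy to check numerically. The following toy snippet (illustrative numbers only, not from the experiments) verifies, for the cross-entropy loss, that the averaged ensemble hypothesis of Theorem 3 does no worse than the average loss of its base hypotheses:

```python
import math

def cross_entropy(pred, label):
    """Multinomial logistic loss of a predicted distribution for a one-hot label."""
    return -math.log(pred[label])

# Predicted class distributions of two base hypotheses on one input.
h1 = [0.7, 0.2, 0.1]
h2 = [0.2, 0.7, 0.1]
ensemble = [(a + b) / 2 for a, b in zip(h1, h2)]  # the averaged hypothesis h*

label = 0
avg_loss = (cross_entropy(h1, label) + cross_entropy(h2, label)) / 2
ens_loss = cross_entropy(ensemble, label)
# Jensen's inequality for a loss convex in the prediction: ens_loss <= avg_loss.
```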
2.3  Robust Submodular Maximization
In robust submodular maximization we are given a family of reward functions F = {f_1, ..., f_m},
where each f_i ∈ F is a monotone submodular function from a ground set N of n elements to [0, 1].
Each function is assumed to be monotone and submodular, i.e., for any S ⊆ T ⊆ N, f_i(S) ≤ f_i(T);
and for any S, T ⊆ N, f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T). The goal is to select a set S ⊆ N
of size k whose worst-case value over i, i.e., g(S) = min_{i∈[m]} f_i(S), is at least a 1/α factor of the
minimax optimum τ = max_{T:|T|≤k} min_{i∈[m]} f_i(T).

This setting is a special case of our general robust optimization setting (phrased in terms of rewards
rather than losses). The solution space X is equal to the set of subsets of size k among all elements in
N and the set F is the set of possible objective functions. The stochastic oracle of Definition 1,
instantiated in this setting, asks for the following: given a convex combination of submodular
functions F(S) = ∑_{i=1}^m w[i] · f_i(S), compute a set S* such that F(S*) ≥ (1/α) max_{S:|S|≤k} F(S).
Computing the maximum value set of size k is NP-hard even for a single submodular function. The
following very simple greedy algorithm computes a (1 − 1/e)-approximate solution [15]: begin with
S_cur = ∅, and at each iteration add to the current solution S_cur the element j ∈ N − S_cur that has
the largest marginal contribution: f({j} ∪ S_cur) − f(S_cur). Moreover, this approximation ratio is
known to be the best possible in polynomial time [14]. Since a convex combination of monotone
submodular functions is also a monotone submodular function, we immediately get that there exists a
(1 − 1/e)-approximate stochastic oracle that can be computed in polynomial time. The algorithm is
formally given in Algorithm 2. Combining the above with Theorem 1 we get the following corollary.
Corollary 4. Algorithm 1, with stochastic oracle M_greedy, computes in time poly(T, n) a distribution
P over sets of size k, defined as a uniform distribution over a set {S_1, ..., S_T}, such that

    min_{i∈[m]} E_{S∼P}[f_i(S)] ≥ (1 − 1/e)(1 − ε)τ − log(m)/(εT).    (13)
We show in the full version of the paper that computing a single set S that achieves a (1 − 1/e)-approximation to τ is also NP-hard. This is true even if the functions f_i are additive. However, by
Algorithm 2 Greedy Stochastic Oracle for Submodular Maximization M_greedy
Input: Set of elements N, objectives F = {f_1, ..., f_m}, distribution over objectives w
Set S_cur = ∅
for j = 1 to k do
    Let j* = argmax_{j ∈ N − S_cur} ∑_{i=1}^m w[i] (f_i({j} ∪ S_cur) − f_i(S_cur))
    Set S_cur = {j*} ∪ S_cur
end for
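A minimal implementation of Algorithm 2 follows. The set functions are passed as plain Python callables; the additive objectives in the usage below are a hypothetical stand-in (additive functions are submodular), and any monotone submodular f_i works the same way:

```python
def greedy_oracle(elements, fs, w, k):
    """Algorithm 2 (sketch): greedily maximize the weighted combination
    F(S) = sum_i w[i] * fs[i](S). Since F is monotone submodular, this is a
    (1 - 1/e)-approximate stochastic oracle for the robust problem."""
    S = set()
    for _ in range(k):
        def marginal(j):
            # w-weighted marginal contribution of adding j to S
            return sum(wi * (f(S | {j}) - f(S)) for wi, f in zip(w, fs))
        best = max((j for j in elements if j not in S), key=marginal)
        S.add(best)
    return S
```

For example, with two additive functions over N = {0, 1, 2} and uniform weights, the oracle picks the elements with the largest w-averaged values.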
Figure 1: Sample MNIST image with each of the corruptions applied to it. Background Corruption
Set & Shrink Corruption Set (top). Pixel Corruption Set & Mixed Corruption Set (bottom).
allowing a randomized solution over sets we can achieve a constant factor approximation to τ in
polynomial time.

Since the functions are monotone, the above result implies a simple way of constructing a single set
S* that is of larger size than k, which deterministically achieves a constant factor approximation to τ.
The latter holds by simply taking the union of the sets {S_1, ..., S_T} in the support of the distribution
returned by Algorithm 1. We get the following bi-criterion approximation scheme.

Corollary 5. Suppose that we run the reward version of Algorithm 1, with η = ε and for T = log(m)/ε²,
returning {S_1, ..., S_T}. Then the set S* = S_1 ∪ ... ∪ S_T, which is of size at most k log(m)/ε², satisfies

    min_{i∈[m]} f_i(S*) ≥ (1 − 1/e − 2ε)τ.    (14)
3  Experiments

3.1  Robust Classification with Neural Networks
A classic application of our robust optimization framework is classification with neural networks
for corrupted or perturbed datasets. We have a data set Z of pairs (z, y) of an image z ∈ Z and
label y ∈ Y that can be corrupted in m different ways, which produces data sets Z_1, ..., Z_m. The
hypothesis space H is the set of all neural nets of some fixed architecture, one for each possible
assignment of weights. We denote each such hypothesis with h(·; θ) : Z → Y for θ ∈ R^d, with d
being the number of parameters (weights) of the neural net. If we let D_i be the uniform distribution
over each corrupted data set Z_i, then we are interested in minimizing the empirical cross-entropy
(aka multinomial logistic) loss in the worst case over these different distributions D_i. The latter is a
special case of our robust statistical learning framework from Section 2.2.

Training a neural network is a non-convex optimization problem and we have no guarantees on its
performance. We instead assume that for any given distribution D over pairs (z, y) of images and
labels and for any loss function ℓ(h(z; θ), y), training a neural net with stochastic gradient descent
run on images drawn from D can achieve an α approximation to the optimal expected loss, i.e.
min_{θ∈R^d} E_{(z,y)∼D}[ℓ(h(z; θ), y)]. Notice that this implies an α-approximate stochastic oracle for
the corrupted dataset robust training problem: for any distribution w over the different corruptions
[m], the stochastic oracle asks to give an α-approximation to the minimization problem:

    min_{θ∈R^d} ∑_{i=1}^m w[i] · E_{(z,y)∼D_i}[ℓ(h(z; θ), y)]    (15)
The latter is simply another expected loss problem, with the distribution over images being the mixture
distribution defined by first drawing a corruption index i from w and then drawing a corrupted
image from distribution D_i. Hence, our oracle assumption implies that SGD on this mixture is an
α-approximation. By linearity of expectation, an alternative way of viewing the stochastic oracle
problem is that we are training a neural net on the original distribution of images, but with the loss
function being the weighted combination of loss functions ∑_{i=1}^m w[i] · ℓ(h(c_i(z); θ), y), where
c_i(z) is the i-th corrupted version of image z. In our experiments we implemented both of these
interpretations of the stochastic oracle, which we call the Hybrid Method and Composite Method,
respectively, when designing our neural network training scheme (see the full version of the paper
for further details). Finally, because we use the cross-entropy loss, which is convex in the prediction
of the neural net, we can also apply Theorem 3 to get that the ensemble neural net, which takes the
average of the predictions of the neural nets created at each iteration of the robust optimization, will
also achieve good worst-case loss (we refer to this as Ensemble Bottleneck Loss).
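A sketch of the first interpretation (the Hybrid Method) in Python: each example in a mini-batch is drawn by first sampling a corruption index from the adversary's weights w and then corrupting the image. The names here (sample_corrupted_batch, corruptions) are our own illustration, not the paper's code:

```python
import random

def sample_corrupted_batch(data, corruptions, w, batch_size, rng=random):
    """Draw a mini-batch from the mixture over corruption types defined by w.

    data: list of (image, label) pairs from the clean distribution.
    corruptions: list of m callables, each mapping an image to its corrupted
                 version (the black-box corruption models).
    w: the adversary's distribution over the m corruption types."""
    batch = []
    for _ in range(batch_size):
        z, y = rng.choice(data)
        i = rng.choices(range(len(corruptions)), weights=w)[0]
        batch.append((corruptions[i](z), y))
    return batch
```

By linearity of expectation, running SGD on batches drawn this way minimizes the weighted objective in (15); the Composite Method instead applies every corruption to every example and weights the per-corruption losses by w.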
Experiment Setup. We use the MNIST handwritten digits data set containing 55000 training
images, 5000 validation images, and 10000 test images, each image being a 28 × 28 pixel grayscale
image. The intensities of these 784 pixels (ranging from 0 to 1) are used as input to a neural network
that has 1024 nodes in its one hidden layer. The output layer uses the softmax function to give a
distribution over digits 0 to 9. The activation function is ReLU and the network is trained using
Gradient Descent with learning parameter 0.5 through 500 iterations of mini-batches of size 100.
In general, the corruptions can be any black-box corruption of the image. In our experiments, we
consider four types of corruption (m = 4). See the full version of the paper for further details
about the corruptions.
Baselines. We consider three baselines: (i) Individual Corruption: for each corruption type i ∈ [m],
we construct an oracle that trains a neural network using the training data perturbed by corruption i,
and then returns the trained network weights as θ_t, for every t = 1, ..., T. This gives m baselines,
one for each corruption type; (ii) Even Split: this baseline alternates between training with different
corruption types between iterations. In particular, call the previous m baseline oracles O_1, ..., O_m.
Then this new baseline oracle will produce θ_t with O_{i+1}, where i ≡ t mod m, for every t = 1, ..., T;
(iii) Uniform Distribution: this more advanced baseline runs the robust optimization scheme with the
Hybrid Method (see Appendix), but without the distribution updates. Instead, the distribution over
corruption types is fixed as the discrete uniform [1/m, ..., 1/m] over all T iterations. This allows us to
check if the multiplicative weight updates in the robust optimization algorithm are providing benefit.
Results. The Hybrid and Composite Methods produce results far superior to all three baseline types,
with differences both substantial in magnitude and statistically significant. The more sophisticated
Composite Method outperforms the Hybrid Method. Increasing T improves performance, but with
diminishing returns, largely because for sufficiently large T the distribution over corruption types
has moved from the initial uniform distribution to some more optimal stable distribution (see the full
version for details). All these effects are consistent across the 4 different corruption sets tested. The
Ensemble Bottleneck Loss is empirically much smaller than the Individual Bottleneck Loss. For the best
performing algorithm, the Composite Method, the mean Ensemble Bottleneck Loss (mean Individual
Bottleneck Loss) with T = 50 was 0.34 (1.31) for the Background Set, 0.28 (1.30) for the Shrink Set, 0.19
(1.25) for the Pixel Set, and 0.33 (1.25) for the Mixed Set. Thus combining the T classifiers obtained from
robust optimization is practical for making predictions on new data.
3.2  Robust Influence Maximization
We apply the results of Section 2.3 to the robust influence maximization problem. Given a directed
graph G = (V, E), the goal is to pick a seed set S of k nodes that maximizes an influence function
f_G(S), where f_G(S) is the expected number of individuals influenced by the opinion of the members of
S. We take f_G(S) to be the number of nodes reachable from S (our results extend to other models).
In robust influence maximization, the goal is to maximize influence in the worst case (Bottleneck
Influence) over m functions {f_1, ..., f_m}, corresponding to m graphs {G_1, ..., G_m}, for some fixed
seed set of size k. This is a special case of robust submodular maximization after rescaling to [0, 1].
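The influence function used here, the number of nodes reachable from the seed set, is a few lines of breadth-first search. This is a sketch under the stated reachability model, with each graph represented as an adjacency-list dict:

```python
from collections import deque

def influence(graph, seeds):
    """f_G(S): number of nodes reachable from seed set S in a directed graph
    given as {node: [out-neighbors, ...]} (seed nodes count as reached)."""
    reached = set(seeds)
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in reached:
                reached.add(v)
                queue.append(v)
    return len(reached)

def bottleneck_influence(graphs, seeds):
    """Worst-case (Bottleneck) influence of S over the m sampled graphs."""
    return min(influence(g, seeds) for g in graphs)
```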
Figure 2: Comparison of methods, showing mean of 10 independent runs and a 95% confidence band. The
criterion is Individual Bottleneck Loss: max_{i∈[m]} E_{θ∼P}[ℓ_i(h(z; θ), y)], where P is uniform over all solutions θ_i
for that method. Baselines (i) and (ii) are not shown as they produce significantly higher loss. (Panels:
Background Set, Shrink Set, Pixel Set, Mixed Set; y-axis: Individual Bottleneck Loss; x-axis: Number of
Iterations T; methods: Uniform, Hybrid, Composite.)
Experiment Setup. Given a base directed graph G(V, E), we produce m graphs G_i = (V, E_i) by
randomly including each edge e ∈ E with some probability p. We consider two base graphs and two
sets of parameters for each: (i) The Wikipedia Vote Graph [11]. In Experiment A, the parameters are
|V| = 7115, |E| = 103689, m = 10, p = 0.01 and k = 10. In Experiment B, change p = 0.015 and
k = 3. (ii) The Complete Directed Graph on |V| = 100 vertices. In Experiment A, the parameters
are m = 50, p = 0.015 and k = 2. In Experiment B, change p = 0.01 and k = 4.

Baselines. We compared our algorithm (Section 2.3) to three baselines: (i) Uniform over Individual
Greedy Solutions: apply greedy maximization (Algorithm 2) on each graph separately, to get
solutions {S_1^g, ..., S_m^g}. Return the uniform distribution over these solutions; (ii) Greedy on Uniform
Distribution over Graphs: return the output of greedy submodular maximization (Algorithm 2)
on the uniform distribution over influence functions. This can be viewed as maximizing expected
influence; (iii) Uniform over Greedy Solutions on Multiple Perturbed Distributions: generate T
distributions {w̃_1, ..., w̃_T} over the m functions, by randomly perturbing the uniform distribution.
Perturbation magnitudes were chosen such that w̃_t has the same expected ℓ1 distance from uniform as the
distribution returned by robust optimization at iteration t.

Results. For both graph experiments, robust optimization outperforms all baselines on Bottleneck
Influence; the difference is statistically significant as well as large in magnitude for all T > 50.
Moreover, the individual seed sets generated at each iteration t of robust optimization themselves
achieve empirically good influence as well; see the full version for further details.
References
[1] Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In
Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York
City, NY, USA, June 19-24, 2016, pages 699–707, 2016.
[2] Sabyasachi Chatterjee, John C. Duchi, John D. Lafferty, and Yuancheng Zhu. Local minimax
complexity of stochastic convex optimization. In Advances in Neural Information Processing
Systems 29: Annual Conference on Neural Information Processing Systems 2016, December
5-10, 2016, Barcelona, Spain, pages 3423–3431, 2016.
Figure 3: Comparison for various T, showing mean Bottleneck Influence and 95% confidence on 10 runs.
(Panels: Wikipedia Graph A and B, Complete Graph A and B; y-axis: Bottleneck Influence; x-axis: Number
of Iterations T; methods: Robust Opt, Perturbed Dist, Uniform Dist, Individual.)
[3] Wei Chen, Tian Lin, Zihan Tan, Mingfei Zhao, and Xuren Zhou. Robust influence maximization.
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 795–804, 2016.
[4] Wei Chen, Tian Lin, Zihan Tan, Mingfei Zhao, and Xuren Zhou. Robust influence maximization.
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 795–804, 2016.
[5] Elad Hazan, Kfir Y. Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex
optimization. In Advances in Neural Information Processing Systems 28: Annual Conference
on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec,
Canada, pages 1594–1602, 2015.
[6] Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for
stochastic non-convex problems. In Proceedings of the 33rd International Conference on
Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1833–1841,
2016.
[7] Xinran He and David Kempe. Robust influence maximization. In Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco,
CA, USA, August 13-17, 2016, pages 885–894, 2016.
[8] Xinran He and David Kempe. Robust influence maximization. In Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco,
CA, USA, August 13-17, 2016, pages 885–894, 2016.
[9] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through
a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, KDD '03, pages 137–146, New York, NY, USA, 2003.
ACM.
[10] Andreas Krause, H. Brendan McMahan, Carlos Guestrin, and Anupam Gupta. Selecting observations against adversarial objectives. In Advances in Neural Information Processing Systems
20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing
Systems, Vancouver, British Columbia, Canada, December 3-6, 2007, pages 777–784, 2007.
[11] Jure Leskovec. Wikipedia vote network. Stanford Network Analysis Project.
[12] Meghna Lowalekar, Pradeep Varakantham, and Akshat Kumar. Robust influence maximization:
(extended abstract). In Proceedings of the 2016 International Conference on Autonomous
Agents & Multiagent Systems, Singapore, May 9-13, 2016, pages 1395–1396, 2016.
[13] Hongseok Namkoong and John C. Duchi. Stochastic gradient methods for distributionally
robust optimization with f-divergences. In Advances in Neural Information Processing Systems
29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016,
Barcelona, Spain, pages 2208–2216, 2016.
[14] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a
submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
[15] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing
submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
[16] Shai Shalev-Shwartz and Yonatan Wexler. Minimizing the maximal loss: How and why. In
Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York
City, NY, USA, June 19-24, 2016, pages 793–801, 2016.
[17] Jacob Steinhardt and John C. Duchi. Minimax rates for memory-bounded sparse linear regression. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France,
July 3-6, 2015, pages 1564–1587, 2015.
Thy Friend is My Friend: Iterative Collaborative
Filtering for Sparse Matrix Estimation
Christian Borgs
Jennifer Chayes
Christina E. Lee
[email protected]
[email protected]
[email protected]
Microsoft Research New England
One Memorial Drive, Cambridge MA, 02142
Devavrat Shah
[email protected]
Massachusetts Institute of Technology
77 Massachusetts Ave, Cambridge, MA 02139
Abstract
The sparse matrix estimation problem consists of estimating the distribution of
an n × n matrix Y , from a sparsely observed single instance of this matrix where
the entries of Y are independent random variables. This captures a wide array
of problems; special instances include matrix completion in the context of recommendation systems, graphon estimation, and community detection in (mixed
membership) stochastic block models. Inspired by classical collaborative filtering
for recommendation systems, we propose a novel iterative, collaborative filtering-style algorithm for matrix estimation in this generic setting. We show that the
mean squared error (MSE) of our estimator goes to 0 as long as ω(d^2 n) random
entries from a total of n2 entries of Y are observed (uniformly sampled), E[Y ] has
rank d, and the entries of Y have bounded support. The maximum squared error
across all entries converges to 0 with high probability as long as we observe a little
more, ω(d^2 n ln^2(n)) entries. Our results are the best known sample complexity
results in this generality. Our intuitive, easy to implement iterative nearest-neighbor
style algorithm matches the conjectured sample complexity lower bound of d2 n
for a computationally efficient algorithm for detection in the mixed membership
stochastic block model.
1 Introduction
In this work, we propose and analyze an iterative similarity-based collaborative filtering algorithm
for the sparse matrix completion problem with noisily observed entries. As a prototype for such a
problem, consider a noisy observation of a social network where observed interactions are signals
of true underlying connections. We might want to predict the probability that two users would
choose to connect if recommended by the platform, e.g. LinkedIn. As a second example, consider
a recommendation system where we observe movie ratings provided by users, and we may want
to predict the probability distribution over ratings for specific movie-user pairs. The classical
collaborative filtering approach is to compute similarities between pairs of users by comparing their
commonly rated movies. For a social network, similarities between users would be computed by
comparing their sets of friends. We will be particularly interested in the very sparse case where most
pairs of users have no common friends, or most pairs of users have no commonly rated movies; thus
there is insufficient data to compute the traditional similarity metrics.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
To overcome this limitation, we propose a novel algorithm which computes similarities iteratively,
incorporating information within a larger radius neighborhood. Whereas traditional collaborative
filtering learns the preferences of a user through the ratings of her/his ?friends?, i.e. users who share
similar ratings on commonly rated movies, our algorithm learns about a user through the ratings of
the friends of her/his friends, i.e. users who may be connected through an indirect path in the data.
For a social network, this intuition translates to computing similarities of two users by comparing
the boundary of larger radius neighborhoods of their connections in the network. While an actual
implementation of our algorithm will benefit from modifications to make it practical, we believe
that our approach is very practical; indeed, we plan to implement it in a corporate setting. Like all
such nearest-neighbor style algorithms, our algorithm can be accelerated and scaled to large datasets
in practice by using a parallel implementation via an approximate nearest neighbor data structure.
In this paper, however, our goal is to describe the basic setting and concept of the algorithm, and
provide clear mathematical foundation and analysis. The theoretical results indicate that this method
achieves consistency (i.e. guaranteed convergence to the correct solution) for very sparse datasets for
a reasonably general Latent Variable Model with bounded entries.
The problems discussed above can be mathematically formulated as a matrix estimation problem,
where we observe a sparse subset of entries in an m ? n random matrix Y , and we wish to complete
or de-noise the matrix by estimating the probability distribution of Yij for all (i, j). Suppose that Yij
is categorical, taking values in [k] according to some unknown distribution. The task of estimating the
distribution of Y_ij can be reduced to k − 1 smaller tasks of estimating the expectation of a binary data
matrix, e.g. Y^t where Y^t_ij = I(Y_ij = t) and E[Y^t_ij] = P(Y_ij = t). If the matrix that we would like to
learn is asymmetric, we can transform it to an equivalent symmetric model by defining a new data
matrix Y' = [[0, Y], [Y^T, 0]]. Therefore, for the remainder of the paper, we will assume an n × n symmetric
matrix which takes values in [0, 1] (real-valued or binary), but as argued above, our results apply
more broadly to categorical-valued asymmetric matrices. We assume that the data is generated from
a Latent Variable Model in which latent variables θ_1, ..., θ_n are sampled independently from U[0, 1],
and the distribution of Y_ij is such that E[Y_ij | θ_i, θ_j] = f(θ_i, θ_j) =: F_ij for some latent function f. Our
goal is to estimate the matrix F . It is worth remarking that the Latent Variable Model is a canonical
representation for exchangeable arrays as shown by Aldous and Hoover [5, 25, 7].
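The two reductions described above (categorical entries to binary indicator layers, and asymmetric to symmetric via a block matrix) are mechanical; the following sketch makes them concrete. The function names are ours, not the paper's, and the symmetric embedding shown is the standard block-dilation construction.

```python
import numpy as np

def to_indicator_layers(Y, k):
    """Reduce a categorical matrix Y with values in {1, ..., k} to k - 1 binary
    layers Y^t with entries I(Y_ij = t), so that E[Y^t_ij] = P(Y_ij = t)."""
    return [(Y == t).astype(float) for t in range(1, k)]

def symmetrize(Y):
    """Embed an m x n (possibly asymmetric) matrix into the (m+n) x (m+n)
    symmetric block matrix [[0, Y], [Y^T, 0]]."""
    m, n = Y.shape
    return np.block([[np.zeros((m, m)), Y], [Y.T, np.zeros((n, n))]])
```

Estimating the expectation of each binary layer recovers the full categorical distribution, and the symmetric embedding lets the remainder of the analysis assume a symmetric matrix without loss of generality.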
We present a novel algorithm for estimating F = [F_ij] from a sparsely sampled dataset {Y_ij}_{(i,j) ∈ E},
where E ⊆ [n] × [n] is generated by assuming each entry is observed independently with probability
p. We require that the latent function f , when regarded as an integral operator, has finite spectrum with
rank d. We prove that the mean squared error (MSE) of our estimates converges to zero at a rate of
O((pn)^{-1/5}) as long as the sparsity p = ω(d^2 n^{-1}) (i.e. ω(d^2 n) total observations). In addition, with
high probability, the maximum squared error converges to zero at a rate of O((pn)^{-1/5}) as long as
the sparsity p = ω(d^2 n^{-1} ln^2(n)). Our analysis applies to a generic noise setting as long as Y_ij has
bounded support. Somewhat surprisingly, our simple nearest-neighbor style algorithm matches the
conjectured sample complexity lower bound of a total of d^2 n samples for a computationally efficient
algorithm, arising in the context of the mixed membership stochastic block model for detection
(weaker than MSE going to 0).
Our work takes inspiration from [1, 2, 3], which estimates clusters of the stochastic block model by
computing distances from local neighborhoods around vertices. We improve upon their analysis to
provide MSE bounds for the general latent variable model with finite spectrum, which includes a
larger class of generative models such as mixed membership stochastic block models, while they
consider the stochastic block model with non-overlapping communities. We show that our results
hold even when the rank d increases with n, as long as d = o((pn)^{1/2}). As compared to spectral
methods such as [28, 39, 20, 19, 18], our analysis handles the general bounded noise model and holds
for sparser regimes, only requiring p = ω(n^{-1}).
Related work. The matrix estimation problem introduced above includes as specific cases problems
from different areas of literature: matrix completion popularized in the context of recommendation
systems, graphon estimation arising from the asymptotic theory of graphs, and community detection
using the stochastic block model or its generalization known as the mixed membership stochastic
block model. The key representative results for each of these are mentioned in Table 1. We discuss
the scaling of the sample complexity with respect to d (model complexity, usually rank) and n
for polynomial time algorithms, including results for both mean squared error convergence, exact
recovery in the noiseless setting, and convergence with high probability in the noisy setting. As can
Table 1: Sample complexity of related literature, grouped in sections according to the following
areas: matrix completion, 1-bit matrix completion, stochastic block model, mixed membership
stochastic block model, graphon estimation, and our results.

Paper     | Sample Complexity                | Data/Noise     | Expected matrix     | Guarantee
----------|----------------------------------|----------------|---------------------|----------------
[27]      | Ω(dn)                            | noiseless      | rank d              | MSE → 0
[28]      | Ω(dn max(log n, d)), Ω(dn)       | iid Gaussian   | rank d              | MSE → 0
[37]      | Ω(dn log n)                      | iid Gaussian   | rank d              | MSE → 0
[19]      | Ω(n max(d, log^2 n))             | iid Gaussian   | rank d              | MSE → 0
[18]      | Ω(dn log^6 n)                    | indep bounded  | rank d              | MSE → 0
[32]      | Ω(n^{3/2})                       | iid bounded    | Lipschitz           | MSE → 0
[17]      | Ω(dn log^2 n max(d, log^4 n))    | noiseless      | rank d              | exact recovery
[27]      | Ω(dn max(d, log n))              | noiseless      | rank d              | exact recovery
[39]      | Ω(dn log^2 n)                    | noiseless      | rank d              | exact recovery
[19]      | Ω(n max(d log n, log^2 n, d^2))  | binary entries | rank d              | MSE → 0
[20]      | Ω(n max(d, log n)), Ω(dn)        | binary entries | rank d              | MSE → 0
[1, 3]    | Ω(d^2 n)                         | binary entries | d blocks            | partial recovery
[1]       | Ω(dn log n)                      | binary entries | d blocks (SBM)      | exact recovery
[6]       | Ω(d^2 n polylog n)               | binary entries | rank d              | whp error → 0
[40]      | Ω(d^2 n)                         | binary entries | rank d              | detection
[4]       | Ω(n^2)                           | binary entries | monotone row sum    | MSE → 0
[43]      | Ω(n^2)                           | binary entries | piecewise Lipschitz | MSE → 0
[10]      | Ω(n)                             | binary entries | monotone row sum    | MSE → 0
this work | Ω(d^2 n)                         | indep bounded  | rank d, Lipschitz   | MSE → 0
this work | Ω(d^2 n log^2 n)                 | indep bounded  | rank d, Lipschitz   | whp error → 0
be seen from Table 1, our result provides the best sample complexity for the general matrix estimation
problem with bounded entries noise model and rank d, as the other models either require extra log
factors, or impose additional requirements on the noise model or the expected matrix. Similarly, ours
is the best known sample complexity for high probability max-error convergence to 0 for the general
rank d bounded entries setting, as other results either assume block constant or noiseless.
It is worth comparing our results with the known lower bounds on the sample complexity. For the
special case of matrix completion with an additive noise model, i.e. Y_ij = E[Y_ij] + ε_ij where the ε_ij are
i.i.d. zero mean, [16, 20] showed that Ω(dn) samples are needed for a consistent estimator, i.e. MSE
convergence to 0, and [17] showed that dn log n samples are needed for exact recovery. There is a
conjectured computational lower bound for the mixed membership stochastic block model of d2 n
even for detection, which is weaker than MSE going to 0. Recently, [40] showed a partial result
that this computational lower bound holds for algorithms that rely on fitting low-degree polynomials
to the observed data. Given that these lower bounds apply to special cases of our setting, it seems
that our result is nearly optimal if not optimal in terms of its dependence on both n and d for MSE
convergence as well as high probability (near) exact recovery.
Next we provide a brief overview of prior works reported in the Tables 1. In the context of matrix
completion, there has been much progress under the low-rank assumption. Most theoretically founded
methods are based on spectral decompositions or minimizing a loss function with respect to spectral
constraints [27, 28, 15, 17, 39, 37, 20, 19, 18]. A work that is closely related to ours is by [32]. It
proves that a similarity based collaborative filtering-style algorithm provides a consistent estimator
for matrix completion under the generic model when the latent function is Lipschitz, not just low
rank; however, it requires Õ(n^{3/2}) samples. In a sense, ours can be viewed as an algorithmic
generalization of [32] that handles the sparse sampling regime and a generic noise model. Most of
the results in matrix completion require additive noise models, which do not extend to setting when
the observations are binary or quantized. The USVT estimator is able to handle general bounded
noise, although it requires a few log factors more in its sample complexity [18]. Our work removes
the extra log factors while still allowing for general bounded noise.
There is also a significant amount of literature which looks at the estimation problem when the data
matrix is binary, also known as 1-bit matrix completion, stochastic block model (SBM) parameter
estimation, or graphon estimation. The latter two terms are found within the context of community
detection and network analysis, as the binary data matrix can alternatively be interpreted as the
adjacency matrix of a graph ? which are symmetric, by definition. Under the SBM, each vertex is
associated to one of d community types, and the probability of an edge is a function of the community
types of both endpoints. Estimating the n ? n parameter matrix becomes an instance of matrix
estimation. In SBM, the expected matrix is at most rank d due to its block structure. Precise thresholds
for cluster detection (better than random) and estimation have been established by [1, 2, 3]. Our
work, both algorithmically and technically, draws insight from this sequence of works, extending
the analysis to a broader class of generative models through the design of an iterative algorithm, and
improving the technical results with precise MSE bounds.
The mixed membership stochastic block model (MMSBM) allows each vertex to be associated to
a length d vector, which represents its weighted membership in each of the d communities. The
probability of an edge is a function of the weighted community memberships vectors of both endpoints,
resulting in an expected matrix with rank at most d. Recent work by [40] provides an algorithm for
weak detection for MMSBM with sample complexity d2 n, when the community membership vectors
are sparse and evenly weighted. They provide partial results to support a conjecture that d2 n is a
computational lower bound, separated by a gap of d from the information theoretic lower bound of
dn. This gap was first shown in the simpler context of the stochastic block model [21]. Our results
also achieve this conjectured lower bound, with a sample complexity of ?(d2 n) in order to guarantee
consistency, which is much stronger than weak detection.
Graphon estimation extends SBM and MMSBM to the generic Latent Variable Model where the
probability of an edge can be any measurable function f of real-valued types (or latent variables)
associated to each endpoint. Graphons were first defined as the limiting object of a sequence of large
dense graphs [14, 22, 34], with recent work extending the theory to sparse graphs [12, 13, 11, 41].
In the graphon estimation problem, we would like to estimate the function f given an instance of
a graph generated from the graphon associated to f . [23, 29] provide minimax optimal rates for
graphon estimation; however a majority of the proposed estimators are not computable in polynomial
time, since they require optimizing over an exponentially large space (e.g. least squares or maximum
likelihood) [42, 10, 9, 23, 29]. [10] provided a polynomial time method based on degree sorting in
the special case when the expected degree function is monotonic. To our knowledge, existing positive
results for sparse graphon estimation require either strong monotonicity assumptions [10], or rank
constraints as assumed in the SBM, the 1-bit matrix completion, and in this work.
We call special attention to the similarity based methods which are able to bypass the rank constraints,
relying instead on smoothness properties of the latent function f (e.g. Lipschitz) [43, 32]. They
hinge upon computing similarities between rows or columns by comparing commonly observed
entries. Similarity based methods, also known in the literature as collaborative filtering, have been
successfully employed across many large scale industry applications (Netflix, Amazon, Youtube) due
to its simplicity and scalability [24, 33, 30, 38]; however the theoretical results have been relatively
sparse. These recent results suggest that the practical success of these methods across a variety of
applications may be due to its ability to capture local structure. A key limitation of this approach is
that it requires a dense dataset with sufficient entries in order to compute similarity metrics, requiring
that each pair of rows or columns has a growing number of overlapped observed entries, which does
not hold when p = o(n^{-1/2}). This work overcomes this limitation in an intuitive and simple way;
rather than only considering directly overlapped entries, we consider longer ?paths? of data associated
to each row, expanding the set of associated datapoints until there is sufficient overlap. Although we
may initially be concerned that this would introduce bias and variance due to the sparse sampling,
our analysis shows that in fact the estimate does converge to the true solution.
The idea of comparing vertices by looking at larger radius neighborhoods was introduced in [1], and
has connections to belief propagation [21, 3] and the non-backtracking operator [31, 26, 36, 35, 8].
The non-backtracking operator was introduced to overcome the issue of sparsity. For sparse graphs,
vertices with high-degree dominate the spectrum, such that the informative components of the
spectrum get hidden behind the high degree vertices. The non-backtracking operator avoids paths
that immediately return to the previously visited vertex in a similar manner as belief propagation,
and its spectrum has been shown to be more well-behaved, perhaps adjusting for the high degree
vertices, which get visited very often by paths in the graph. In our algorithm, the neighborhood paths
are defined by first selecting a rooted tree at each vertex, thus enforcing that each vertex along a path
in the tree is unique. This is important in our analysis, as it guarantees that the distribution of vertices
at the boundary of each subsequent depth of the neighborhood is unbiased, since the sampled vertices
are freshly visited.
2 Model
We shall use graph and matrix notations in an interchangeable manner. For each pair of vertices (i.e.
row or column indices) u, v ∈ [n], let Y_uv ∈ [0, 1] denote its random realization. Let E denote the
edges. If (u, v) ∈ E, Y_uv is observed; otherwise it is unknown.

- Each vertex u ∈ [n] is associated to a latent variable θ_u ∼ U[0, 1], sampled i.i.d.
- For each (u, v) ∈ [n] × [n], Y_uv = Y_vu ∈ [0, 1] is a bounded random variable. Conditioned on
  {θ_i}_{i ∈ [n]}, the random variables {Y_uv}_{1 ≤ u < v ≤ n} are independent.
- F_uv := E[Y_uv | {θ_w}_{w ∈ [n]}] = f(θ_u, θ_v) ∈ [0, 1] for a symmetric L-Lipschitz function f .
- The function f , when regarded as an integral operator, has finite spectrum with rank d. That is,
  f(θ_u, θ_v) = Σ_{k=1}^d λ_k q_k(θ_u) q_k(θ_v),
  where the q_k are orthonormal L_2-integrable basis functions. We assume that there exists some B such
  that |q_k(y)| ≤ B for all k and y ∈ [0, 1].
- For every (unordered) index pair (u, v), the entry is observed independently with probability p, i.e.
  (u, v) ∈ E and M_uv = M_vu = Y_uv. If (u, v) ∉ E, then M_uv = 0.
The data (E, M) can be viewed as a weighted undirected graph over n vertices with each (u, v) ∈ E
having weight M_uv. The goal is to estimate the matrix F = [F_uv]_{u,v ∈ [n]}. Let Λ denote the d × d
diagonal matrix with {λ_k}_{k ∈ [d]} as the diagonal entries. Let the eigenvalues be sorted in such a way
that |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_d| > 0. Let Q denote the d × n matrix where Q(k, u) = q_k(θ_u). Since
Q is a random matrix depending on the sampled θ, it is not guaranteed to be an orthonormal matrix
(even though the q_k are orthonormal functions). By definition, it follows that F = Q^T ΛQ. Let d' be the
number of distinct valued eigenvalues, and let Λ̃ denote the d × d' matrix where Λ̃(a, b) = λ_a^{b-1}.
Discussing Assumptions. The latent variable model imposes a natural and mild assumption, as
Aldous and Hoover proved that if the network is exchangeable, i.e. the distribution over edges is
invariant under permutations of vertex labels, then the network can be equivalently represented by a
latent variable model [5, 25, 7]. Exchangeability is reasonable for anonymized datasets for which
the identity of entities can be easily renamed. Our model additionally requires that the function is
L-Lipschitz and has finite spectrum when regarded as an integral operator, i.e. F is low rank; this
includes interesting scenarios such as the mixed membership stochastic block model and finite degree
polynomials. We can also relax the condition to piecewise Lipschitz, as we only need to ensure that
for every vertex u there are sufficiently many vertices v which are similar in function value to u. We
assume observations are sampled independently with probability p; however, we discuss a possible
solution for dealing with non-uniform sampling in Section 5.
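To make the generative model concrete, the following sketch (our own illustration, with names that are not from the paper) samples one observation: it draws θ_u ∼ U[0,1], builds F = Q^T Λ Q from caller-supplied eigenvalues λ_k and basis functions q_k, draws a symmetric Bernoulli realization Y with E[Y_uv | θ] = F_uv, and reveals each entry independently with probability p.

```python
import numpy as np

def sample_latent_model(n, lambdas, q_funcs, p, rng):
    """Draw one observation (theta, F, mask, M) of the latent variable model:
    theta_u ~ U[0,1] i.i.d.; F_uv = sum_k lambda_k q_k(theta_u) q_k(theta_v);
    Y_uv ~ Bernoulli(F_uv), symmetric with zero diagonal; each entry of Y is
    observed (placed into M) independently with probability p."""
    theta = rng.uniform(0.0, 1.0, size=n)
    Q = np.array([[q(t) for t in theta] for q in q_funcs])      # d x n
    F = np.clip(Q.T @ np.diag(lambdas) @ Q, 0.0, 1.0)
    upper = np.triu(np.ones((n, n), dtype=bool), k=1)
    Y = np.where(rng.random((n, n)) < F, 1.0, 0.0) * upper
    Y = Y + Y.T                                                 # symmetrize
    mask = (rng.random((n, n)) < p) & upper
    mask = mask | mask.T
    M = np.where(mask, Y, 0.0)
    return theta, F, mask, M
```

Bernoulli entries are only one instance of the bounded-noise setting; any conditional distribution on [0, 1] with mean F_uv fits the model.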
3 Algorithm
The algorithm that we propose uses the concept of local approximation, first determining which
datapoints are similar in value, and then computing neighborhood averages for the final estimate. All
similarity-based collaborative filtering methods have the following basic format:

1. Compute distances between pairs of vertices, e.g.,

   dist(u, a) ≈ ∫_0^1 (f(θ_u, t) − f(θ_a, t))^2 dt.   (1)

2. Form the estimate by averaging over "nearby" datapoints,

   F̂_uv = (1/|E_uv|) Σ_{(a,b) ∈ E_uv} M_ab,   (2)

where E_uv := {(a, b) ∈ E s.t. dist(u, a) < η_n, dist(v, b) < η_n}.
The choice of η_n = (c_1 pn)^{-1/5} will be small enough to drive the bias to zero, ensuring the included
datapoints are close in value, yet large enough to reduce the variance, ensuring |E_uv| diverges.
Intuition. Various similarity-based algorithms differ in the distance computation (Step 1). For
dense datasets, i.e. p = Ω(n^{-1/2}), previous works have proposed and analyzed algorithms which
approximate the L_2 distance of (1) by using variants of the finite sample approximation

   dist(u, a) = (1/|X_ua|) Σ_{y ∈ X_ua} (F_uy − F_ay)^2,   (3)

where y ∈ X_ua iff (u, y) ∈ E and (a, y) ∈ E [4, 43, 32]. For sparse datasets, with high probability,
X_ua = ∅ for almost all pairs (u, a), such that this distance cannot be computed.
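A minimal sketch of the classical overlap-based distance of (3) makes its failure mode visible: once the overlap set X_ua is empty, which is the typical situation in the sparse regime, there is nothing to average. The function name is ours.

```python
import numpy as np

def overlap_distance(M, obs, u, a):
    """Finite-sample distance in the spirit of eq. (3): average squared
    difference over columns y observed for both row u and row a.
    Returns None when the overlap X_ua is empty."""
    common = np.flatnonzero(obs[u] & obs[a])
    if common.size == 0:
        return None          # no commonly observed entries: distance undefined
    return float(np.mean((M[u, common] - M[a, common]) ** 2))
```

Note that in practice the unknown F_uy would be replaced by the observed entries M_uy, as done here; the iterative algorithm below is designed precisely to avoid depending on these direct overlaps.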
In this paper we are interested in the sparse setting when p is significantly smaller than n^{-1/2}, down
to the lowest threshold of p = ω(n^{-1}). If we visualize the data via a graph with edge set E, then (3)
corresponds to comparing common neighbors of vertices u and a. A natural extension when u and
a have no common neighbors is to instead compare the r-hop neighbors of u and a, i.e. vertices y
which are at distance exactly r from both u and a. We compare the product of weights along edges in
the path from u to y and a to y respectively, which in expectation approximates

   ∫_{[0,1]^{r-1}} f(θ_u, t_1) (∏_{s=1}^{r-2} f(t_s, t_{s+1})) f(t_{r-1}, θ_y) dt_1 ... dt_{r-1} = Σ_k λ_k^r q_k(θ_u) q_k(θ_y) = e_u^T Q^T Λ^r Q e_y.   (4)

We choose a large enough r such that there are sufficiently many "common" vertices y which have
paths to both u and a, guaranteeing that our distance can be computed from a sparse dataset.
Algorithm Details. We present and discuss details of each step of the algorithm, which primarily
involves computing pairwise distances (or similarities) between vertices.
Step 1: Sample Splitting. We partition the datapoints into disjoint sets, which are used in different
steps of the computation to minimize correlation across steps for the analysis. Each edge in E is
independently placed into E1, E2, or E3, with probabilities c_1, c_2, and 1 − c_1 − c_2 respectively.
Matrices M1 , M2 , and M3 contain information from the subset of the data in M associated to E1 , E2 ,
and E3 respectively. M1 is used to define local neighborhoods of each vertex, M2 is used to compute
similarities of these neighborhoods, and M3 is used to average over datapoints for the final estimate
in (2).
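The splitting step is straightforward; a sketch (with our own function name) that partitions the observed edges as described:

```python
import random

def split_edges(edges, c1, c2, rng):
    """Step 1 sketch: place each observed (undirected) edge independently
    into E1 (prob c1), E2 (prob c2), or E3 (prob 1 - c1 - c2)."""
    E1, E2, E3 = [], [], []
    for e in edges:
        r = rng.random()
        (E1 if r < c1 else E2 if r < c1 + c2 else E3).append(e)
    return E1, E2, E3
```

The three resulting edge sets are disjoint by construction, which is what decorrelates the neighborhood expansion, the similarity computation, and the final averaging in the analysis.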
Step 2: Expanding the Neighborhood. We first expand local neighborhoods of radius r around each
vertex. Let S_{u,s} denote the set of vertices which are at distance s from vertex u in the graph defined
by edge set E1. Specifically, i ∈ S_{u,s} if the shortest path in G_1 = ([n], E1) from u to i has a length
of s. Let T_u denote a breadth-first tree in G_1 rooted at vertex u. The breadth-first property ensures
that the length of the path from u to i within T_u is equal to the length of the shortest path from u
to i in G_1. If there is more than one valid breadth-first tree rooted at u, choose one uniformly at
random. Let N_{u,r} ∈ [0, 1]^n denote the following vector with support on the boundary of the r-radius
neighborhood of vertex u (we also call N_{u,r} the neighborhood boundary):

   N_{u,r}(i) = ∏_{(a,b) ∈ path_{T_u}(u,i)} M_1(a, b)  if i ∈ S_{u,r},  and  N_{u,r}(i) = 0  if i ∉ S_{u,r},

where path_{T_u}(u, i) denotes the set of edges along the path from u to i in the tree T_u. The sparsity of
N_{u,r} is equal to |S_{u,r}|, and the value of the coordinate N_{u,r}(i) is equal to the product of weights
along the path from u to i. Let Ñ_{u,r} = N_{u,r}/|S_{u,r}| denote the normalized neighborhood boundary.
We will choose the radius r to be r = 6 ln(1/p) / (8 ln(c_1 pn)).
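A sketch of the neighborhood expansion (our own code, not the paper's): a breadth-first search from u in G_1 where the first visit to each vertex fixes its tree parent, accumulating the product of M_1-weights along the tree path. For simplicity, ties between candidate parents are broken by queue order rather than by a uniformly random breadth-first tree, which is a simplification of the step above.

```python
import numpy as np
from collections import deque

def neighborhood_boundary(adj, weights, u, r):
    """Step 2 sketch: BFS from u in G1; for each i in S_{u,r} (depth exactly
    r), N_{u,r}(i) is the product of edge weights on the tree path u -> i.
    Returns the normalized vector N~_{u,r} = N_{u,r} / |S_{u,r}|."""
    n = len(adj)
    depth = {u: 0}
    prod = {u: 1.0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        if depth[v] == r:
            continue                       # do not expand past the boundary
        for w in adj[v]:
            if w not in depth:             # first visit fixes the tree parent
                depth[w] = depth[v] + 1
                prod[w] = prod[v] * weights[(min(v, w), max(v, w))]
                queue.append(w)
    boundary = np.zeros(n)
    S = [i for i, d in depth.items() if d == r]
    for i in S:
        boundary[i] = prod[i]
    return boundary / len(S) if S else boundary
```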
Step 3: Computing the distances. For each vertex, we present two variants for estimating the distance.

1. For each pair (u, v), compute dist_1(u, v) according to

   dist_1(u, v) = ((1 − c_1 p)/(c_2 p)) (Ñ_{u,r} − Ñ_{v,r})^T M_2 (Ñ_{u,r+1} − Ñ_{v,r+1}).

2. For each pair (u, v), compute the distance according to

   dist_2(u, v) = Σ_{i ∈ [d']} z_i Δ_uv(r, i),

where Δ_uv(r, i) is defined as

   Δ_uv(r, i) = ((1 − c_1 p)/(c_2 p)) (Ñ_{u,r} − Ñ_{v,r})^T M_2 (Ñ_{u,r+i} − Ñ_{v,r+i}),

and z ∈ R^{d'} is a vector that satisfies Λ^{2r+2} Λ̃ z = Λ^2 1. z always exists and is unique
because Λ^{2r+2} Λ̃ is, up to row scaling, a Vandermonde matrix, and Λ^2 1 lies within the span of its columns.
Computing dist_1 does not require knowledge of the spectrum of f . In our analysis we prove that
the expected squared error of the estimate computed in (2) using dist_1 converges to zero with n for
p = Ω(n^{-1+ε}) for some ε > 0, i.e. p must be polynomially larger than n^{-1}. Although computing
dist_2 requires knowledge of the spectrum of f to determine the vector z, the expected squared error
of the estimate computed in (2) using dist_2 converges to zero for p = ω(n^{-1}), which includes the
sparser settings when p is only larger than n^{-1} by polylogarithmic factors. It seems plausible that
the technique employed by [2] could be used to design a modified algorithm which does not need
to have prior knowledge of the spectrum. They achieve this for the stochastic block model case by
bootstrapping the algorithm with a method which estimates the spectrum first and then computes
pairwise distances with the estimated eigenvalues.
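Given precomputed normalized boundary vectors, the first distance variant is a single bilinear form; the sketch below (our own naming, with `Nb` assumed to be a dict mapping (vertex, radius) to the vector Ñ from the previous step) implements it directly.

```python
import numpy as np

def dist1(Nb, M2, u, v, r, c1, c2, p):
    """Step 3, variant 1:
    dist1(u, v) = ((1 - c1*p)/(c2*p)) *
                  (N~_{u,r} - N~_{v,r})^T M2 (N~_{u,r+1} - N~_{v,r+1}),
    where Nb[(w, s)] is the normalized boundary vector of vertex w at radius s."""
    d_r = Nb[(u, r)] - Nb[(v, r)]
    d_r1 = Nb[(u, r + 1)] - Nb[(v, r + 1)]
    return (1.0 - c1 * p) / (c2 * p) * float(d_r @ M2 @ d_r1)
```

The second variant, dist_2, is a z-weighted sum of the same bilinear form evaluated at radii r + 1, ..., r + d', so it reuses this computation with different right-hand boundary vectors.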
Step 4: Averaging datapoints to produce final estimate. The estimate F̂(u, v) is computed by
averaging over nearby points defined by the distance estimates dist_1 (or dist_2). Recall that B ≥ 1
was assumed in the model definition to upper bound sup_{y ∈ [0,1]} |q_k(y)|.

Let Euv1 denote the set of undirected edges (a, b) such that (a, b) ∈ E3 and both dist_1(u, a) and
dist_1(v, b) are less than η_1(n) = (c_1 pn)^{-1/5}. The final estimate F̂(u, v) produced by using dist_1
is computed by averaging over the undirected edge set Euv1,

   F̂(u, v) = (1/|Euv1|) Σ_{(a,b) ∈ Euv1} M_3(a, b).   (5)

Let Euv2 denote the set of undirected edges (a, b) such that (a, b) ∈ E3, and both dist_2(u, a) and
dist_2(v, b) are less than η_2(n) = (c_1 pn)^{-1/5}. The final estimate F̂(u, v) produced by using dist_2
is computed by averaging over the undirected edge set Euv2,

   F̂(u, v) = (1/|Euv2|) Σ_{(a,b) ∈ Euv2} M_3(a, b).   (6)
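The final averaging step of (5)/(6) can be sketched as follows (our own naming; `dist` stands in for either precomputed distance variant):

```python
import numpy as np

def estimate_entry(u, v, dist, M3, E3, eta):
    """Step 4 sketch: average M3 over edges (a, b) in E3 whose endpoints
    are within distance eta of u and v respectively (eqs. (5)/(6))."""
    vals = [M3[a, b] for (a, b) in E3 if dist(u, a) < eta and dist(v, b) < eta]
    return float(np.mean(vals)) if vals else None
```

With the threshold eta = (c_1 pn)^{-1/5}, shrinking eta controls the bias (only close-in-value datapoints are averaged) while keeping enough edges in the average controls the variance.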
4 Main Results

We prove bounds on the estimation error of our algorithm in terms of the mean squared error (MSE),

   MSE := E[ (1/(n(n−1))) Σ_{u ≠ v} (F̂_uv − F_uv)^2 ],

which averages the squared error over all edges. It follows from the model that

   ∫_0^1 (f(θ_u, y) − f(θ_v, y))^2 dy = Σ_{k=1}^d λ_k^2 (q_k(θ_u) − q_k(θ_v))^2 = ‖ΛQ(e_u − e_v)‖_2^2.

The key part of the analysis is to show that the computed distances are in fact good estimates of
‖ΛQ(e_u − e_v)‖_2^2. The analysis essentially relies on showing that the neighborhood growth around a
vertex behaves according to its expectation, according to some properly defined notion. The radius
r must be small enough to guarantee that the growth of the size of the neighborhood boundary
is exponential, increasing at a factor of approximately c1 pn. However, if the radius is too small,
then the boundaries of the respective neighborhoods of the two chosen vertices would have a small
intersection, so that estimating the similarities based on the small intersection of datapoints would
result in high variance. Therefore, the choice of r is critical to the algorithm and analysis. We are
able to prove bounds on the squared error when r is chosen to satisfy the following conditions:
   r + d' ≤ 7 ln(1/c_1 p) / (8 ln(9 c_1 pn/8)) = Θ(ln(1/p)/ln(c_1 pn)),
   r + 1 ≥ 6 ln(1/p) / (8 ln(7 |λ_1|^2 c_2 pn / 8|λ_d|^2)) = Θ(ln(1/p)/ln(c_1 pn)).   (7)

The parameter d' denotes the number of distinct valued eigenvalues in the spectrum of f , (λ_1, ..., λ_d),
and determines the number of different radius "measurements" involved in computing dist_2(u, v).
Computing dist_1(u, v) only involves a single measurement, thus the left hand side of (7) can be
reduced to r + 1 instead of r + d'. When p is above a threshold, we choose c_1 to decrease with n to
ensure (7) can be satisfied, sparsifying the edge set E1 used for expanding the neighborhood around
a vertex. When the sample probability is polynomially larger than n^{-1}, i.e. p = n^{-1+ε} for some
ε > 0, these constraints imply that r is a constant with respect to n. However, if p = Õ(n^{-1}), we
will need r to grow with n according to a rate of 6 ln(1/p)/(8 ln(c_1 pn)).
Theorem 4.1. If p = n^{-1+ε} for some ε > 0, with a choice of c_1 such that c_1 pn =
Θ(max(pn, (p^6 n^7)^{1/19})), there exists a constant r (with respect to n) which satisfies (7). If
d = o((c_1 pn)^{1/2}), then the estimate computed using dist_1 with parameter r achieves

   MSE = O(|λ_d|^{-2r} (c_1 pn)^{-1/5}) = O((c_1 pn)^{-1/5}).

With probability greater than 1 − O(d exp(−(c_1 pn)^{1/2} / (9 B^2 d))), the estimate satisfies

   ‖F̂ − F‖_max := max_{i,j} |F̂_ij − F_ij| = O(|λ_d|^{-r} (c_1 pn)^{-1/10}).
Theorem 4.1 proves that the mean squared error (MSE) of the estimate computed with dist_1 is
bounded by O(|λ_d|^{-2r} (c_1 pn)^{-1/5}). Therefore, our algorithm with dist_1 provides a consistent
estimate when r is constant with respect to n, which occurs for p = n^{-1+ε} for some ε > 0. In fact,
the reason why the error blows up with a factor of |λ_d|^{-2r} is because we compute the distance by
summing products of weights over paths of length 2r. From (4), we see that in expectation, when
we take the product of edge weights over a path of length r from u to y, instead of computing
f(θ_u, θ_y) = e_u^T Q^T ΛQ e_y, the expression concentrates around e_u^T Q^T Λ^r Q e_y, which contains extra
factors of Λ^{r-1}. Therefore, by computing over a radius r, the calculation in dist_1 will approximate
‖Λ^{r+1} Q(e_u − e_v)‖_2^2 rather than our intended ‖ΛQ(e_u − e_v)‖_2^2, thus leading to an error factor of
|λ_d|^{-2r}. It turns out that dist_2 adjusts for this bias, as the multiple measurements Δ_uv(r, i) with
different length paths allow us to separate out e_k^T ΛQ(e_u − e_v) for all k with distinct values of λ_k.
Theorem 4.2. If p = O(n^{-2/3}), with a choice of c_1 such that c_1 pn = Θ(max(pn, (p^6 n^7)^{1/(8d'+11)})),
there exists a value for r which satisfies (7). If d = o((c_1 pn)^{1/2}) and d = o(r), then the estimate
computed using dist_2 with parameter r achieves

   MSE = O((c_1 pn)^{-1/5}).

If p = ω(n^{-1} d^2 ln^2(n)), with probability 1 − O(d exp(−(c_1 pn)^{1/2} / (9 B^2 d))), the estimate satisfies

   ‖F̂ − F‖_max := max_{i,j} |F̂_ij − F_ij| = O((c_1 pn)^{-1/10}).
Theorem 4.2 proves that the mean squared error (MSE) of the estimate computed using dist_2 is
bounded by O((c_1 pn)^{-1/5}); and thus the estimate is consistent in the ultra sparse sampling regime
of p = ω(d^2 n^{-1}). We also present high probability bounds on the squared error of each entry.
Lemma 4.3. For any u, v ∈ [n], if d = o((c_1 pn)^{1/2}), with probability at least

1 - O(d exp(-(c_1 pn)^{1/2} / (8B^2 d)) + exp(-c_3 pn (c_1 pn)^{-2/5} / (48 L^2 |λ_1|^{2r}))),

the squared error of the estimate computed with dist1 for parameter r satisfying (7) is bounded by

(F̂_uv - f(θ_u, θ_v))^2 = O(|λ_d|^{-2r} (c_1 pn)^{-1/5}).
Lemma 4.4. For any u, v ∈ [n], assuming d = o((c_1 pn)^{1/2}) and d = o(r), with probability at least

1 - O(d exp(-(c_1 pn)^{1/2} / (8B^2 d))),

the squared error of the estimate computed with dist2 for parameter r satisfying (7) is bounded by

(F̂_uv - f(θ_u, θ_v))^2 = O((c_1 pn)^{-1/5}).
5 Discussion
In this work we presented a similarity-based collaborative filtering algorithm which is provably
consistent in sparse sampling regimes, as long as the sample probability p = Ω(n^{-1}). The algorithm
computes similarity between two users by comparing their local neighborhoods. Our model assumes
that the data matrix is generated according to a latent variable model, in which the weight on an
observed edge (u, v) is equal in expectation to a function f over associated latent variables θ_u and θ_v.
We presented two variants for computing similarities (or distances) between vertices. Computing
dist1 does not require knowledge of the spectrum of f, but the estimate requires p to be polynomially
larger than n^{-1} in order to guarantee that the expected squared error converges to zero. Computing dist2
uses the knowledge of the spectrum of f, but it provides an estimate that is provably consistent
for a significantly sparser regime, only requiring that p = Ω(n^{-1}). The mean squared error of both
algorithms is bounded by O((pn)^{-1/5}). Since the computation is based on comparing local
neighborhoods within the graph, the algorithm can be easily implemented for large-scale datasets
where the data may be stored in a distributed fashion optimized for local graph computations.
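To ground the neighborhood-comparison idea, here is a deliberately simplified sketch (radius fixed to 1, no sample splitting, and a plain row comparison standing in for the dist1/dist2 path products; all simplifications are ours): distances are computed on commonly observed columns, and an entry is estimated by averaging over the nearest neighbors.

```python
import numpy as np

def estimate_entry(M, mask, u, v, n_neighbors=3):
    # Distance between users u and u': mean squared difference of their
    # observed rows on common support (a radius-1 stand-in for dist1/dist2).
    n = M.shape[0]
    dists = np.full(n, np.inf)
    for up in range(n):
        if up == u:
            continue
        common = (mask[u] == 1) & (mask[up] == 1)
        if common.sum() > 0:
            dists[up] = np.mean((M[u, common] - M[up, common]) ** 2)
    # Estimate F_uv by averaging observed (u', v) entries over nearest neighbors.
    order = np.argsort(dists)
    vals = [M[up, v] for up in order[:n_neighbors]
            if np.isfinite(dists[up]) and mask[up, v] == 1]
    return np.mean(vals) if vals else np.nan

# Rank-1 ground truth F = theta theta^T, fully observed for simplicity.
rng = np.random.default_rng(1)
theta = rng.uniform(0.5, 1.0, size=8)
F = np.outer(theta, theta)
mask = np.ones_like(F)
est = estimate_entry(F, mask, u=0, v=1)
assert abs(est - F[0, 1]) < 0.5   # neighbor average lands near the true entry
```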
Practical implementation. In practice, we do not know the model parameters, and we would use
cross validation to tune the radius r and the distance threshold. If r is either too small or too large, then the
vector N_{u,r} will be too sparse. The threshold trades off between bias and variance of the final
estimate. Since we do not know the spectrum, dist1 may be easier to compute, and still enjoys good
properties as long as r is not too large. When the sampled observations are not uniform across entries,
the algorithm may require more modifications to properly normalize for high-degree hub vertices, as
the optimal choice of r may differ depending on the local sparsity. The key computational step of
our algorithm involves comparing the expanded local neighborhoods of pairs of vertices to find the
"nearest neighbors". The local neighborhoods can be computed in parallel, as they are independent
computations. Furthermore, the local neighborhood computations are suitable for systems in which
the data is distributed across different machines in a way that optimizes local neighborhood queries.
The most expensive part of our algorithm involves computing similarities for all pairs of vertices in
order to determine the set of nearest neighbors. However, it would be possible to use approximate
nearest neighbor techniques to greatly reduce the computation, such that approximate nearest-neighbor
sets could be computed with significantly fewer than n^2 pairwise comparisons.
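The suggested cross-validation can be scaffolded generically. In this sketch the `estimator` interface and the column-mean baseline are our own stand-ins (not the paper's dist1/dist2 implementation): a fraction of the observed entries is hidden, each parameter setting is scored on the hidden entries, and the best setting is returned.

```python
import numpy as np

def select_by_validation(M, mask, estimator, param_grid, holdout_frac=0.2, seed=0):
    # Hide a fraction of the observed entries, fit on the rest, and keep the
    # parameter setting with the lowest squared error on the hidden entries.
    rng = np.random.default_rng(seed)
    obs = np.argwhere(mask == 1)
    rng.shuffle(obs)
    held = obs[: max(1, int(holdout_frac * len(obs)))]
    train_mask = mask.copy()
    train_mask[held[:, 0], held[:, 1]] = 0

    best_params, best_err = None, np.inf
    for params in param_grid:
        pred = estimator(M, train_mask, params)
        err = float(np.mean([(pred[i, j] - M[i, j]) ** 2 for i, j in held]))
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

def column_mean_baseline(M, train_mask, params):
    # Stand-in estimator: a shrunk column-mean prediction (not dist1/dist2).
    counts = np.maximum(train_mask.sum(axis=0), 1)
    col_mean = (M * train_mask).sum(axis=0) / counts
    return params["alpha"] * np.tile(col_mean, (M.shape[0], 1))

rng = np.random.default_rng(2)
theta = rng.uniform(0.5, 1.0, 10)
M = np.outer(theta, theta)
mask = (rng.uniform(size=M.shape) < 0.8).astype(float)
grid = [{"alpha": a} for a in (0.1, 0.5, 1.0)]
best, err = select_by_validation(M, mask, column_mean_baseline, grid)
```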
Non-uniform sampling. In reality, the probability that entries are observed may not be uniform across
all pairs (i, j). However, we believe that an extension of our result can also handle variations in
the sample probability, as long as the sample probability is a function of the latent variables and
scales in the same way with respect to n across all entries. Suppose that the probability of observing
(i, j) is given by p·g(θ_i, θ_j), where p is the scaling factor (containing the dependence upon n), and g
allows for constant-factor variations in the sample probability across entries as a function of the latent
variables. If we let the matrix X indicate the presence of an observation or not, then we can apply our
algorithm twice, first on the matrix X to estimate the function g, and then on the data matrix M to estimate f
times g. We can simply divide by the estimate for g to obtain the estimate for f. The limitation is that
if g(θ_i, θ_j) is very small, then the error in estimating the corresponding f(θ_i, θ_j) will have higher
variance. However, it is to be expected that error increases for edge types with fewer samples.
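The two-pass correction can be simulated directly. In this toy sketch the assumptions are ours: we average T independent samples entrywise in place of the neighborhood estimator, and g below is an arbitrary variation function chosen for the demo. Dividing the two first-moment estimates recovers f.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, p = 5, 20000, 0.4
theta = rng.uniform(0.5, 1.0, n)
f = np.outer(theta, theta)                 # target latent function values
g = 0.5 + 0.5 * np.outer(theta, theta)     # entrywise sampling variation in (0, 1]

X = (rng.uniform(size=(T, n, n)) < p * g)  # observation indicators, P(obs) = p*g
noise = 0.1 * rng.standard_normal((T, n, n))
M = (f[None, :, :] + noise) * X            # observed noisy weights (0 if unobserved)

g_hat = X.mean(axis=0) / p                 # first pass: estimate g from the mask X
fg_hat = M.mean(axis=0) / p                # second pass: estimate f * g from M
f_hat = fg_hat / g_hat                     # divide out the sampling variation

assert np.max(np.abs(f_hat - f)) < 0.05
```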
Acknowledgments
This work is supported in parts by NSF under grants CMMI-1462158 and CMMI-1634259, by
DARPA under grant W911NF-16-1-0551, and additionally by a NSF Graduate Fellowship and
Claude E. Shannon Research Assistantship.
Adaptive Classification for Prediction Under a Budget
Venkatesh Saligrama
Electrical Engineering
Boston University
Boston, MA 02215
[email protected]
Feng Nan
Systems Engineering
Boston University
Boston, MA 02215
[email protected]
Abstract
We propose a novel adaptive approximation approach for test-time resource-constrained prediction motivated by Mobile, IoT, health, security, and other applications, where constraints in the form of computation, communication, latency
and feature acquisition costs arise. We learn an adaptive low-cost system by training a gating and prediction model that limits utilization of a high-cost model to
hard input instances and gates easy-to-handle input instances to a low-cost model.
Our method is based on adaptively approximating the high-cost model in regions
where low-cost models suffice for making highly accurate predictions. We pose an
empirical loss minimization problem with cost constraints to jointly train gating
and prediction models. On a number of benchmark datasets our method outperforms the state of the art, achieving higher accuracy for the same cost.
1 Introduction
Resource costs arise during test-time prediction in a number of machine learning applications. Feature costs in Internet, Healthcare, and Surveillance applications arise due to feature extraction
time [23], and feature/sensor acquisition [19]. In addition to feature acquisition costs, communication and latency costs pose a key challenge in the design of mobile computing, or Internet-of-Things (IoT) applications, where a large number of sensors/cameras/watches/phones (known as edge
devices) are connected to a cloud.
Adaptive System: Rather than having the edge devices constantly transmit measurements/images
to the cloud where a centralized model makes prediction, a more efficient approach is to allow
the edge devices make predictions locally [12], whenever possible, saving the high communication
cost and reducing latency. Due to the memory, computing and battery constraints, the prediction
models on the edge devices are limited to low complexity. Consequently, to maintain high-accuracy,
adaptive systems are desirable. Such systems identify easy-to-handle input instances where local
edge models suffice, thus limiting the utilization cloud services for only hard instances. We propose
to learn an adaptive system by training on fully annotated training data. Our objective is to maintain
high accuracy while meeting average resource constraints during prediction-time.
There have been a number of promising approaches that focus on methods for reducing costs while
improving overall accuracy [9, 24, 19, 20, 13, 15]. These methods are adaptive in that, at test-time, resources (features, computation, etc.) are allocated adaptively depending on the difficulty of
the input. Many of these methods train models in a top-down manner, namely, attempt to build out
the model by selectively adding the most cost-effective features to improve accuracy.
In contrast we propose a novel bottom-up approach. We train adaptive models on annotated training
data by selectively identifying parts of the input space for which high accuracy can be maintained at
a lower cost. The principle advantage of our method is twofold. First, our approach can be readily
applied to cases where it is desirable to reduce costs of an existing high-cost legacy system. Second,
training top-down models in case of feature costs leads to fundamental combinatorial issues in multi-
stage search over all feature subsets (see Sec. 2). In contrast, we bypass many of these issues by
posing a natural adaptive approximation objective to partition the input space into easy and hard
cases.
In particular, when no legacy system is available, our
method consists of first learning a high-accuracy model
that minimizes the empirical loss regardless of costs. The
resulting high prediction-cost model (HPC) can be readily trained using any of the existing methods. For example, this could be a large neural network in the cloud
that achieves the state-of-the-art accuracy. Next, we
jointly learn a low-cost gating function as well as a low
prediction-cost (LPC) model so as to adaptively approximate the high-accuracy model by identifying regions of
input space where a low-cost gating and LPC model are
adequate to achieve high-accuracy. In IoT applications,
such low-complexity models can be deployed on the edge
devices to perform gating and prediction. At test-time, for
each input instance, the gating function decides whether
or not the LPC model is adequate for accurate classification. Intuitively, "easy" examples can be correctly classified using only an LPC model while "hard" examples
require the HPC model. By identifying which of the input
instances can be classified accurately with LPCs, we bypass the utilization of the HPC model, thus reducing average
prediction cost. The upper part of Figure 1 is a schematic
of our approach, where x is feature vector and y is the
predicted label; we aim to learn g and an LPC model to
adaptively approximate the HPC. The key observation as
depicted in the lower figure is that the probability of correct classification given x for a HPC model is in general
a highly complex function with higher values than that of
a LPC model. Yet there exists regions of the input space
where the LPC has competitive accuracy (as shown to the
right of the gating threshold). Sending examples in such
regions (according to the gating function) to the LPC results in no loss of prediction accuracy while reducing prediction costs.
Figure 1: Upper: single stage schematic
of our approach. We learn low-cost gating
g and a LPC model to adaptively approximate a HPC model. Lower: Key insight
for adaptive approximation. x-axis represents feature space; y-axis represents conditional probability of correct prediction; LPC
can match HPC's prediction in the input region corresponding to the right of the gating threshold but performs poorly otherwise.
Our goal is to learn a low-cost gating function that attempts to send examples on the
right to LPC and the left to HPC.
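The single-stage system of Figure 1 can be sketched in a few lines (the class name and toy gate below are our own illustration, not the paper's API): a low-cost gate routes each input to either the LPC or the HPC model, and the average cost falls with every input the gate keeps on the cheap path.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class AdaptiveSystem:
    gate: Callable[[Sequence[float]], bool]   # True -> low-cost model suffices
    lpc: Callable[[Sequence[float]], int]     # low prediction-cost model
    hpc: Callable[[Sequence[float]], int]     # high prediction-cost model
    lpc_cost: float
    hpc_cost: float

    def predict(self, x):
        # Route through the gate; return (label, cost incurred for this input).
        if self.gate(x):
            return self.lpc(x), self.lpc_cost
        return self.hpc(x), self.hpc_cost

    def average_cost(self, xs):
        return sum(self.predict(x)[1] for x in xs) / len(xs)

# Toy 1-d example: the gate treats inputs far from the decision boundary as "easy".
system = AdaptiveSystem(
    gate=lambda x: abs(x[0]) > 1.0,
    lpc=lambda x: int(x[0] > 0),
    hpc=lambda x: int(x[0] > 0),
    lpc_cost=1.0,
    hpc_cost=10.0,
)
xs = [(-2.0,), (-0.5,), (0.3,), (1.5,)]
print(system.average_cost(xs))  # -> 5.5  (two of four inputs gated to the cheap model)
```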
The problem would be simpler if our task were to primarily partition the input space into regions where LPC
models would suffice. The difficulty is that we must also
learn a low-cost gating function capable of identifying input instances for which LPC suffices. Since both prediction and gating account for cost, we favor
design strategies that lead to shared features and decision architectures between the gating function
and the LPC model. We pose the problem as a discriminative empirical risk minimization problem
that jointly optimizes for gating and prediction models in terms of a joint margin-based objective
function. The resulting objective is separately convex in gating and prediction functions. We propose
an alternating minimization scheme that is guaranteed to converge since with appropriate choice of
loss-functions (for instance, logistic loss), each optimization step amounts to a probabilistic approximation/projection (I-projection/M-projection) onto a probability space. While our method can be
recursively applied in multiple stages to successively approximate the adaptive system obtained in
the previous stage, thereby refining accuracy-cost trade-off, we observe that on benchmark datasets
even a single stage of our method outperforms state-of-art in accuracy-cost performance.
2 Related Work
Learning decision rules to minimize error subject to a budget constraint during prediction-time is an
area of active interest [9, 17, 24, 19, 22, 20, 21, 13, 16]. Pre-trained Models: In one instantiation
of these methods it is assumed that there exists a collection of prediction models with amortized
costs [22, 19, 1] so that a natural ordering of prediction models can be imposed. In other instances,
the feature dimension is assumed to be sufficiently low so as to admit an exhaustive enumeration of
all the combinatorial possibilities [20, 21]. These methods then learn a policy to choose amongst
the ordered prediction models. In contrast we do not impose any of these restrictions. Top-Down
Methods: For high-dimensional spaces, many existing approaches focus on learning complex adaptive decision functions top-down [9, 24, 13, 21]. Conceptually, during training, top-down methods
acquire new features based on their utility value. This requires exploration of partitions of the input
space together with different combinatorial low-cost feature subsets that would result in higher accuracy. These methods are based on multi-stage exploration leading to combinatorially hard problems.
Different novel relaxations and greedy heuristics have been developed in this context. Bottom-up
Methods: Our work is somewhat related to [16], who propose to prune a fully trained random forest
(RF) to reduce costs. Nevertheless, in contrast to our adaptive system, their perspective is to compress the original model and utilize the pruned forest as a stand-alone model for test-time prediction.
Furthermore, their method is specifically tailored to random forests.
Another set of related work includes classifier cascade [5] and decision DAG [3], both of which aim
to re-weight/re-order a set of pre-trained base learners to reduce prediction budget. Our method,
on the other hand, only requires to pre-train a high-accuracy model and jointly learns the low-cost
models to approximate it; therefore ours can be viewed as complementary to the existing work.
The teacher-student framework [14] is also related to our bottom-up approach; a low-cost student
model learns to approximate the teacher model so as to meet test-time budget. However, the goal
there is to learn a better stand-alone student model. In contrast, we make use of both the lowcost (student) and high-accuracy (teacher) model during prediction via a gating function, which
learns the limitation of the low-cost (student) model and consult the high-accuracy (teacher) model if
necessary, thereby avoiding accuracy loss. Our composite system is also related to HME [10], which
learns the composite system based on max-likelihood estimation of models. A major difference
is that HME does not address budget constraints. A fundamental aspect of budget constraints is
the resulting asymmetry, whereby, we start with an HPC model and sequentially approximate with
LPCs. This asymmetry leads us to propose a bottom-up strategy where the high-accuracy predictor
can be separately estimated and is critical to posing a direct empirical loss minimization problem.
3 Problem Setup
We consider the standard learning scenario of resource constrained prediction with feature costs. A training sample S = {(x^(i), y^(i)) : i = 1, . . . , N} is generated i.i.d. from an unknown distribution, where x^(i) ∈ R^K is the feature vector with an acquisition cost cα ≥ 0 assigned to each of the features α = 1, . . . , K, and y^(i) is the label for the ith example. In the case of multi-class classification y ∈ {1, . . . , M}, where M is the number of classes. Let us consider a single stage of our training method in order to formalize our setup. The model f0 is a high prediction-cost (HPC) model, which is either a priori known, or which we train to high accuracy regardless of cost considerations. We would like to learn an alternative low prediction-cost (LPC) model f1. Given an example x, at test-time, we have the option of selecting which model, f0 or f1, to utilize to make a prediction. The accuracy of a prediction model fz is modeled by a loss function ℓ(fz(x), y), z ∈ {0, 1}. We exclusively employ the logistic loss function in binary classification: ℓ(fz(x), y) = log(1 + exp(−y fz(x))), although our framework allows other loss models. For a given x, we assume that once it pays the cost to acquire a feature, its value can be efficiently cached; its subsequent use does not incur additional cost. Thus, the cost of utilizing a particular prediction model, denoted by c(fz, x), is computed as the sum of the acquisition costs of the unique features required by fz.
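As a concrete illustration of this caching rule, here is a minimal Python sketch (the helper and the cost values are hypothetical, not from the paper's code): the cost of invoking a model is the sum of acquisition costs of the unique features it needs, and features that were already acquired and cached cost nothing again.

```python
# Minimal sketch of the feature-cost accounting described above (helper and
# numbers are hypothetical): the cost of a model is the sum of acquisition
# costs of the *unique* features it needs; cached features are free.
def prediction_cost(feature_costs, used_features, already_acquired=()):
    new_features = set(used_features) - set(already_acquired)
    return sum(feature_costs[a] for a in new_features)

costs = {0: 1.0, 1: 5.0, 2: 20.0}
c_gate = prediction_cost(costs, {1})          # gate pays for feature 1
c_f1 = prediction_cost(costs, {1, 2}, {1})    # f1 reuses cached feature 1
assert (c_gate, c_f1) == (5.0, 20.0)
```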
Oracle Gating: Consider a general gating likelihood function q(z|x) with z ∈ {0, 1}, that outputs the likelihood of sending the input x to a prediction model fz. The overall empirical loss is:

  ESn Eq(z|x)[ℓ(fz(x), y)] = ESn[ℓ(f0(x), y)] + ESn[q(1|x) (ℓ(f1(x), y) − ℓ(f0(x), y))],

where the second term is the excess loss.
The first term only depends on f0 and is, from our perspective, a constant. Similarly to the average loss, we can write the average cost as (assuming the gating cost is negligible for now):

  ESn Eq(z|x)[c(fz, x)] = ESn[c(f0, x)] − ESn[q(1|x) (c(f0, x) − c(f1, x))],

where the first term is again constant and the second term is the cost reduction achieved by routing examples to f1. We can characterize the optimal gating function (see [19]) that minimizes the overall average loss subject to the average cost constraint:
  q(1|x) = 1 if ℓ(f1, x) − ℓ(f0, x) < η (c(f0, x) − c(f1, x)), and q(1|x) = 0 otherwise,

for a suitable choice η ∈ R; the left-hand side is the excess loss and the right-hand side is η times the cost reduction. This characterization encodes the important principle that if the marginal
cost reduction is smaller than the excess loss, we opt for the HPC model. Nevertheless, this characterization is generally infeasible. Note that the LHS depends on knowing how well HPC performs
on the input instance. Since this information is unavailable, this target can be unreachable with
low-cost gating.
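The thresholding rule above can be sketched numerically as follows (a toy NumPy illustration with made-up losses and costs, not the paper's implementation):

```python
import numpy as np

# Oracle gating rule: send x to the LPC model f1 (q(1|x)=1) exactly when its
# excess loss is smaller than eta times the cost reduction; otherwise fall
# back to the HPC model f0. All arrays below are hypothetical.
def oracle_gate(loss_f1, loss_f0, cost_f0, cost_f1, eta):
    excess_loss = loss_f1 - loss_f0
    cost_reduction = cost_f0 - cost_f1
    return (excess_loss < eta * cost_reduction).astype(int)  # q(1|x) per example

z = oracle_gate(np.array([0.2, 1.5]), np.array([0.1, 0.1]),
                cost_f0=10.0, cost_f1=1.0, eta=0.1)
# first example: 0.1 < 0.9 -> use f1; second: 1.4 > 0.9 -> use f0
assert z.tolist() == [1, 0]
```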
Gating Approximation: Rather than directly enforcing a low-cost structure on q, we decouple
the constraint and introduce a parameterized family of gating functions g ∈ G that attempts to mimic (or approximate) q. To ensure such approximation, we can minimize some distance measure D(q(·|x), g(x)). A natural choice for an approximation metric is the Kullback-Leibler (KL) divergence, although other choices are possible. The KL divergence between q and g is given by D_KL(q(·|x) ‖ g(x)) = Σ_z q(z|x) log( q(z|x) / σ(sgn(0.5 − z) g(x)) ), where σ(s) = 1/(1 + e^(−s)) is the sigmoid function. Besides the KL divergence, we have also proposed another symmetrized metric,
fitting g directly to the log odds ratio of q. See Suppl. Material for details.
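For the binary gate, the KL term has a simple closed form; the following NumPy sketch (a hypothetical helper, not the paper's code) evaluates it for one example:

```python
import numpy as np

# KL gating-approximation term for one example with binary z.
# q0 = q(z=0|x); the parametric gate interprets sigma(g(x)) as its
# probability of z=0 and sigma(-g(x)) as its probability of z=1.
def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def kl_gate(q0, gx, eps=1e-12):
    p0 = sigmoid(gx)    # sgn(0.5 - 0) = +1
    p1 = sigmoid(-gx)   # sgn(0.5 - 1) = -1
    return (q0 * np.log((q0 + eps) / p0)
            + (1 - q0) * np.log((1 - q0 + eps) / p1))

# KL vanishes when the gate matches q exactly, and is positive otherwise:
assert abs(kl_gate(sigmoid(2.0), 2.0)) < 1e-6
assert kl_gate(0.9, -2.0) > 0.0
```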
Budget Constraint: With the gating function g, the cost of predicting x depends on whether the
example is sent to f0 or f1 . Let c(f0 , g, x) denote the feature cost of passing x to f0 through g.
As discussed, this is equal to the sum of the acquisition cost of unique features required by f0 and
g for x. Similarly c(f1 , g, x) denotes the cost if x is sent to f1 through g. In many cases the cost
c(fz , g, x) is independent of the example x and depends primarily on the model being used. This
is true for linear models where each x must be processed through the same collection of features.
For these cases we write c(fz, g, x) ≜ c(fz, g). The total budget then simplifies to ESn[q(0|x)] c(f0, g) + (1 − ESn[q(0|x)]) c(f1, g) = c(f1, g) + ESn[q(0|x)] (c(f0, g) − c(f1, g)). The budget thus depends on three quantities: ESn[q(0|x)], c(f1, g) and c(f0, g). Often f0 is a high-cost model that requires most, if not all, of the features, so c(f0, g) can be considered a large constant.
Thus, to meet the budget constraint, we would like to have (a) low-cost g and f1 (small c(f1 , g));
and (b) a small fraction of examples being sent to the high-accuracy model (small ESn[q(0|x)]). We can therefore split the budget constraint into two separate objectives: (a) ensure low cost through the penalty ρ(f1, g) = λ Σα cα ‖Vα + Wα‖0, where λ is a tradeoff parameter and the indicator variables Vα, Wα ∈ {0, 1} denote whether or not feature α is required by f1 and g, respectively. Depending on the model parameterization, we can approximate ρ(f1, g) using a group-sparse norm or in a stage-wise manner, as we will see in Algorithms 1 and 2. (b) Ensure that only a P_full fraction of examples is sent to f0 via the constraint ESn[q(0|x)] ≤ P_full.
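A tiny numeric sketch of this budget decomposition (all numbers are hypothetical placeholders):

```python
# Average cost under example-independent costs:
#   c(f1,g) + E[q(0|x)] * (c(f0,g) - c(f1,g)).
c_f0_g = 50.0   # cost when the gate routes to the full model f0
c_f1_g = 4.0    # cost of the low-cost model f1 plus the gate
p_to_f0 = 0.2   # fraction of examples sent to f0, i.e. E[q(0|x)]

avg_cost = c_f1_g + p_to_f0 * (c_f0_g - c_f1_g)
assert abs(avg_cost - 13.2) < 1e-9
```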
Putting It Together: We are now ready to pose our general optimization problem:

  min_{f1 ∈ F, g ∈ G, q}  ESn[ Σ_z q(z|x) ℓ(fz(x), y) ] + D(q(·|x), g(x)) + ρ(f1, g)        (OPT)

  subject to: ESn[q(0|x)] ≤ P_full   (fraction to f0),

where the three terms are, respectively, the losses, the gating approximation, and the costs.
The objective function penalizes excess loss and ensures through the second term that this excess
loss can be enforced through admissible gating functions. The third term penalizes the feature cost
usage of f1 and g. The budget constraint limits the fraction of examples sent to the costly model f0 .
Remark 1: Directly parameterizing q leads to non-convexity. Average loss is q-weighted sum
of losses from HPC and LPC; while the space of probability distributions is convex, a finitedimensional parameterization is generally non-convex (e.g. sigmoid). What we have done is to
keep q in non-parametric form to avoid non-convexity and only parameterize g, connecting both via
a KL term. Thus, (OPT) is now convex with respect to the f1 and g for a fixed q. It is again convex
in q for a fixed f1 and g. Otherwise it would introduce non-convexity as in prior work. For instance,
in [5] a non-convex problem is solved in each inner loop iteration (line 7 of their Algorithm 1).
Remark 2: We presented the case for a single-stage approximation system. However, it is straightforward to recursively continue this process. We can then view the composite system (g, f1, f0) as a black-box predictor and train a new pair of gating and prediction models to approximate the composite system.
Remark 3: To limit the scope of our paper, we focus on reducing feature acquisition cost during
prediction as it is a more challenging (combinatorial) problem. However, other prediction-time costs
such as computation cost can be encoded in the choice of functional classes F and G in (OPT).
Surrogate Upper Bound of Composite System: We can get better insight for the first two terms
of the objective in (OPT) if we view z ∈ {0, 1} as a latent variable and consider the composite system Pr(y|x) = Σ_z Pr(z|x; g) Pr(y|x, fz). A standard application of Jensen's inequality reveals that −log(Pr(y|x)) ≤ Eq(z|x)[ℓ(fz(x), y)] + D_KL(q(z|x) ‖ Pr(z|x; g)). Therefore, the conditional entropy of the composite system is bounded by the expected value of our loss function (we overload notation and represent random variables in lower-case format):

  H(y | x) ≜ E[−log(Pr(y|x))] ≤ E_{x,y}[ Eq(z|x) ℓ(fz(x), y) + D_KL(q(z|x) ‖ Pr(z|x; g)) ].
This implies that the first two terms of our objective attempt to bound the loss of the composite
system; the third term in the objective together with the constraint serve to enforce budget limits on
the composite system.
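The Jensen bound can be checked numerically for a single example; the probabilities below are arbitrary placeholders:

```python
import math

# Numeric check of the Jensen bound for one example with binary z.
# pz[z] = Pr(z|x; g), py[z] = Pr(y|x, f_z); q is any distribution over z.
pz = [0.7, 0.3]
py = [0.9, 0.6]
q = [0.5, 0.5]

lhs = -math.log(sum(pz[z] * py[z] for z in range(2)))           # -log Pr(y|x)
expected_loss = sum(q[z] * -math.log(py[z]) for z in range(2))  # E_q l(f_z, y)
kl = sum(q[z] * math.log(q[z] / pz[z]) for z in range(2))       # KL(q || pz)
assert lhs <= expected_loss + kl
```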
Group Sparsity: Since the cost of feature re-use is zero, we encourage feature re-use among the gating and prediction models. So the fundamental question here is: how do we choose a common, sparse (low-cost) subset of features on which both g and f1 operate, such that g can effectively gate examples between f1 and f0 for accurate prediction? This is a hard combinatorial problem. The main
contribution of our paper is to address it using the general optimization framework of (OPT).
4 Algorithms
To be concrete, we instantiate our general framework (OPT) into two algorithms via different parameterizations of g, f1 : A DAPT- LIN for the linear class and A DAPT-G BRT for the non-parametric class.
Both of them use the KL-divergence as the distance measure. We also provide a third algorithm, Adapt-Lstsq, that uses the symmetrized distance, in the Suppl. Material. All of the algorithms perform alternating minimization of (OPT) over q, g, f1. Note that convergence of alternating minimization follows as in [8]. Common to all of our algorithms, we use two parameters to control cost: P_full and λ. In practice they are swept to generate various cost-accuracy tradeoffs and we choose the best one satisfying the budget B using validation data.

Algorithm 1 Adapt-Lin
  Input: (x^(i), y^(i)), P_full, λ
  Train f0. Initialize g, f1.
  repeat
    Solve (OPT1) for q given g, f1.
    Solve (OPT2) for g, f1 given q.
  until convergence

Algorithm 2 Adapt-Gbrt
  Input: (x^(i), y^(i)), P_full, λ
  Train f0. Initialize g, f1.
  repeat
    Solve (OPT1) for q given g, f1.
    for t = 1 to T do
      Find f1^t using CART to minimize (1).
      f1 = f1 + f1^t. For each feature α used, set uα = 0.
      Find g^t using CART to minimize (2).
      g = g + g^t. For each feature α used, set uα = 0.
    end for
  until convergence

Adapt-Lin: Let g(x) = gᵀx and f1(x) = f1ᵀx be linear classifiers. A feature is used if the corresponding component is non-zero: Vα = 1 if f1,α ≠ 0, and Wα = 1 if gα ≠ 0. The minimization for q solves the following problem:

  min_q  (1/N) Σ_{i=1}^N [(1 − q_i) A_i + q_i B_i − H(q_i)]
  s.t.   (1/N) Σ_{i=1}^N q_i ≤ P_full,        (OPT1)

where we have used the shorthand notations q_i = q(z = 0|x^(i)), H(q_i) = −q_i log(q_i) − (1 − q_i) log(1 − q_i), A_i = log(1 + e^(−y^(i) f1ᵀx^(i))) + log(1 + e^(gᵀx^(i))) and B_i = −log p(y^(i)|z^(i) = 0; f0) + log(1 + e^(−gᵀx^(i))). This optimization has a closed form solution: q_i = 1/(1 + e^(B_i − A_i + γ)) for some non-negative constant γ such that the constraint is satisfied. This optimization is also known as I-Projection in information geometry because of the entropy term [8]. Having optimized q, we hold it constant and minimize with respect to g, f1 by solving the problem (OPT2), where we have relaxed the non-convex cost Σα cα ‖Vα + Wα‖0 into an L2,1 norm for group sparsity, with a tradeoff parameter λ to make sure the feature budget is satisfied. Once we solve for g, f1, we can hold them constant and minimize with respect to q again. Adapt-Lin is summarized in Algorithm 1.
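The closed-form solution together with the constraint suggests a simple bisection on γ; the sketch below (an assumed implementation, not the authors' code) illustrates the I-projection step:

```python
import numpy as np

# (OPT1) I-projection step: q_i = 1/(1 + exp(B_i - A_i + gamma)), with the
# smallest gamma >= 0 (found by bisection) such that mean(q) <= P_full.
# A and B stand for the per-example quantities defined in the text; here
# they are random placeholders.
def solve_q(A, B, p_full, iters=100):
    q = lambda gamma: 1.0 / (1.0 + np.exp(B - A + gamma))
    if np.mean(q(0.0)) <= p_full:
        return q(0.0)               # gamma = 0 is already feasible
    lo, hi = 0.0, 50.0              # mean(q) is decreasing in gamma
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.mean(q(mid)) > p_full else (lo, mid)
    return q(hi)                    # hi side always satisfies the constraint

rng = np.random.default_rng(0)
A, B = rng.normal(size=100), rng.normal(size=100)
qs = solve_q(A, B, p_full=0.3)
assert np.mean(qs) <= 0.3 + 1e-6 and np.all((qs >= 0) & (qs <= 1))
```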
  min_{g,f1}  (1/N) Σ_{i=1}^N [ (1 − q_i) ( log(1 + e^(−y^(i) f1ᵀx^(i))) + log(1 + e^(gᵀx^(i))) ) + q_i log(1 + e^(−gᵀx^(i))) ] + λ Σα √(gα² + f1,α²).        (OPT2)
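For reference, the (OPT2) objective can be evaluated directly; the NumPy sketch below (with placeholder data, and bracketing as reconstructed above) mirrors the data term and the L2,1 group penalty:

```python
import numpy as np

# Evaluate the (OPT2) objective for linear g, f1 on toy data. y takes values
# in {-1, +1}; q[i] holds q(z=0|x_i). The last term is the L2,1 group
# penalty over features. All data and parameters are placeholders.
def softplus(s):
    # numerically stable log(1 + e^s)
    return np.maximum(s, 0) + np.log1p(np.exp(-np.abs(s)))

def opt2_objective(f1, g, X, y, q, lam):
    m_f1, m_g = X @ f1, X @ g
    data = np.mean((1 - q) * (softplus(-y * m_f1) + softplus(m_g))
                   + q * softplus(-m_g))
    group = lam * np.sum(np.sqrt(g ** 2 + f1 ** 2))
    return data + group

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = np.where(rng.normal(size=50) > 0, 1.0, -1.0)
q = rng.uniform(size=50)
val = opt2_objective(np.zeros(4), np.zeros(4), X, y, q, lam=0.1)
assert val > 0.0
```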
Adapt-Gbrt: We can also consider non-parametric families of classifiers such as gradient boosted trees [7]: g(x) = Σ_{t=1}^T g^t(x) and f1(x) = Σ_{t=1}^T f1^t(x), where g^t and f1^t are limited-depth regression trees. Since the trees are limited to low depth, we assume that the feature utility of each tree is example-independent: V_{α,t}(x) ≈ V_{α,t}, W_{α,t}(x) ≈ W_{α,t}, ∀x. V_{α,t} = 1 if feature α appears in f1^t, otherwise V_{α,t} = 0, and similarly for W_{α,t}. The optimization over q still solves (OPT1). We modify A_i = log(1 + e^(−y^(i) f1(x^(i)))) + log(1 + e^(g(x^(i)))) and B_i = −log p(y^(i)|z^(i) = 0; f0) + log(1 + e^(−g(x^(i)))). Next, to minimize over g, f1, denote the loss:

  ℓ(f1, g) = (1/N) Σ_{i=1}^N [ (1 − q_i) ( log(1 + e^(−y^(i) f1(x^(i)))) + log(1 + e^(g(x^(i)))) ) + q_i log(1 + e^(−g(x^(i)))) ],
which is essentially the same as the first part of the objective in (OPT2). Thus, we need to minimize
`(f1 , g) + ?(f1 , g) with respect to f1 and g. Since both f1 and g are gradient boosted trees, we
naturally adopt a stage-wise approximation for the objective. In particular, we define an impurity
function which on the one hand approximates the negative gradient of `(f1 , g) with the squared
loss, and on the other hand penalizes the initial acquisition of features by their cost cα. To capture the initial acquisition penalty, we let uα ∈ {0, 1} indicate whether feature α has already been used in previous trees (uα = 0) or not (uα = 1). uα is updated after adding each tree. Thus we arrive at the following impurities for f1 and g, respectively:
  (1/2) Σ_{i=1}^N ( −∂ℓ(f1, g)/∂f1(x^(i)) − f1^t(x^(i)) )² + λ Σα uα cα V_{α,t},        (1)

  (1/2) Σ_{i=1}^N ( −∂ℓ(f1, g)/∂g(x^(i)) − g^t(x^(i)) )² + λ Σα uα cα W_{α,t}.        (2)
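In a stage-wise step, each tree is fit to the negative gradient of ℓ(f1, g); the sketch below (a hypothetical helper, with the 1/N factor dropped) computes those regression targets, to which the cost penalty λ Σα uα cα would be added when scoring candidate splits:

```python
import numpy as np

# Negative-gradient targets of l(f1, g) for the regression trees f1^t and g^t
# (the squared-error part of impurities (1) and (2)); derivatives follow from
# the loss above. All inputs are placeholders; the 1/N factor is dropped.
def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def boosting_targets(f1_scores, g_scores, y, q):
    r_f1 = (1 - q) * y * sigmoid(-y * f1_scores)                 # -dl/df1(x_i)
    r_g = q * sigmoid(-g_scores) - (1 - q) * sigmoid(g_scores)   # -dl/dg(x_i)
    return r_f1, r_g

y = np.array([1.0, -1.0])
q = np.array([0.0, 1.0])   # q_i = q(z=0|x_i)
r_f1, r_g = boosting_targets(np.zeros(2), np.zeros(2), y, q)
# q=0: only the f1 loss drives f1; q=1: only the gating term drives g.
assert np.allclose(r_f1, [0.5, 0.0]) and np.allclose(r_g, [-0.5, 0.5])
```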
Minimizing such impurity functions balances the need to reduce the loss against re-using the already acquired features. Classification and Regression Tree (CART) [2] can be used to construct decision trees with such an impurity function. Adapt-Gbrt is summarized in Algorithm 2. Note that a similar impurity is used in GreedyMiser [24]. Interestingly, if P_full is set to 0, all the examples are forced to f1, and Adapt-Gbrt exactly recovers GreedyMiser. In this sense, GreedyMiser is a special case of our algorithm. As we will see in the next section, thanks to the bottom-up approach, Adapt-Gbrt benefits from high-accuracy initialization and is able to perform the accuracy-cost tradeoff at accuracy levels beyond what is possible for GreedyMiser.
5 Experiments
Baseline Algorithms: We consider the following simple L1 baseline approach for learning f1 and g: first perform an L1-regularized logistic regression on all data to identify a relevant, sparse subset of features; then learn f1 using training data restricted to the identified feature(s); finally, learn g based on the correctness of f1 predictions as pseudo labels (i.e. assign pseudo label 1 to example x if f1(x) agrees with the true label y and 0 otherwise). We also compare with two state-of-the-art feature-budgeted algorithms: GreedyMiser [24], a top-down method that builds out an ensemble of gradient boosted trees with a feature cost budget; and BudgetPrune [16], a bottom-up method that prunes a random forest with a feature cost budget. A number of other methods such as ASTC [13] and CSTC [23] are omitted as they have been shown to under-perform GreedyMiser on the same set of datasets [15]. Detailed experiment setups can be found in the Suppl. Material.
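The L1 baseline described above can be sketched with scikit-learn (an illustrative reimplementation under assumed hyperparameters, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch of the L1 baseline (the value of C is an assumption):
# 1) pick a sparse feature subset via L1-regularized logistic regression,
# 2) fit f1 restricted to that subset,
# 3) fit the gate g on correctness pseudo-labels of f1 (1 = f1 was right).
def l1_baseline(X, y, C=0.5):
    sel = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    feats = np.flatnonzero(np.abs(sel.coef_).ravel() > 1e-8)
    f1 = LogisticRegression().fit(X[:, feats], y)
    pseudo = (f1.predict(X[:, feats]) == y).astype(int)
    g = LogisticRegression().fit(X[:, feats], pseudo)
    return feats, f1, g

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # feature 0 informative
feats, f1, g = l1_baseline(X, y)
assert 0 in feats  # the informative feature should be selected
```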
We first visualize/verify the adaptive approximation ability of Adapt-Lin and Adapt-Gbrt on the Synthetic-1 dataset without feature costs. Next, we illustrate the key difference between Adapt-Lin and the L1 baseline approach on the Synthetic-2 as well as the Letters datasets. Finally, we compare Adapt-Gbrt with state-of-the-art methods on several resource-constrained benchmark datasets.
Figure 2: Synthetic-1 experiment without feature cost. (a): input data. (d): decision contour of RBF-SVM as f0. (b) and (c): decision boundaries of linear g and f1 at initialization and after 10 iterations of Adapt-Lin. (e) and (f): decision boundaries of boosted-tree g and f1 at initialization and after 10 iterations of Adapt-Gbrt. Examples in the beige areas are sent to f0 by g.
Power of Adaptation: We construct a 2D binary classification dataset (Synthetic-1) as shown in (a) of Figure 2. We learn an RBF-SVM as the high-accuracy classifier f0, as in (d). To better visualize the adaptive approximation process in 2D, we turn off the feature costs (i.e. set ρ(f1, g) to 0 in (OPT)) and run Adapt-Lin and Adapt-Gbrt. The initialization of g and f1 in (b) results in wrong predictions for many red points in the blue region. After 10 iterations of Adapt-Lin, f1 adapts much better to the local region assigned by g, while g sends about 60% (P_full) of examples to f0. Similarly, the initialization in (e) results in wrong predictions in the blue region. Adapt-Gbrt is able to identify the ambiguous region in the center and send those examples to f0 via g. Both of our algorithms maintain the same level of prediction accuracy as f0, yet are able to classify large fractions of examples via much simpler models.

Figure 3: A 2-D synthetic example for adaptive feature acquisition. On the left: data distributed in four clusters. The two features correspond to the x and y coordinates, respectively. On the right: accuracy-cost tradeoff curves. Our algorithm can recover the optimal adaptive system whereas an L1-based approach cannot.

Power of Joint Optimization: We return to the problem of prediction under feature budget constraints. We illustrate why a simple L1 baseline approach for learning f1 and g would not work, using a 2D dataset (Synthetic-2) as shown in Figure 3 (left). The data points are distributed in four clusters, with black triangles and red circles representing two class labels. Let both features 1 and 2 carry unit acquisition cost. A complex classifier f0 that acquires both features can achieve full accuracy
at the cost of 2. In our synthetic example, clusters 1 and 2 are given more data points so that the
L1-regularized logistic regression would produce the vertical red dashed line, separating cluster 1
from the others. So feature 1 is acquired for both g and f1 . The best such an adaptive system can
do is to send cluster 1 to f1 and the other three clusters to the complex classifier f0 , incurring an
average cost of 1.75, which is sub-optimal. Adapt-Lin, on the other hand, optimizing over q, g, f1 in an alternating manner, is able to recover the horizontal lines in Figure 3 (left) for g and f1. g sends the first two clusters to the full classifier and the last two clusters to f1. f1 correctly classifies clusters 3 and 4. So all of the examples are correctly classified by the adaptive system; yet only feature 2 needs to be acquired for clusters 3 and 4, so the overall average feature cost is 1.5, as shown by the solid curve in the accuracy-cost tradeoff plot on the right of Figure 3. This example shows that the L1 baseline approach is sub-optimal, as it does not optimize the selection of feature subsets jointly for g and f1.
[Figure 4 plots: (a) MiniBooNE, (b) Forest Covertype, (c) Yahoo! Rank, (d) CIFAR10 — Test Accuracy (Average Precision@5 for Yahoo!) vs. Average Feature Cost, comparing Adapt_Gbrt, GreedyMiser (Xu et al. 2012) and BudgetPrune (Nan et al. 2016).]
Figure 4: Comparison of Adapt-Gbrt against GreedyMiser and BudgetPrune on four benchmark datasets. RF is used as f0 for Adapt-Gbrt in (a-c), while an RBF-SVM is used as f0 in (d). Adapt-Gbrt achieves a better accuracy-cost tradeoff than the other methods. The gap is significant in (b), (c) and (d). Note the accuracy of GreedyMiser in (b) never exceeds 0.86, and its precision in (c) slowly rises to 0.138 at a cost of 658. We limit the cost range for a clearer comparison.
Table 1: Dataset Statistics

  Dataset     #Train   #Validation  #Test    #Features  Feature Costs
  Letters     12000    4000         4000     16         Uniform
  MiniBooNE   45523    19510        65031    50         Uniform
  Forest      36603    15688        58101    54         Uniform
  CIFAR10     19761    8468         10000    400        Uniform
  Yahoo!      141397   146769       184968   519        CPU units

Real Datasets: We test various aspects of our algorithms and compare with state-of-the-art feature-budgeted algorithms on five real world benchmark datasets: Letters, MiniBooNE Particle Identification and Forest Covertype from the UCI repository [6], CIFAR-10 [11], and Yahoo! Learning to Rank [4]. Yahoo! is a ranking dataset where each example is associated with features of a query-document pair together with the relevance rank of the document to the query. There are 519 such
features in total; each is associated with an acquisition cost in the set {1,5,20,50,100,150,200},
which represents the units of CPU time required to extract the feature and is provided by a Yahoo!
employee. The labels are binarized into relevant or not relevant. The task is to learn a model that
takes a new query and its associated documents and produce a relevance ranking so that the relevant
documents come on top, and to do this using as little feature cost as possible. The performance metric is Average Precision @ 5, following [16]. The other datasets have unknown feature costs, so we assign a cost of 1 to all features; the aim is to show that Adapt-Gbrt successfully selects a sparse subset of "useful" features for f1 and g. We summarize the statistics of these datasets in Table 1.
Next, we highlight the key insights from the real dataset experiments.
Generality of Approximation: Our framework allows approximation of powerful classifiers such as RBF-SVM and Random Forests, as shown in Figure 5 by the red and black curves, respectively. In particular, Adapt-Gbrt can maintain high accuracy while reducing cost. This is a key advantage for our algorithms because we can choose to approximate the f0 that achieves the best accuracy. Adapt-Lin Vs L1: Figure 5 shows that Adapt-Lin outperforms the L1 baseline method on real datasets as well. Again, this confirms the intuition from the Synthetic-2 example, as Adapt-Lin is able to iteratively select the common subset of features jointly for g and f1. Adapt-Gbrt Vs Adapt-Lin: Adapt-Gbrt leads to significantly better performance than Adapt-Lin in approximating both RBF-SVM and RF, as shown in Figure 5. This is expected, as the non-parametric non-linear classifiers are much more powerful than linear ones. Adapt-Gbrt Vs BudgetPrune: Both are bottom-up approaches that benefit from good initializations. In (a), (b) and (c) of Figure 4 we let f0 in Adapt-Gbrt be the same RF that BudgetPrune starts with. Adapt-Gbrt is able to maintain high accuracy longer as the budget decreases.
Thus, Adapt-Gbrt improves on the state-of-the-art bottom-up method. Notice in (c) of Figure 4, around the cost of 100, BudgetPrune has a spike in precision. We believe this is because the initial pruning improved the generalization performance of the RF. But in the cost region of 40-80, Adapt-Gbrt maintains much better accuracy than BudgetPrune. Furthermore, Adapt-Gbrt has the freedom to approximate the best f0 given the problem. So in (d) of Figure 4 we see that with f0 being an RBF-SVM, Adapt-Gbrt can achieve much higher accuracy than BudgetPrune.

Adapt-Gbrt Vs GreedyMiser: Adapt-Gbrt outperforms GreedyMiser on all the datasets. The gaps in Figure 5 and in (b), (c) and (d) of Figure 4 are especially significant.

Significant Cost Reduction: Without sacrificing top accuracies (within 1%), Adapt-Gbrt reduces average feature costs during test-time by around 63%, 32%, 58%, 12% and 31% on the MiniBooNE, Forest, Yahoo, CIFAR10 and Letters datasets, respectively.

Figure 5: Comparison of the L1 baseline approach, Adapt-Lin and Adapt-Gbrt based on RBF-SVM and RF as f0's on the Letters dataset.

6 Conclusions

We presented an adaptive approximation approach to account for prediction costs that arise in various applications. At test-time our method uses a gating function to identify a prediction model, among a collection of models, that is adapted to the input. The overall goal is to reduce costs without sacrificing accuracy. We learn gating and prediction models by means of a bottom-up strategy that trains low prediction-cost models to approximate high prediction-cost models in regions where low-cost models suffice. On a number of benchmark datasets our method leads to an average of 40% cost reduction without sacrificing test accuracy (within 1%). It outperforms state-of-the-art top-down and bottom-up budgeted learning algorithms, with a significant margin in several cases.
Acknowledgments
Feng Nan would like to thank Dr Ofer Dekel for ideas and discussions on resource constrained
machine learning during an internship in Microsoft Research in summer 2016. Familiarity and
intuition gained during the internship contributed to the motivation and formulation in this paper.
We also thank Dr Joseph Wang and Tolga Bolukbasi for discussions and helps in experiments. This
material is based upon work supported in part by NSF Grants CCF: 1320566, CNS: 1330008, CCF: 1527618, DHS 2013-ST-061-ED0001, NGA Grant HM1582-09-1-0037 and ONR Grant N00014-13-C-0288.
References
[1] Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for efficient inference. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 527-536, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR.
[2] Leo Breiman, Jerome Friedman, Charles J Stone, and Richard A Olshen. Classification and
regression trees. CRC press, 1984.
[3] Róbert Busa-Fekete, Djalel Benbouzid, and Balázs Kégl. Fast classification using sparse decision DAGs. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012, 2012.
[4] O Chapelle, Y Chang, and T Liu, editors. Proceedings of the Yahoo! Learning to Rank Challenge, held at ICML 2010, Haifa, Israel, June 25, 2010, 2011.
[5] Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, Olivier Chapelle, and Dor Kedem. Classifier cascade for minimizing feature evaluation cost. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, April 21-23, 2012, pages 218-226, 2012.
[6] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[7] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189-1232, 2001.
[8] Kuzman Ganchev, Ben Taskar, and João Gama. Expectation maximization and posterior constraints. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 569-576. Curran Associates, Inc., 2008.
[9] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural
Information Processing Systems (NIPS 2011), 2011.
[10] Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Comput., 6(2):181-214, March 1994.
[11] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master?s thesis,
2009.
[12] Ashish Kumar, Saurabh Goyal, and Manik Varma. Resource-efficient machine learning in 2 KB RAM for the internet of things. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1935-1944, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR.
[13] M Kusner, W Chen, Q Zhou, E Zhixiang, K Weinberger, and Y Chen. Feature-cost sensitive
learning with submodular trees of classifiers. In AAAI Conference on Artificial Intelligence,
2014.
[14] D. Lopez-Paz, B. Schölkopf, L. Bottou, and V. Vapnik. Unifying distillation and privileged information. In International Conference on Learning Representations, 2016.
[15] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Feature-budgeted random forest. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1983-1991. JMLR Workshop and Conference Proceedings, 2015.
[16] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Pruning random forests for prediction on a budget. In Advances in Neural Information Processing Systems 29, pages 2334-2342. Curran Associates, Inc., 2016.
[17] Feng Nan, Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Fast margin-based
cost-sensitive classification. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2014, Florence, Italy, May 4-9, 2014, 2014.
[18] Daniel P. Robinson and Suchi Saria. Trading-off cost of deployment versus accuracy in learning predictive models. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pages 1974-1982. AAAI Press, 2016.
[19] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In International Conference on Artificial Intelligence and Statistics, pages 581-589, 2013.
[20] Joseph Wang, Tolga Bolukbasi, Kirill Trapeznikov, and Venkatesh Saligrama. Model Selection by Linear Programming, pages 647-662. Springer International Publishing, Cham, 2014.
[21] Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In Advances in Neural Information Processing Systems 28, pages 2143-2151. Curran Associates, Inc., 2015.
[22] D. Weiss, B. Sapp, and B. Taskar. Dynamic structured model selection. In 2013 IEEE International Conference on Computer Vision, pages 2656-2663, Dec 2013.
[23] Z Xu, M Kusner, M Chen, and K. Q Weinberger. Cost-sensitive tree of classifiers. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[24] Zhixiang Eddie Xu, Kilian Q. Weinberger, and Olivier Chapelle. The greedy miser: Learning
under test-time budgets. In Proceedings of the 29th International Conference on Machine
Learning, ICML, 2012.
whether:3 motivated:1 utility:2 penalty:2 boone:1 speech:1 passing:1 remark:3 adequate:2 generally:2 latency:3 detailed:1 lowcost:1 amount:1 locally:1 processed:1 generate:1 fz:15 nsf:1 notice:1 estimated:1 correctly:3 blue:2 write:1 proach:1 group:3 key:6 putting:1 four:3 threshold:2 nevertheless:2 achieving:1 budgeted:4 utilize:2 ram:1 graph:1 relaxation:1 fraction:4 sum:3 miser:1 enforced:1 run:1 nga:1 parameterized:1 letter:5 powerful:2 master:1 arrive:1 family:2 gbrt:3 decision:9 bound:2 internet:3 pay:1 guaranteed:1 nan:6 summer:1 layer:1 oracle:1 adapted:1 covertype:1 constraint:19 alex:1 encodes:1 aspect:2 min:3 pruned:1 kumar:1 format:1 structured:1 according:1 march:1 smaller:1 em:1 kusner:2 island:1 joseph:7 making:1 intuitively:1 restricted:1 pr:7 resource:8 turn:1 singer:1 end:1 sending:2 available:1 ofer:2 incurring:1 observe:1 hierarchical:1 appropriate:1 enforce:1 pmlr:2 alternative:1 weinberger:4 symmetrized:2 gate:2 original:1 compress:1 top:9 denotes:1 ensure:3 publishing:1 unifying:1 build:2 especially:1 approximating:2 feng:5 objective:11 question:1 quantity:1 already:2 spike:1 strategy:3 costly:1 parametric:4 surrogate:1 amongst:1 gradient:5 distance:2 separate:1 thank:2 separating:1 srv:1 enforcing:1 assuming:1 besides:1 modeled:1 mini:1 ratio:1 minimizing:2 acquire:2 balance:1 kuzman:1 setup:3 olshen:1 robert:1 frank:1 negative:2 rise:1 design:2 policy:1 unknown:2 perform:5 contributed:1 upper:3 vertical:1 observation:1 twenty:1 datasets:14 benchmark:6 communication:3 bert:1 ordinate:1 tive:1 venkatesh:7 namely:1 required:4 kl:5 pair:2 optimized:1 david:1 security:1 acoustic:1 nip:1 robinson:1 address:2 able:6 beyond:1 lpc:16 sparsity:2 challenge:2 summarize:1 rf:9 max:1 memory:1 tance:1 critical:1 suitable:1 difficulty:2 natural:3 regularized:2 predicting:1 indicator:1 representing:1 scheme:1 improve:1 axis:2 ready:1 canary:1 extract:1 health:1 xq:1 prior:1 l2:1 loss:26 fully:2 highlight:1 gama:1 limitation:1 acyclic:1 versus:1 validation:2 
principle:2 editor:3 bypass:2 tiny:1 repeat:2 last:1 supported:1 gl:1 infeasible:1 legacy:2 allow:1 kirill:3 hm1582:1 fifth:1 sparse:5 edinburgh:1 benefit:2 distributed:2 boundary:2 dimension:1 finitedimensional:1 stand:2 depth:1 contour:2 curve:3 world:1 collection:3 adaptive:20 excess:5 approximate:13 pruning:2 kullback:1 pfull:9 keep:1 decides:1 active:2 instantiation:1 sequentially:1 reveals:1 assumed:2 discriminative:1 thep:1 eddie:2 search:1 latent:1 why:1 table:2 promising:1 learn:15 improving:1 forest:11 unavailable:1 posing:2 bottou:1 complex:4 aistats:1 main:1 motivation:1 arise:4 complementary:1 xu:5 deployed:1 usefull:1 gorithms:1 precision:4 sub:2 xh:1 resourceconstrained:1 comput:1 iot:3 jmlr:1 third:3 learns:4 admissible:1 down:7 familiarity:1 gating:31 jensen:1 svm:7 exists:2 workshop:1 vapnik:1 adding:2 sequential:1 ower:2 gained:1 budget:22 margin:3 gap:2 reedy:10 boston:4 chen:4 entropy:1 depicted:1 gao:1 ordered:1 watch:1 udget:5 chang:1 fekete:1 springer:1 constantly:1 ma:2 conditional:1 goal:3 viewed:1 consequently:1 rbf:11 twofold:1 shared:1 saria:1 hard:6 specifically:1 reducing:6 decouple:1 total:2 la:1 selectively:2 select:1 relevance:2 overload:1 avoiding:1 ex:1 |
Convergence rates of a partition based Bayesian multivariate density estimation method
Linxi Liu∗
Department of Statistics
Columbia University
[email protected]
Dangna Li
ICME
Stanford University
[email protected]
Wing Hung Wong
Department of Statistics
Stanford University
[email protected]
Abstract
We study a class of non-parametric density estimators under Bayesian settings.
The estimators are obtained by adaptively partitioning the sample space. Under
a suitable prior, we analyze the concentration rate of the posterior distribution,
and demonstrate that the rate does not directly depend on the dimension of the
problem in several special cases. Another advantage of this class of Bayesian
density estimators is that it can adapt to the unknown smoothness of the true
density function, thus achieving the optimal convergence rate without artificial
conditions on the density. We also validate the theoretical results on a variety of
simulated data sets.
1 Introduction
In this paper, we study the asymptotic behavior of posterior distributions of a class of Bayesian density
estimators based on adaptive partitioning. Density estimation is a building block for many other
statistical methods, such as classification, nonparametric testing, clustering, and data compression.
With univariate (or bivariate) data, the most basic non-parametric method for density estimation
is the histogram method. In this method, the sample space is partitioned into regular intervals
(or rectangles), and the density is estimated by the relative frequency of data points falling into
each interval (rectangle). However, this method is of limited utility in higher dimensional spaces
because the number of cells in a regular partition of a p-dimensional space will grow exponentially
with p, which makes the relative frequency highly variable unless the sample size is extremely
large. In this situation the histogram may be improved by adapting the partition to the data so
that larger rectangles are used in the parts of the sample space where data is sparse. Motivated
by this consideration, researchers have recently developed several multivariate density estimation
methods based on adaptive partitioning [13, 12]. For example, by generalizing the classical P?lya
Tree construction [7, 22] developed the Optional P?lya Tree (OPT) prior on the space of simple
functions. Computational issues related to OPT density estimates were discussed in [13], where
efficient algorithms were developed to compute the OPT estimate. The method performs quite well
when the dimension is moderately large (from 10 to 50).
The purpose of the current paper is to address the following questions on such Bayesian density
estimates based on partition-learning. Question 1: what is the class of density functions that can be
"well estimated" by the partition-learning based methods? Question 2: what is the rate at which the
posterior distribution is concentrated around the true density as the sample size increases? Our main
contributions lie in the following aspects:
∗ Work was done while the author was at Stanford University.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
• We impose a suitable prior on the space of density functions defined on binary partitions, and calculate the posterior concentration rate under the Hellinger distance with mild assumptions. The rate is adaptive to the unknown smoothness of the true density.
• For two-dimensional density functions of bounded variation, the posterior contraction rate of our method is $n^{-1/4}(\log n)^{3}$.
• For Hölder continuous (one-dimensional case) or mixed-Hölder continuous (multi-dimensional case) density functions with regularity parameter $\alpha \in (0, 1]$, the posterior concentration rate is $n^{-\alpha/(2\alpha+p)}(\log n)^{2+p/(2\alpha)}$, whereas the minimax rate for one-dimensional Hölder continuous functions is $(n/\log n)^{-\alpha/(2\alpha+1)}$.
• When the true density function is sparse in the sense that the Haar wavelet coefficients satisfy a weak-$l_q$ ($q > 1/2$) constraint, the posterior concentration rate is $n^{-(q-1/2)/(2q)}(\log n)^{2+1/(2q-1)}$.
• We can use a computationally efficient algorithm to sample from the posterior distribution. We demonstrate the theoretical results on several simulated data sets.
1.1 Related work
An important feature of our method is that it can adapt to the unknown smoothness of the true density
function. The adaptivity of Bayesian approaches has drawn great attention in recent years. In terms of
density estimation, there are mainly two categories of adaptive Bayesian nonparametric approaches.
The first category of work relies on basis expansion of the density function and typically imposes a
random series prior [15, 17]. When the prior on the coefficients of the expansion is set to be normal
[4], it is also a Gaussian process prior. In the multivariate case, most existing work [4, 17] uses
tensor-product basis. Our improvement over these methods mainly lies in the adaptive structure. In
fact, as the dimension increases the number of tensor-product basis functions can be prohibitively
large, which imposes a great challenge on computation. By introducing adaptive partition, we are
able to handle the multivariate case even when the dimension is 30 (Example 2 in Section 4).
Another line of work considers mixture priors [16, 11, 18]. Although the mixture distributions have
good approximation properties and naturally lead to adaptivity to very high smoothness levels, they
may fail to detect or characterize the local features. On the other hand, by learning a partition of the
sample space, the partition based approaches can provide an informative summary of the structure,
and allow us to examine the density at different resolutions [14, 21].
The paper is organized as follows. In Section 2 we provide more details of the density functions on
binary partitions and define the prior distribution. Section 3 summarizes the theoretical results on
posterior concentration rates. The results are further validated in Section 4 by several experiments.
2 Bayesian multivariate density estimation
We focus on density estimation problems in p-dimensional Euclidean space. Let $(\Omega, \mathcal{B})$ be a measurable space and $f_0$ be a compactly supported density function with respect to the Lebesgue measure $\mu$. $Y_1, Y_2, \ldots, Y_n$ is a sequence of independent variables distributed according to $f_0$. After translation and scaling, we can always assume that the support of $f_0$ is contained in the unit cube in $\mathbb{R}^p$. Translating this into notations, we assume that $\Omega = \{(y^1, y^2, \ldots, y^p) : y^l \in [0, 1]\}$. $\mathcal{F} = \{f \text{ is a nonnegative measurable function on } \Omega : \int_\Omega f \, d\mu = 1\}$ denotes the collection of all the density functions on $(\Omega, \mathcal{B}, \mu)$. Then $\mathcal{F}$ constitutes the parameter space in this problem. Note that $\mathcal{F}$ is an infinite dimensional parameter space.
2.1 Densities on binary partitions
To address the infinite dimensionality of $\mathcal{F}$, we construct a sequence of finite dimensional approximating spaces $\Sigma_1, \Sigma_2, \ldots, \Sigma_I, \ldots$ based on binary partitions. With growing complexity, these spaces provide more and more accurate approximations to the initial parameter space $\mathcal{F}$. Here, we use a recursive procedure to define a binary partition with $I$ subregions of the unit cube in $\mathbb{R}^p$. Let $\Omega = \{(y^1, y^2, \ldots, y^p) : y^l \in [0, 1]\}$ be the unit cube in $\mathbb{R}^p$. In the first step, we choose one of the coordinates $y^l$ and cut $\Omega$ into two subregions along the midpoint of the range of $y^l$. That is, $\Omega = \Omega_{l0} \cup \Omega_{l1}$, where $\Omega_{l0} = \{y \in \Omega : y^l \le 1/2\}$ and $\Omega_{l1} = \Omega \setminus \Omega_{l0}$. In this way, we get a partition
with two subregions. Note that the total number of possible partitions after the first step is equal to the dimension $p$. Suppose after $I - 1$ steps of the recursion, we have obtained a partition $\{\Omega_i\}_{i=1}^I$ with $I$ subregions. In the $I$-th step, further partitioning of the region is defined as follows:
1. Choose a region from $\Omega_1, \ldots, \Omega_I$. Denote it as $\Omega_{i_0}$.
2. Choose one coordinate $y^l$ and divide $\Omega_{i_0}$ into two subregions along the midpoint of the range of $y^l$.
Such a partition obtained by $I - 1$ recursive steps is called a binary partition of size $I$. Figure 1 displays all possible two dimensional binary partitions when $I$ is 1, 2 and 3.
Figure 1: Binary partitions
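The recursive construction above lends itself to a direct implementation. The sketch below is our own illustration (function names are not from the paper); it grows a binary partition of the unit cube by repeated midpoint splits:

```python
# Sketch of the recursive binary-partition construction of Section 2.1.
# A region is a list of (low, high) intervals, one per coordinate; a
# partition of size I is produced by I-1 split steps.

def split_region(region, axis):
    """Cut a region in two along the midpoint of coordinate `axis`."""
    low, high = region[axis]
    mid = (low + high) / 2.0
    left, right = list(region), list(region)
    left[axis] = (low, mid)
    right[axis] = (mid, high)
    return left, right

def grow_partition(p, steps):
    """Start from the unit cube in R^p and apply `steps` split steps,
    here always splitting the first region along axis (step mod p)."""
    partition = [[(0.0, 1.0)] * p]
    for s in range(steps):
        region = partition.pop(0)   # choose a region (fixed rule here)
        axis = s % p                # choose a coordinate (fixed rule here)
        partition.extend(split_region(region, axis))
    return partition

def volume(region):
    """Lebesgue volume |Omega_i| of a rectangular region."""
    v = 1.0
    for low, high in region:
        v *= (high - low)
    return v
```

The region to split and the splitting coordinate are chosen by a fixed rule above only for concreteness; in the actual method these choices are driven by the posterior.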
Now, let
$$\Sigma_I = \Big\{ f : f = \sum_{i=1}^{I} \frac{\beta_i}{|\Omega_i|} 1_{\Omega_i}, \ \sum_{i=1}^{I} \beta_i = 1, \ \{\Omega_i\}_{i=1}^I \text{ is a binary partition of } \Omega \Big\},$$
where $|\Omega_i|$ is the volume of $\Omega_i$. Then $\Sigma_I$ is the collection of the density functions supported by the binary partitions of size $I$. They constitute a sequence of approximating spaces (i.e. a sieve; see [10, 20] for background on sieve theory). Let $\Sigma = \cup_{I=1}^{\infty} \Sigma_I$ be the space containing all the density functions supported by the binary partitions. Then $\Sigma$ is an approximation of the initial parameter space $\mathcal{F}$ to certain approximation error which will be characterized later.
We take the metric on $\mathcal{F}$, $\Sigma$ and $\Sigma_I$ to be the Hellinger distance, which is defined as
$$\rho(f, g) = \Big( \int_\Omega \big(\sqrt{f(y)} - \sqrt{g(y)}\big)^2 \, dy \Big)^{1/2}, \quad f, g \in \mathcal{F}. \qquad (1)$$
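For piecewise-constant densities defined on a common partition, the integral defining the Hellinger distance reduces to a finite sum over the cells, which makes it trivial to evaluate. A minimal sketch (our own illustration, not code from the paper):

```python
import math

def hellinger_piecewise(volumes, heights_f, heights_g):
    """Hellinger distance between two piecewise-constant densities on the
    SAME partition: rho^2 = sum_i |Omega_i| * (sqrt(a_i) - sqrt(b_i))^2,
    where a_i, b_i are the density heights on cell i."""
    s = 0.0
    for v, a, b in zip(volumes, heights_f, heights_g):
        s += v * (math.sqrt(a) - math.sqrt(b)) ** 2
    return math.sqrt(s)
```

For example, on the two halves of $[0,1]$, the uniform density (heights 1, 1) versus the density with heights 2, 0 gives distance $\sqrt{2 - \sqrt{2}}$.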
2.2 Prior distribution
An ideal prior $\pi$ on $\Sigma = \cup_{I=1}^{\infty} \Sigma_I$ is supposed to be capable of balancing the approximation error and the complexity of $\Sigma$. The prior in this paper penalizes the size of the partition in the sense that the probability mass on each $\Sigma_I$ is proportional to $\exp(-\lambda I \log I)$. Given a sample of size $n$, we restrict our attention to $\Sigma_n = \cup_{I=1}^{n/\log n} \Sigma_I$, because in practice we need enough samples within each subregion to get a meaningful estimate of the density. This is to say, when $I \le n/\log n$, $\pi(\Sigma_I) \propto \exp(-\lambda I \log I)$; otherwise $\pi(\Sigma_I) = 0$.
If we use $T_I$ to denote the total number of possible partitions of size $I$, then it is not hard to see that $\log T_I \le c_0 I \log I$, where $c_0$ is a constant. Within each $\Sigma_I$, the prior is uniform across all binary partitions. In other words, let $\{\Omega_i\}_{i=1}^I$ be a binary partition of $\Omega$ of size $I$, and $\mathcal{F}(\{\Omega_i\}_{i=1}^I)$ the collection of piecewise constant density functions on this partition (i.e. $\mathcal{F}(\{\Omega_i\}_{i=1}^I) = \{f = \sum_{i=1}^{I} \frac{\beta_i}{|\Omega_i|} 1_{\Omega_i} : \sum_{i=1}^{I} \beta_i = 1 \text{ and } \beta_i \ge 0, \ i = 1, \ldots, I\}$); then
$$\pi\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I)\big) \propto \exp(-\lambda I \log I)/T_I. \qquad (2)$$
Given a partition $\{\Omega_i\}_{i=1}^I$, the weights $\beta_i$ on the subregions follow a Dirichlet distribution with parameters all equal to $\alpha$ ($\alpha < 1$). This is to say, for $x_1, \ldots, x_I \ge 0$ and $\sum_{i=1}^{I} x_i = 1$,
$$\pi\Big( f = \sum_{i=1}^{I} \frac{\beta_i}{|\Omega_i|} 1_{\Omega_i} : \beta_1 \in dx_1, \ldots, \beta_I \in dx_I \,\Big|\, \mathcal{F}(\{\Omega_i\}_{i=1}^I) \Big) = \frac{1}{D(\alpha, \ldots, \alpha)} \prod_{i=1}^{I} x_i^{\alpha-1}, \qquad (3)$$
where $D(\alpha_1, \ldots, \alpha_I) = \prod_{i=1}^{I} \Gamma(\alpha_i) \big/ \Gamma\big(\sum_{i=1}^{I} \alpha_i\big)$.
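One way to make this prior concrete is to sample from it. The sketch below is our own illustration: it draws the partition size with probability proportional to $\exp(-\lambda I \log I)$, grows a partition by random midpoint splits (only an approximation, since sequential random splitting is not exactly uniform over all binary partitions of size $I$), and draws the cell weights from a Dirichlet$(\alpha, \ldots, \alpha)$.

```python
import math
import random

def sample_prior_density(p, lam=1.0, alpha=0.5, I_max=50, rng=random):
    """Illustrative draw from the prior of Section 2.2.
    (i)   draw the size I with probability prop. to exp(-lam * I * log I);
    (ii)  grow a size-I binary partition by uniformly random midpoint splits
          (an approximation to the uniform prior over partitions of size I);
    (iii) draw the cell weights from a Dirichlet(alpha, ..., alpha)."""
    w = [math.exp(-lam * I * math.log(I)) for I in range(1, I_max + 1)]
    u, I = rng.random() * sum(w), 1
    for i, wi in enumerate(w, start=1):
        u -= wi
        if u <= 0:
            I = i
            break
    partition = [[(0.0, 1.0)] * p]          # start from the unit cube
    for _ in range(I - 1):
        region = partition.pop(rng.randrange(len(partition)))
        axis = rng.randrange(p)
        lo, hi = region[axis]
        mid = (lo + hi) / 2.0
        left, right = list(region), list(region)
        left[axis], right[axis] = (lo, mid), (mid, hi)
        partition.extend([left, right])
    g = [rng.gammavariate(alpha, 1.0) for _ in range(I)]
    s = sum(g)
    betas = [x / s for x in g]              # Dirichlet via normalized Gammas
    return partition, betas
```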
Let $\pi_n(\cdot \mid Y_1, \ldots, Y_n)$ denote the posterior distribution. After integrating out the weights $\beta_i$, we can compute the marginal posterior probability of $\mathcal{F}(\{\Omega_i\}_{i=1}^I)$:
$$\pi_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I) \mid Y_1, \ldots, Y_n\big) \propto \pi\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I)\big) \int \prod_{i=1}^{I} \Big(\frac{\beta_i}{|\Omega_i|}\Big)^{n_i} \cdot \frac{1}{D(\alpha, \ldots, \alpha)} \prod_{i=1}^{I} \beta_i^{\alpha-1} \, d\beta_1 \cdots d\beta_I$$
$$\propto \frac{\exp(-\lambda I \log I)}{T_I} \cdot \frac{D(\alpha + n_1, \ldots, \alpha + n_I)}{D(\alpha, \ldots, \alpha)} \cdot \prod_{i=1}^{I} \frac{1}{|\Omega_i|^{n_i}}, \qquad (4)$$
where $n_i$ is the number of observations in $\Omega_i$. Under the prior introduced in [13], the marginal posterior distribution is:
$$\tilde{\pi}_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I) \mid Y_1, \ldots, Y_n\big) \propto \exp(-\lambda I) \cdot \frac{D(\alpha + n_1, \ldots, \alpha + n_I)}{D(\alpha, \ldots, \alpha)} \cdot \prod_{i=1}^{I} \frac{1}{|\Omega_i|^{n_i}}, \qquad (5)$$
while the maximum log-likelihood achieved by histograms on the partition $\{\Omega_i\}_{i=1}^I$ is:
$$l_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I)\big) := \max_{f \in \mathcal{F}(\{\Omega_i\}_{i=1}^I)} l_n(f) = \sum_{i=1}^{I} n_i \log \frac{n_i}{n |\Omega_i|}. \qquad (6)$$
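Since $D(\cdot)$ is a ratio of Gamma functions, the log marginal posterior of a given partition can be evaluated stably with log-Gamma functions. A sketch (our own illustration; the combinatorial term $\log T_I$ is omitted for brevity):

```python
import math

def log_marginal_posterior(counts, volumes, lam=1.0, alpha=0.5):
    """Log of the marginal posterior of one partition, up to an additive
    constant (the term -log T_I is dropped here).  counts[i] is n_i, the
    number of points in cell i; volumes[i] is |Omega_i|."""
    I = len(counts)

    def log_D(params):  # log Dirichlet normalizer: sum lgamma - lgamma(sum)
        return sum(math.lgamma(a) for a in params) - math.lgamma(sum(params))

    score = -lam * I * math.log(I)                       # prior penalty
    score += log_D([alpha + c for c in counts]) - log_D([alpha] * I)
    score -= sum(c * math.log(v) for c, v in zip(counts, volumes))
    return score
```

On 100 points concentrated in one half of $[0,1]$ (counts 90 and 10), the two-cell partition scores higher than the trivial one-cell partition, while for balanced counts the penalty dominates.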
From a model selection perspective, we may treat the histograms on each binary partition as a model of the data. When $I \ll n$, asymptotically,
$$\log \tilde{\pi}_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I) \mid Y_1, \ldots, Y_n\big) \approx l_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I)\big) - \frac{1}{2}(I - 1) \log n. \qquad (7)$$
This is to say, in [13], selecting the partition which maximizes the marginal posterior distribution is equivalent to applying the Bayesian information criterion (BIC) to perform model selection. However, if we allow $I$ to increase with $n$, (7) will not hold any more. But if we use the prior introduced in this section, in the case when $I/n \to \gamma \in (0, 1)$ as $n \to \infty$, we still have
$$\log \pi_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I) \mid Y_1, \ldots, Y_n\big) \approx l_n\big(\mathcal{F}(\{\Omega_i\}_{i=1}^I)\big) - \lambda I \log I. \qquad (8)$$
From a model selection perspective, this is closer to the risk inflation criterion (RIC, [8]).
3 Posterior concentration rates
We are interested in how fast the posterior probability measure concentrates around the true density $f_0$. Under the prior specified above, the posterior probability is the random measure given by
$$\pi_n(B \mid Y_1, \ldots, Y_n) = \frac{\int_B \prod_{j=1}^{n} f(Y_j) \, d\pi(f)}{\int_\Sigma \prod_{j=1}^{n} f(Y_j) \, d\pi(f)}.$$
A Bayesian estimator is said to be consistent if the posterior distribution concentrates on arbitrarily small neighborhoods of $f_0$, with probability tending to 1 under $P_0^n$ ($P_0$ is the probability measure corresponding to the density function $f_0$). The posterior concentration rate refers to the rate at which these neighborhoods shrink to zero while still possessing most of the posterior mass. More explicitly, we want to find a sequence $\epsilon_n \to 0$, such that for sufficiently large $M$,
$$\pi_n(\{f : \rho(f, f_0) \ge M \epsilon_n\} \mid Y_1, \ldots, Y_n) \to 0 \text{ in } P_0^n\text{-probability}.$$
In [6] and [2], the authors demonstrated that it is impossible to find an estimator which works uniformly well for every $f$ in $\mathcal{F}$. This is the case because for any estimator $\hat{f}$, there always exists $f \in \mathcal{F}$ for which $\hat{f}$ is inconsistent. Given the minimaxity of the Bayes estimator, we have to restrict our attention to a subset of the original parameter space $\mathcal{F}$. Here, we focus on the class of density functions that can be well approximated by the $\Sigma_I$'s. To be more rigorous, a density function $f \in \mathcal{F}$ is said to be well approximated by elements in $\Sigma$ if there exists a sequence of $f_I \in \Sigma_I$ satisfying $\rho(f_I, f) = O(I^{-r})$ $(r > 0)$. Let $\mathcal{F}_0$ be the collection of these density functions. We will first derive the posterior concentration rate for the elements in $\mathcal{F}_0$ as a function of $r$. For different function classes, this approximation rate $r$ can be calculated explicitly. In addition to this, we also assume that $f_0$ has finite second moment.
The following theorem gives the posterior concentration rate under the prior introduced in Section 2.2.
Theorem 3.1. $Y_1, \ldots, Y_n$ is a sequence of independent random variables distributed according to $f_0$. $P_0$ is the probability measure corresponding to $f_0$. $\Sigma$ is the collection of $p$-dimensional density functions supported by the binary partitions as defined in Section 2.1. With the modified prior distribution, if $f_0 \in \mathcal{F}_0$, then the posterior concentration rate is $\epsilon_n = n^{-r/(2r+1)} (\log n)^{2 + 1/(2r)}$.
The strategy to show this theorem is to write the posterior probability of the shrinking ball as
$$\pi(\{f : \rho(f, f_0) \ge M \epsilon_n\} \mid Y_1, \ldots, Y_n) = \frac{\sum_{I=1}^{\infty} \int_{\{f : \rho(f, f_0) \ge M \epsilon_n\} \cap \Sigma_I} \prod_{j=1}^{n} \frac{f(Y_j)}{f_0(Y_j)} \, d\pi(f)}{\sum_{I=1}^{\infty} \int_{\Sigma_I} \prod_{j=1}^{n} \frac{f(Y_j)}{f_0(Y_j)} \, d\pi(f)}. \qquad (9)$$
The proof employs the mechanism developed in the landmark works [9] and [19]. We first obtain
the upper bounds for the items in the numerator by dividing them into three blocks, each of which
accounts for bias, variance, and rapidly decaying prior respectively, and calculate the upper bound for
each block separately. Then we provide the prior thickness result, i.e., we bound the prior mass of a
ball around the true density from below. Due to space constraints, the details of the proof will be
provided in the appendix.
This theorem suggests the following two take-away messages: 1. The rate is adaptive to the unknown smoothness of the true density. 2. The posterior contraction rate is $n^{-r/(2r+1)} (\log n)^{2 + 1/(2r)}$, which does not directly depend on the dimension $p$. For some density functions, $r$ may depend on $p$. But in several special cases, such as when the density function is spatially sparse or lies in a low dimensional subspace, we will show that the rate will not be affected by the full dimension of the problem.
In the following three subsections, we will calculate the explicit rates for three density classes. Again,
all proofs are given in the appendix.
3.1 Spatial adaptation
First, we assume that the density concentrates spatially. Mathematically, this implies the density
function satisfies a type of sparsity. In the past two decades, sparsity has become one of the most
discussed types of structure under which we are able to overcome the curse of dimensionality. A
remarkable example is that it allows us to solve high-dimensional linear models, especially when the
system is underdetermined.
Let $f$ be a $p$-dimensional density function and $\Psi$ the $p$-dimensional Haar basis. We will work with $g = \sqrt{f}$ first. Note that $g \in L^2([0,1]^p)$. Thus we can expand $g$ with respect to $\Psi$ as $g = \sum_{\psi \in \Psi} \langle g, \psi \rangle \psi$. We rearrange this summation by the size of the wavelet coefficients. In other words, we order the coefficients as the following
$$|\langle g, \psi_{(1)} \rangle| \ge |\langle g, \psi_{(2)} \rangle| \ge \cdots \ge |\langle g, \psi_{(k)} \rangle| \ge \cdots;$$
then the sparsity condition imposed on the density functions is that the decay of the wavelet coefficients follows a power law,
$$|\langle g, \psi_{(k)} \rangle| \le C k^{-q} \text{ for all } k \in \mathbb{N} \text{ and } q > 1/2, \qquad (10)$$
where $C$ is a constant.
We call such a constraint a weak-lq constraint. The condition has been widely used to characterize
the sparsity of signals and images [1, 3]. In particular, in [5], it was shown that for two-dimensional
cases, when q > 1/2, this condition reasonably captures the sparsity of real world images.
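The weak-$l_q$ condition can be examined empirically on a discretized signal: compute the Haar detail coefficients, sort them by magnitude, and look at $\sup_k |c_{(k)}| k^q$. A one-dimensional sketch (our own illustration; constant rescalings of the discrete coefficients do not change the exponent in (10)):

```python
import math

def haar_details(values):
    """One-dimensional fast Haar transform (len(values) must be a power
    of 2); returns the detail coefficients across all levels."""
    v = list(values)
    details = []
    while len(v) > 1:
        avg, det = [], []
        for a, b in zip(v[0::2], v[1::2]):
            avg.append((a + b) / math.sqrt(2))
            det.append((a - b) / math.sqrt(2))
        details.extend(det)
        v = avg
    return details

def check_weak_lq(coeffs, q):
    """Smallest C such that |c_(k)| <= C * k^(-q) for the sorted
    coefficients, i.e. max_k |c_(k)| * k^q."""
    sorted_abs = sorted((abs(c) for c in coeffs), reverse=True)
    return max(c * (k ** q) for k, c in enumerate(sorted_abs, start=1))
```

A constant signal has all detail coefficients equal to zero, while a spatially concentrated spike produces a short sorted tail and hence a small constant $C$.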
Corollary 3.2. (Application to spatial adaptation) Suppose $f_0$ is a $p$-dimensional density function and satisfies condition (10). If we apply our approach to this type of density functions, the posterior concentration rate is $n^{-(q - 1/2)/(2q)} (\log n)^{2 + 1/(2q-1)}$.
3.2 Density functions of bounded variation
Let $\Omega = [0,1)^2$ be a domain in $\mathbb{R}^2$. We first characterize the space $BV(\Omega)$ of functions of bounded variation on $\Omega$.
For a vector $\nu \in \mathbb{R}^2$, the difference operator $\Delta_\nu$ along the direction $\nu$ is defined by
$$\Delta_\nu(f, y) := f(y + \nu) - f(y).$$
For functions $f$ defined on $\Omega$, $\Delta_\nu(f, y)$ is defined whenever $y \in \Omega(\nu)$, where $\Omega(\nu) := \{y : [y, y + \nu] \subset \Omega\}$ and $[y, y + \nu]$ is the line segment connecting $y$ and $y + \nu$. Denote by $e_l$, $l = 1, 2$, the two coordinate vectors in $\mathbb{R}^2$. We say that a function $f \in L^1(\Omega)$ is in $BV(\Omega)$ if and only if
$$V_\Omega(f) := \sup_{h > 0} h^{-1} \sum_{l=1}^{2} \|\Delta_{h e_l}(f, \cdot)\|_{L^1(\Omega(h e_l))} = \lim_{h \to 0} h^{-1} \sum_{l=1}^{2} \|\Delta_{h e_l}(f, \cdot)\|_{L^1(\Omega(h e_l))}$$
is finite. The quantity $V_\Omega(f)$ is the variation of $f$ over $\Omega$.
Corollary 3.3. Assume that $f_0 \in BV(\Omega)$. If we apply the Bayesian multivariate density estimator based on adaptive partitioning here to estimate $f_0$, the posterior concentration rate is $n^{-1/4} (\log n)^3$.
3.3 Hölder space
In the one-dimensional case, the class of Hölder functions $H(L, \alpha)$ with regularity parameter $\alpha$ is defined as the following: let $\alpha_0$ be the largest integer smaller than $\alpha$, and denote by $f^{(\alpha_0)}$ its $\alpha_0$-th derivative. Then
$$H(L, \alpha) = \{f : [0, 1] \to \mathbb{R} : |f^{(\alpha_0)}(x) - f^{(\alpha_0)}(y)| \le L |x - y|^{\alpha - \alpha_0}\}.$$
In multi-dimensional cases, we introduce mixed-Hölder continuity. In order to simplify the notation, we give the definition when the dimension is two; it can be easily generalized to high-dimensional cases. A real-valued function $f$ on $\mathbb{R}^2$ is called mixed-Hölder continuous for some nonnegative constant $C$ and $\alpha \in (0, 1]$ if, for any $(x_1, y_1), (x_2, y_2) \in \mathbb{R}^2$,
$$|f(x_2, y_2) - f(x_2, y_1) - f(x_1, y_2) + f(x_1, y_1)| \le C |x_1 - x_2|^\alpha |y_1 - y_2|^\alpha.$$
Corollary 3.4. Let $f_0$ be the $p$-dimensional density function. If $f_0$ is Hölder continuous (when $p = 1$) or mixed-Hölder continuous (when $p \ge 2$) with regularity parameter $\alpha \in (0, 1]$, then the posterior concentration rate of the Bayes estimator is $n^{-\alpha/(2\alpha + p)} (\log n)^{2 + p/(2\alpha)}$.
This result also implies that if $f_0$ only depends on $\tilde{p}$ variables where $\tilde{p} < p$, but we do not know in advance which $\tilde{p}$ variables, then the rate of this method is determined by the effective dimension $\tilde{p}$ of the problem, since the smoothness parameter $r$ is only a function of $\tilde{p}$. In the next section, we will use a simulated data set to illustrate this point.
4 Simulation
4.1 Sequential importance sampling
Each partition $A_I = \{\Omega_i\}_{i=1}^I$ is obtained by recursively partitioning the sample space. We can use a sequence of partitions $A_1, A_2, \ldots, A_I$ to keep track of the path leading to $A_I$. Let $\pi_n(\cdot)$ denote the posterior distribution $\pi_n(\cdot \mid Y_1, \ldots, Y_n)$ for simplicity, and $\pi_n^I$ be the posterior distribution conditioning on $\Sigma_I$. Then $\pi_n^I(A_I)$ can be decomposed as
$$\pi_n^I(A_I) = \pi_n^I(A_1) \, \pi_n^I(A_2 \mid A_1) \cdots \pi_n^I(A_I \mid A_{I-1}).$$
Figure 2: Heatmap of the density and plots of the 2-dimensional Haar coefficients. For the plot on the
right, the left panel is the plot of the Haar coefficients from low resolution to high resolution up to
level 6. The middle one is the plot of the sorted coefficients according to their absolute values. And
the right one is the same as the middle plot but with the abscissa in log scale.
The conditional distribution $\pi_n^I(A_{i+1} \mid A_i)$ can be calculated by $\pi_n^I(A_{i+1}) / \pi_n^I(A_i)$. However, the computation of the marginal distribution $\pi_n^I(A_i)$ is sometimes infeasible, especially when both $I$ and $I - i$ are large, because we need to sum the marginal posterior probability over all binary partitions of size $I$ for which the first $i$ steps in the partition generating path are the same as those of $A_i$. Therefore, we adopt the sequential importance sampling algorithm proposed in [13]. In order to build a sequence of binary partitions, at each step, the conditional distribution is approximated by $\pi_n^{i+1}(A_{i+1} \mid A_i)$. The obtained partition is assigned a weight to compensate for the approximation, where the weight is
$$w_I(A_I) = \frac{\pi_n^I(A_I)}{\pi_n^1(A_1) \, \pi_n^2(A_2 \mid A_1) \cdots \pi_n^I(A_I \mid A_{I-1})}.$$
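The sampler can be sketched abstractly: at each step, score the one-step refinements of the current partition, draw the next partition from the normalized scores, and accumulate the log importance weight that compensates for using these approximate conditionals. In the sketch below (our own illustration), `log_score` stands for the log marginal posterior of a partition and `proposals` enumerates its one-step refinements:

```python
import math
import random

def sis_sample_path(log_score, proposals, steps, rng=random):
    """Sequential importance sampling sketch for one partition path
    A_1 -> A_2 -> ... -> A_I.  Returns the final partition and its log
    importance weight (target of A_I over the product of the sampling
    probabilities along the path)."""
    A, log_w = None, 0.0
    for _ in range(steps):
        candidates = proposals(A)
        logs = [log_score(c) for c in candidates]
        m = max(logs)                       # subtract max for stability
        unnorm = [math.exp(l - m) for l in logs]
        total = sum(unnorm)
        probs = [u / total for u in unnorm]
        # draw the next refinement from the approximate conditional
        r, acc, idx = rng.random(), 0.0, 0
        for i, pr in enumerate(probs):
            acc += pr
            if r <= acc:
                idx = i
                break
        A = candidates[idx]
        log_w -= math.log(probs[idx])       # denominator of the weight
    return A, log_w + log_score(A)          # numerator: the target of A_I
```

In the actual method, `log_score` would be the log marginal posterior of eq. (4) and `proposals` would enumerate all single midpoint splits of the current partition.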
In order to make the data points as uniform as possible, we apply a copula transformation to each variable in advance whenever the dimension exceeds 3. More specifically, we estimate the marginal distribution of each variable $X_j$ by our approach, denoted as $\hat{f}_j$ (we use $\hat{F}_j$ to denote the cdf of $X_j$), and transform each point $(y^1, \ldots, y^p)$ to $(\hat{F}_1(y^1), \ldots, \hat{F}_p(y^p))$. Another advantage of this transformation is that after the transformation the sample space naturally becomes $[0, 1]^p$.
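The copula transformation can be sketched with empirical marginal cdfs standing in for the estimated $\hat{F}_j$ (our own illustration; the paper estimates the marginals with the same partition-based method):

```python
def empirical_cdf(sample):
    """Return a function approximating a marginal cdf from a sample
    (the empirical cdf stands in for the estimated F_j here)."""
    s = sorted(sample)
    n = len(s)
    def F(x):
        # fraction of sample points <= x, found by binary search
        lo, hi = 0, n
        while lo < hi:
            mid = (lo + hi) // 2
            if s[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / n
    return F

def copula_transform(points):
    """Map each point (y^1, ..., y^p) to (F_1(y^1), ..., F_p(y^p)) using
    per-coordinate empirical cdfs, so the result lies in [0, 1]^p."""
    p = len(points[0])
    cdfs = [empirical_cdf([pt[j] for pt in points]) for j in range(p)]
    return [tuple(cdfs[j](pt[j]) for j in range(p)) for pt in points]
```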
Example 1. Assume that the two-dimensional density function is
$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} \sim \frac{2}{5}\, N\!\left( \begin{pmatrix} 0.25 \\ 0.25 \end{pmatrix},\ 0.05^2 I_{2\times 2} \right) + \frac{3}{5}\, N\!\left( \begin{pmatrix} 0.75 \\ 0.75 \end{pmatrix},\ 0.05^2 I_{2\times 2} \right).$$
This density function both satisfies the spatial sparsity condition and belongs to the space of functions
of bounded variation. Figure 2 shows the heatmap of the density function and its Haar coefficients.
The last panel in the second plot displays the sorted coefficients with the abscissa in log-scale. From
this we can clearly see that the power-law decay defined in Section 3.1 is satisfied.
We apply the adaptive partitioning approach to estimate the density, and let the sample size increase from $10^2$ to $10^5$. In Figure 3, the left plot is the density estimation result based on a sample with 10000 data points. The right one is the plot of the Kullback-Leibler (KL) divergence from the estimated density to $f_0$ vs. sample size in log scale. The sample sizes are set to be 100, 500, 1000, 5000, $10^4$, and $10^5$. The linear trend in the plot validates the posterior concentration rates calculated in Section 3.
The reason why we use the KL divergence instead of the Hellinger distance is that for any $f_0 \in \mathcal{F}_0$ and $\hat{f} \in \Sigma$, we can show that the KL divergence and the Hellinger distance are of the same order. But the KL divergence is relatively easier to compute in our setting, since we can show that it is linear in the logarithm of the posterior marginal probability of a partition. The proof will be provided in the appendix. For each fixed sample size, we run the experiment 10 times and estimate the standard error, which is shown by the lighter blue part in the plot.
Example 2. In the second example we work with a density function of moderately high dimension. Assume that the first five random variables $Y_1, \ldots, Y_5$ are generated from the following location
Figure 3: Plot of the estimated density and KL divergence against sample size. We use the posterior
mean as the estimate. The right plot is on log-log scale, while the labels of x and y axes still represent
the sample size and the KL divergence before we take the logarithm.
Figure 4: KL divergence vs. sample size. The blue, purple and red curves correspond to the cases
when p = 5, p = 10 and p = 30 respectively. The slopes of the three lines are almost the same,
implying that the concentration rate only depends on the effective dimension of the problem (which
is 5 in this example).
mixture of the Gaussian distribution:
$$\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix} \sim \frac{1}{2}\, N\!\left( \begin{pmatrix} 0.25 \\ 0.25 \\ 0.25 \end{pmatrix},\ \begin{pmatrix} 0.05^2 & -0.03^2 & 0 \\ -0.03^2 & 0.05^2 & 0 \\ 0 & 0 & 0.05^2 \end{pmatrix} \right) + \frac{1}{2}\, N\!\left( \begin{pmatrix} 0.75 \\ 0.75 \\ 0.75 \end{pmatrix},\ 0.05^2 I_{3\times 3} \right),$$
$$Y_4, Y_5 \sim N(0.5, 0.1),$$
and the other components $Y_6, \ldots, Y_p$ are independently uniformly distributed. We run experiments for
p = 5, 10, and 30. For a fixed p, we generate n ? {500, 1000, 5000, 104 , 105 } data points. For
each pair of p and n, we repeat the experiment 10 times and calculate the standard error. Figure 4
displays the plot of the KL divergence vs. the sample size on log-log scale. The density function is
continuous differentiable. Therefore, it satisfies the mixed-H?lder continuity condition. The effective
dimension of this example is p? = 5, and this is reflected in the plot: the slopes of the three lines,
which correspond to the concentration rates under different dimensions, almost remain the same as
we increase the full dimension of the problem.
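For concreteness, here is a small sketch of the Example-2 data-generating process as reconstructed above. The mixture parameters, and the reading of N(0.5, 0.1) as mean and standard deviation, are assumptions rather than values taken verbatim from the text.

```python
import numpy as np

def sample_example2(n, p, rng):
    """Draw n points of the (assumed) Example-2 density: (Y1, Y2, Y3) from a
    two-component Gaussian location mixture, Y4, Y5 ~ N(0.5, 0.1), and the
    remaining p - 5 coordinates independent Uniform(0, 1)."""
    assert p >= 5
    y = rng.uniform(size=(n, p))            # fills coordinates 6..p
    comp = rng.random(n) < 0.5              # mixture label, weight 1/2 each
    cov = np.array([[0.05**2, 0.03**2, 0.0],
                    [0.03**2, 0.05**2, 0.0],
                    [0.0,     0.0,     0.05**2]])
    y[comp, :3] = rng.multivariate_normal([0.25] * 3, cov, size=comp.sum())
    y[~comp, :3] = rng.multivariate_normal([0.75] * 3, 0.05**2 * np.eye(3),
                                           size=(~comp).sum())
    y[:, 3:5] = rng.normal(0.5, 0.1, size=(n, 2))  # 0.1 read as std. dev.
    return y

rng = np.random.default_rng(1)
data = sample_example2(1000, 10, rng)
```

Repeating such draws for each (p, n) pair and plotting log KL against log n would reproduce the slope comparison described above.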
5 Conclusion
In this paper, we study the posterior concentration rate of a class of Bayesian density estimators
based on adaptive partitioning. We obtain explicit rates when the density function is spatially sparse,
belongs to the space of bounded variation, or is Hölder continuous. For the last case, the rate is
minimax up to a logarithmic term. When the density function is sparse or lies in a low-dimensional
subspace, the rate will not be affected by the dimension of the problem. Another advantage of this
method is that it can adapt to the unknown smoothness of the underlying density function.
Synchronization and Grammatical Inference
in an Oscillating Elman Net
Bill Baird
Dept Mathematics,
U.C. Berkeley,
Berkeley, Ca. 94720,
[email protected]
Todd Troyer
Dept Mathematics,
U.C. Berkeley,
Berkeley, Ca. 94720
Frank Eeckman
Lawrence Livermore
National Laboratory,
P.O. Box 808 (L-426),
Livermore, Ca. 94551
Abstract
We have designed an architecture to span the gap between biophysics and cognitive science to address and explore issues of how
a discrete symbol processing system can arise from the continuum,
and how complex dynamics like oscillation and synchronization can
then be employed in its operation and affect its learning. We show
how a discrete-time recurrent "Elman" network architecture can
be constructed from recurrently connected oscillatory associative
memory modules described by continuous nonlinear ordinary differential equations. The modules can learn connection weights between themselves which will cause the system to evolve under a
clocked "machine cycle" by a sequence of transitions of attractors
within the modules, much as a digital computer evolves by transitions of its binary flip-flop attractors. The architecture thus employs the principle of "computing with attractors" used by macroscopic systems for reliable computation in the presence of noise. We
have specifically constructed a system which functions as a finite
state automaton that recognizes or generates the infinite set of six
symbol strings that are defined by a Reber grammar. It is a symbol
processing system, but with analog input and oscillatory subsymbolic representations. The time steps (machine cycles) of the system are implemented by rhythmic variation (clocking) of a bifurcation parameter. This holds input and "context" modules clamped
at their attractors while hidden and output modules change state,
then clamps hidden and output states while context modules are
released to load those states as the new context for the next cycle of
input. Superior noise immunity has been demonstrated for systems
with dynamic attractors over systems with static attractors, and
synchronization ("binding") between coupled oscillatory attractors
in different modules has been shown to be important for effecting
reliable transitions.
1 Introduction
Patterns of 40 to 80 Hz oscillation have been observed in the large scale activity (local field potentials) of olfactory cortex [Freeman and Baird, 1987] and
visual neocortex [Gray and Singer, 1987], and shown to predict the olfactory
[Freeman and Baird, 1987] and visual pattern recognition responses of a trained
animal. Similar observations of 40 Hz oscillation in auditory and motor cortex (in
primates), and in the retina and EMG have been reported. It thus appears that
cortical computation in general may occur by dynamical interaction of resonant
modes, as has been thought to be the case in the olfactory system.
The oscillation can serve a macroscopic clocking function and entrain or "bind"
the relevant microscopic activity of disparate cortical regions into a well defined
phase coherent collective state or "gestalt". This can override irrelevant microscopic
activity and produce coordinated motor output. There is further evidence that
although the oscillatory activity appears to be roughly periodic, it is actually chaotic
when examined in detail.
If this view is correct, then oscillatory/chaotic network modules form the actual cortical substrate of the diverse sensory, motor, and cognitive operations now studied
in static networks. It must then be shown how those functions can be accomplished
with oscillatory and chaotic dynamics, and what advantages are gained thereby. It
is our expectation that nature makes good use of this dynamical complexity, and
our intent is to search here for novel design principles that may underlie the superior
computational performance of biological systems over man made devices in many
task domains. These principles may then be applied in artificial systems to engineering problems to advance the art of computation. We have therefore constructed
a parallel distributed processing architecture that is inspired by the structure and
dynamics of cerebral cortex, and applied it to the problem of grammatical inference.
The construction assumes that cortex is a set of coupled oscillatory associative
memories, and is also guided by the principle that attractors must be used by
macroscopic systems for reliable computation in the presence of noise. Present day
digital computers are built of flip-flops which, at the level of their transistors, are
continuous dissipative dynamical systems with different attractors underlying the
symbols we call "0" and "1".
2
Oscillatory Network Modules
The network modules of this architecture were developed previously as models of
olfactory cortex, or caricatures of "patches"of neocortex [Baird, 1990a]. A particular subnetwork is formed by a set of neural populations whose interconnections
also contain higher order synapses. These synapses determine attractors for that
subnetwork independent of other subnetworks. Each subnetwork module assumes
only minimal coupling justified by known olfactory anatomy. An N node module
can be shown to function as an associative memory for up to N/2 oscillatory and
N/3 chaotic memory attractors [Baird, 1990b, Baird and Eeckman, 1992b]. Single
modules with static, oscillatory, and three types of chaotic attractors - Lorenz,
Roessler, Ruelle-Takens - have been sucessfully used for recognition of handwritten
characters [Baird and Eeckman, 1992b].
We have shown in these modules a superior stability of oscillatory attractors over
static attractors in the presence of additive Gaussian noise perturbations with
the 1/f spectral character of the noise found experimentally by Freeman in the
brain [Baird and Eeckman, 1992a]. This may be one reason why the brain uses
dynamic attractors. An oscillatory attractor acts like a bandpass filter and is
237
238
Baird, Troyer, and Eeckman
effectively immune to the many slower macroscopic bias perturbations in the theta-alpha-beta range (3-25 Hz) below its 40-80 Hz passband, and the more microscopic
perturbations of single neuron spikes in the 100 - 1000 Hz range.
The mathematical foundation for the construction of network modules is contained
in the normal form projection algorithm [Baird, 1990b]. This is a learning algorithm for recurrent analog neural networks which allows associative memory storage
of analog patterns, continuous periodic sequences, and chaotic attractors in the same
network. A key feature of a net constructed by this algorithm is that the underlying dynamics is explicitly isomorphic to any of a class of standard, well understood
nonlinear dynamical systems - a "normal form" [Guckenheimer and Holmes, 1983].
This system is chosen in advance, independent of both the patterns to be stored
and the learning algorithm to be used. This control over the dynamics permits the
design of important aspects of the network dynamics independent of the particular patterns to be stored. Stability, basin geometry, and rates of convergence to
attractors can be programmed in the standard dynamical system.
By analyzing the network in the polar form of these "normal form coordinates",
the amplitude and phase dynamics have a particularly simple interaction. When
the input to a module is synchronized with its intrinsic oscillation, the amplitudes
of the periodic activity may be considered separately from the phase rotation, and
the network of the module may be viewed as a static network with these amplitudes
as its activity. We can further show analytically that the network modules we have
constructed have a strong tendency to synchronize as required.
3 Oscillatory Elman Architecture
Because we work with this class of mathematically well-understood associative memory networks, we can take a constructive approach to building a cortical computer
architecture, using these networks as modules in the same way that digital computers are designed from well behaved continuous analog flip-flop circuits. The
architecture is such that the larger system is itself a special case of the type of
network of the submodules, and can be analysed with the same tools used to design
the subnetwork modules.
Each module is described in normal form or "mode" coordinates as a k-winner-take-all network where the winning set of units may have static, periodic or chaotic
dynamics. By choosing modules to have only two attractors, networks can be built
which are similar to networks using binary units. There can be fully recurrent connections between modules. The entire super-network of connected modules, however, is itself a polynomial network that can be projected into standard network
coordinates. The attractors within the modules may then be distributed patterns
like those described for the biological model [Baird, 1990a], and observed experimentally in the olfactory system [Freeman and Baird, 1987]. The system is still
equivalent to the architecture of modules in normal form, however, and may easily
be designed, simulated, and theoretically evaluated in these coordinates. In this
paper all networks are discussed in normal form coordinates.
As a benchmark for the capabilities of the system, and to create a point of contact
to standard network architectures, we have constructed a discrete-time recurrent
"Elman" network [Elman, 1991] from oscillatory modules defined by ordinary differential equations. We have at present a system which functions as a finite state
automaton that perfectly recognizes or generates the infinite set of strings defined by
the Reber grammar described in Cleeremans et al. [Cleeremans et al., 1989]. The
connections for this network were found by pseudo-inverting to find the connection
matrices between a set of pre-chosen automata states for the hidden layer modules
and the proper possible output symbols of the Reber grammar, and between the
proper next hidden state and each legal combination of a new input symbol and the
present state contained in the context modules.
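The Reber grammar itself can be sketched as a lookup-table automaton. The transition table below is the one commonly given in the literature following Cleeremans et al.; its exact alphabet (seven letters B, T, S, X, V, P, E, with B and E as begin/end markers) is an assumption and may be labeled differently in the paper, which speaks of six symbols.

```python
import random

# Assumed Reber-grammar transition table: each state maps a legal symbol to
# the next state; strings start with B and end with E (state 7 = accept).
REBER = {
    0: {"B": 1},
    1: {"T": 2, "P": 3},
    2: {"S": 2, "X": 4},
    3: {"T": 3, "V": 5},
    4: {"X": 3, "S": 6},
    5: {"P": 4, "V": 6},
    6: {"E": 7},
}

def generate(rng):
    """Random walk through the grammar: at each state, pick one of the (at
    most two) legal symbols, mirroring the noise-driven choice made by the
    output module in the text."""
    state, out = 0, []
    while state != 7:
        sym, nxt = rng.choice(sorted(REBER[state].items()))
        out.append(sym)
        state = nxt
    return "".join(out)

def recognize(s):
    """Reject as soon as a symbol fails to match any legal transition, as
    the architecture does when a predicted symbol mismatches the input."""
    state = 0
    for sym in s:
        if sym not in REBER.get(state, {}):
            return False
        state = REBER[state][sym]
    return state == 7

rng = random.Random(0)
samples = [generate(rng) for _ in range(5)]
```

Because the S and T self-loops can repeat arbitrarily, the table defines an infinite set of strings, all accepted by `recognize`.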
We use two types of modules in implementing the Elman network architecture.
The input and output layer each consist of a single associative memory module
with six oscillatory attractors (six competing oscillatory modes), one for each of the
six possible symbols in the grammar. An attractor in these winner-take-all normal
form coordinates is one oscillator at its maximum amplitude, with the others near
zero amplitude. The hidden and context layers consist of binary "units" composed
of a two-competing-oscillator module. We think of one mode within the unit as
representing "1" and the other as representing "0" (see fig. 1).
A "weight" for this unit is simply defined to be the weight of a driving unit to the
input of the 1 attractor. The weights for the 0 side of the unit are then given as
the complement of these, w^0 = A − w^1. This forces the input to the 0 side of the
unit to be the complement of the input to the 1 side, I^0 = A − I^1, where A is a bias
constant chosen to divide input equally between the oscillators at the midpoint of
activation.
[Figure 1: The oscillatory Elman architecture. INPUT and CONTEXT modules feed the HIDDEN layer, which drives the OUTPUT module; identity maps return output to input and hidden to context.]
Information flow in the network is controlled by a "machine cycle" implemented
by the sinusoidal clocking of a bifurcation parameter which controls the level of
inhibitory inter-mode coupling or "competition" between the individual oscillatory
modes within each winner-take-all module.
For illustration, we use a binary module representing either a single hidden or context
unit; the behavior of the larger input and output modules is similar. Such a unit is
defined in polar normal form coordinates by the following equations:
ṙ_{1i} = u_i r_{1i} − c r_{1i} ( r_{1i}^2 + (d − b sin(ω_clock t)) r_{0i}^2 ) + Σ_j w_{ij} I_j cos(θ_j − θ_{1i})

ṙ_{0i} = u_i r_{0i} − c r_{0i} ( r_{0i}^2 + (d − b sin(ω_clock t)) r_{1i}^2 ) + Σ_j (A − w_{ij}) I_j cos(θ_j − θ_{0i})

θ̇_{1i} = ω_i + Σ_j w_{ij} (I_j / r_{1i}) sin(θ_j − θ_{1i})

θ̇_{0i} = ω_i + Σ_j (A − w_{ij}) (I_j / r_{0i}) sin(θ_j − θ_{0i})
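A minimal numerical sketch of the unit's amplitude and phase dynamics under a single phase-locked input, using plain Euler integration. All parameter values (u, c, d, b, the 40 Hz intrinsic frequency, the 1/10-frequency clock) are illustrative assumptions, not taken from the paper.

```python
import math

def simulate_unit(W, A=1.0, u=1.0, c=1.0, d=3.0, b=2.0,
                  w=2 * math.pi * 40, w_clk=2 * math.pi * 4,
                  dt=1e-4, steps=20000):
    """Euler-integrate the amplitude/phase equations of one binary unit.
    W is the total phase-locked input to the 1 side; A - W drives the 0 side."""
    r1, r0, th1, th0, th_in = 0.1, 0.1, 0.0, 0.0, 0.0
    for k in range(steps):
        t = k * dt
        comp = d - b * math.sin(w_clk * t)  # clocked competition level
        dr1 = u * r1 - c * r1 * (r1**2 + comp * r0**2) + W * math.cos(th_in - th1)
        dr0 = u * r0 - c * r0 * (r0**2 + comp * r1**2) + (A - W) * math.cos(th_in - th0)
        dth1 = w + (W / max(r1, 1e-6)) * math.sin(th_in - th1)
        dth0 = w + ((A - W) / max(r0, 1e-6)) * math.sin(th_in - th0)
        r1 += dt * dr1; r0 += dt * dr0
        th1 += dt * dth1; th0 += dt * dth0
        th_in += dt * w  # the input oscillates at the unit's intrinsic frequency
    return r1, r0

r1, r0 = simulate_unit(W=0.8)  # input biased toward the 1 attractor
```

With the input phase-locked, the cosine terms reduce to constant amplitude drives, and the side receiving the larger share of input wins.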
The clocked parameter b sin(ω_clock t) has lower (1/10) frequency than the intrinsic
frequency ω_i of the unit. Assuming that all inputs to the unit are phase-locked,
examination of the phase equations shows that the unit will synchronize with this
input. When the oscillators are phase-locked to the input, θ_j − θ_{1i} = 0, and the
phase terms cos(θ_j − θ_{1i}) = cos(0) = 1 disappear. This leaves the amplitude
equations ṙ_{1i} and ṙ_{0i} with static inputs Σ_j w_{ij} I_j and Σ_j (A − w_{ij}) I_j. The phase
equations show a strong tendency to phase-lock, since there is an attractor at zero
phase difference φ = θ_0 − θ_1 = θ_0 − ω_1 t = 0, and a repellor at 180 degrees in the
phase-difference equation φ̇ for either side of a unit driven by an input of the same
frequency, ω_1 − ω_0 = 0:

φ̇ = ω_0 − ω_1 + (r_1/r_0) sin(−φ),  so  φ = −sin^{−1}[ (r_0/r_1)(ω_1 − ω_0) ]
Thus we have a network module which approximates a static network unit in its
amplitude activity when fully phase-locked. Amplitude information is transmitted
between modules, with an oscillatory carrier. If the frequencies of attractors in the
architecture are randomly dispersed by a significant amount, phase-lags appear,
then synchronization is lost and improper transitions begin to occur.
For the remainder of the paper we assume the entire system is operating in the synchronized regime and examine the flow of information characterized by the pattern
of amplitudes of the oscillatory modes within the network.
4 Machine Cycle by Clocked Bifurcation
Given this assumption of a phase-locked system, the amplitude dynamics behave as
a gradient dynamical system for an energy function in r_{1i} and r_{0i}, where the total
input is I = Σ_j w_{ij} I_j and B = Σ_j I_j. Figures 2a and 2b show the energy
landscape with no external input for minimal and maximal levels of competition
respectively. External input simply adds a linear "tilt" to the landscape, with large
I giving a larger tilt toward the r_{1i} axis and small I a larger tilt toward the r_{0i} axis.
Note that for low levels of competition, there is a broad circular valley. When tilted
by external input, there is a unique equilibrium that is determined by the bias in
tilt along one axis over the other. Thinking of r_{1i} as the "activity" of the unit,
this activity becomes an increasing function of I. The module behaves as an analog
connectionist unit whose transfer function can be approximated by a sigmoid.
With high levels of competition, the unit will behave as a binary (bistable) "digital"
flip-flop element. There are two deep valleys, one on each axis. Hence the final
steady state of the unit is determined by which basin contains the initial state of the
system reached during the analog mode of operation before competition is increased
by the clock. This state changes little under the influence of external input: a
tilt will move the location of the valleys only slightly. Hence the unit performs
a winner-take-all choice on the coordinates of its initial state and maintains that
choice independent of external input.
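The two operating regimes can be sketched with the phase-locked amplitude equations alone: at low competition, different initial states relax to a single equilibrium, while at high competition the initial state selects the winner. Parameter values here are illustrative assumptions.

```python
def settle(r1, r0, I, comp, A=1.0, u=1.0, c=1.0, dt=1e-3, steps=20000):
    """Relax the phase-locked amplitude equations at a fixed competition level."""
    for _ in range(steps):
        dr1 = u * r1 - c * r1 * (r1**2 + comp * r0**2) + I
        dr0 = u * r0 - c * r0 * (r0**2 + comp * r1**2) + (A - I)
        r1 += dt * dr1; r0 += dt * dr0
    return r1, r0

# Low competition: one broad valley -- different initial states converge.
a = settle(0.9, 0.1, I=0.6, comp=0.5)
b = settle(0.1, 0.9, I=0.6, comp=0.5)
# High competition: two deep valleys -- the initial state decides the winner,
# even with symmetric input.
c1 = settle(0.9, 0.1, I=0.5, comp=4.0)
c2 = settle(0.1, 0.9, I=0.5, comp=4.0)
```

This mirrors the analog-to-digital switch described above: the same equations act as a sigmoidal unit when the cross-coupling is weak and as a bistable flip-flop when it is strong.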
[Figure 2a: energy landscape at low competition (one broad valley). Figure 2b: energy landscape at high competition (two deep valleys).]
We use this bifurcation in the behavior of the modules to control information flow
within the network. We think of the input and context modules as "sensory", and
the hidden and output modules as "motor" modules. The action of the clock is
applied reciprocally to these two sets (grouped by dotted lines in fig.1) so that
they alternately open to receive input from each other and make transitions of
attractors. This enables a network completely defined as a set of ordinary differential
equations to implement the discrete-time recurrent Elman network.
At the beginning of a machine cycle, the input and context layers are at high competition and hence their activity is "clamped" at the bottom of deep attractors.
The hidden and output modules are at low competition and therefore behave as a
traditional feedforward network free to take on analog values. Then the situation
reverses. As the competition comes up in the output module, it makes a winner-take-all choice as to the next symbol. Meanwhile high competition has quantized
and clamped the activity in the hidden layer to a fixed binary vector. Then competition is lowered in the input and context layers, freeing these modules from their
attractors.
Identity mappings from hidden to context and from output to input (gray arrows
in fig.1) "load" the binarized activity of the hidden layer to the context layer for
the next cycle, and "place" the generated output symbol into the input layer. For a
Reber grammar there are always two equally possible next symbols being generated
in the output layer, and we apply noise to break this symmetry and let the winner-take-all dynamics of the output module choose one. For the recognition mode of
operation, these symbols are thought of as "predicted" by the output, and one of
them must always match the next actual input of a string to be recognized or the
string is instantly rejected.
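Discretized, one machine cycle can be sketched as below. The lookup tables stand in for the learned attractor transitions, on a toy two-state automaton; this is an abstraction, not the paper's trained weights.

```python
import random

def machine_cycle(symbol, context, hidden_map, output_map, rng):
    """One clocked cycle. Phase 1: with input and context clamped, the
    hidden then output modules settle to attractors (here, table lookups).
    Phase 2: the identity maps load hidden -> context and the chosen output
    symbol -> input for the next cycle."""
    hidden = hidden_map[(symbol, context)]
    predicted = output_map[hidden]         # the legal next symbols
    nxt = rng.choice(sorted(predicted))    # applied noise breaks the tie
    return hidden, nxt, predicted

# Toy automaton: from state A, 'a' keeps A and 'b' moves to B;
# from B only 'a' is legal (back to A).
hidden_map = {('a', 'A'): 'A', ('b', 'A'): 'B', ('a', 'B'): 'A'}
output_map = {'A': {'a', 'b'}, 'B': {'a'}}

rng = random.Random(3)
context, symbol, trace = 'A', 'a', []
for _ in range(8):  # generation mode: emitted symbols feed back as input
    context, symbol, _ = machine_cycle(symbol, context, hidden_map,
                                       output_map, rng)
    trace.append(symbol)
```

In recognition mode the same cycle would be run with `symbol` taken from the string under test, rejecting as soon as it is absent from `predicted`.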
Note that even though the clocking is sinusoidal and these transitions are not sharp,
the system is robust and reliable. It is only necessary to set the rates of convergence
within modules to be faster than the rate of change of the clocked bifurcation
parameter, so that the modules are operating "adiabatically" - i.e. always internally
relaxed to an equilibrium that is moved slowly by the clocked parameter.
It is the bifurcation in the phase portrait of a module from one to two attractors
that contributes the essential "digitization" of the system in time and state. A
bifurcation is a discontinuous (topologically sharp) change in the phase portrait of
possibilities for the continuous dynamical behavior of a system that occurs as a
bifurcation parameter reaches a "critical" value. We can think of the analog mode
for a module as allowing input to prepare its initial state for the binary "decision"
between attractor basins that occurs as competition rises and the double potential
well appears.
The feedback between sensory and motor modules is effectively cut when one set
is clamped at high competition. The system can thus be viewed as operating in
discrete time by alternating transitions between a finite set of attracting states.
This kind of clocking and "buffering" (clamping) of some states while other states
relax is essential to the reliable operation of digital architectures. The clock input
on a flip-flop clamps its state until its signal inputs have settled and the choice of
transition can be made with the proper information available. In our simulations, if
we clock all modules to transition at once, the programmed sequences lose stability,
and we get transitions to unprogrammed fixed points and simple limit cycles for
the whole system.
5 Training
When the input and context modules are clamped at their attractors, and the hidden
and output modules are in the analog operating mode and synchronized to their
inputs, the network approximates the behavior of a standard feedforward network
in terms of its amplitude activities. Thus a real valued error can be defined for the
hidden and output units and standard learning algorithms like back propagation can
be used to train the connections.
We can use techniques of Giles et al. [Giles et al., 1992] who have trained simple
recurrent networks to become finite state automata that can recognize the regular
Tomita languages and others. If the context units are clamped with high competition, they are essentially "quantized" to take on only their 0 or 1 attractor values,
and the feedback connections from the hidden units cannot affect them. While
Giles et al. often do not quantize their units until the end of training to extract a
finite state automaton, they find that quantizing of the context units during training
like this increases learning speed in many cases [Giles et al., 1992]. In preparation
for learning in the dynamic architecture, we have successfully trained the backpropagation network of Cleeremans et al. with digitized context units and a shifted
sigmoid activation function that approximates the one calculated for our oscillatory
units.
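A sketch of the corresponding feedforward step with quantized ("digitized") context units, in the spirit of Giles et al.; the layer shapes, random weights, and plain logistic activation are illustrative assumptions rather than the paper's calculated transfer function.

```python
import numpy as np

def elman_step(x, context, Wxh, Wch, Who, competition_high=True):
    """One forward step of an Elman net whose context units are quantized,
    approximating context modules clamped at high competition.
    When competition is low, the analog hidden activities pass through."""
    h = 1.0 / (1.0 + np.exp(-(Wxh @ x + Wch @ context)))  # analog hidden layer
    y = Who @ h                                           # output scores
    new_context = (h > 0.5).astype(float) if competition_high else h
    return y, new_context

rng = np.random.default_rng(0)
Wxh = rng.normal(size=(4, 6))   # input (6 symbols, one-hot) -> 4 hidden units
Wch = rng.normal(size=(4, 4))   # context -> hidden
Who = rng.normal(size=(6, 4))   # hidden -> output
x = np.eye(6)[2]                # one-hot input symbol
y, ctx = elman_step(x, np.zeros(4), Wxh, Wch, Who)
```

A real-valued error on `y` and `h` can be backpropagated exactly as in a standard feedforward pass, since the quantization only enters on the next cycle's context.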
In the dynamic architecture, we have also the option of leaving the competition
within the context units at intermediate levels to allow them to take on analog
values in a variable sized neighborhood of the 0 or 1 attractors. Since our system
is recurrently connected by an identity map from hidden to context units, it will
relax to some equilibrium determined by the impact of the context units and the
clamped input on the hidden unit states, and the effect of the feedback from those
hidden states on the context states. We can thus further explore the impact on
learning of this range of operation between a discrete time-and-space automaton
and a continuous analog recurrent network.
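A toy sketch of this relaxation, assuming a single hidden layer fed back to the context layer through an identity map; the mixing rule that pulls context values toward their 0/1 attractors is an illustrative choice, with `competition = 1.0` giving fully quantized context units and `competition = 0.0` leaving them analog:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relax(x, W_in, W_ctx, competition=0.5, steps=200):
    """Settle hidden and context states to a joint equilibrium with the
    input x clamped.  `competition` in [0, 1] interpolates between free
    analog context values (0.0) and quantized 0/1 attractors (1.0)."""
    hidden = np.zeros(W_in.shape[0])
    context = np.zeros_like(hidden)
    for _ in range(steps):
        hidden = sigmoid(W_in @ x + W_ctx @ context)
        attractor = (hidden >= 0.5).astype(float)  # nearest 0/1 attractor
        # identity map hidden -> context, biased toward the attractor
        context = (1 - competition) * hidden + competition * attractor
    return hidden, context
```

At intermediate `competition` the context units take on analog values in a neighborhood of the 0 or 1 attractors, as described above.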
6 Discussion
The ability to operate as a finite automaton with oscillatory/chaotic "states" is
an important benchmark for this architecture, but only a subset of its capabilities.
At low to zero competition, the supra-system reverts to one large continuous dynamical system. We expect that this kind of variation of the operational regime,
especially with chaotic attractors inside the modules, though unreliable for habitual
behaviors, may nonetheless be very useful in other areas such as the search process
of reinforcement learning.
An important element of intra-cortical communication in the brain, and between
modules in this architecture, is the ability of a module to detect and respond to
the proper input signal from a particular module, when inputs from other modules
which are irrelevant to the present computation are contributing cross-talk and noise.
This is similar to the problem of coding messages in a computer architecture like the
Synchronization and Grammatical Inference in an Oscillating Elman Net
Connection Machine so that they can be picked up from the common communication bus line by the proper receiving module. We believe that synchronization is one
important aspect of how the brain solves this coding problem. Attractors in modules of the architecture may be frequency coded during learning so that they will
synchronize only with the appropriate active attractors in other modules that have
a similar resonant frequency. The same hardware (or "wetware") and connection
matrix can thus subserve many different networks of interaction between modules
at the same time without cross-talk problems.
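The idea that frequency-coded attractors synchronize only with similarly tuned partners can be illustrated with a Kuramoto-style pair of phase oscillators (an illustrative stand-in, not the paper's normal-form equations): two oscillators phase-lock only when their natural frequencies differ by less than twice the coupling strength.

```python
import math

def _step(p1, p2, w1, w2, k, dt):
    # Symmetric sinusoidal phase coupling between two oscillators.
    d1 = dt * (w1 + k * math.sin(p2 - p1))
    d2 = dt * (w2 + k * math.sin(p1 - p2))
    return p1 + d1, p2 + d2

def phase_locks(w1, w2, coupling=1.0, dt=0.01, settle=20000, window=1000):
    """Return True if the pair phase-locks (locking requires
    |w1 - w2| <= 2 * coupling for this model)."""
    p1, p2 = 0.0, 1.0
    for _ in range(settle):                      # let transients die out
        p1, p2 = _step(p1, p2, w1, w2, coupling, dt)
    diff_before = p1 - p2
    for _ in range(window):                      # check if the phase
        p1, p2 = _step(p1, p2, w1, w2, coupling, dt)
    return abs((p1 - p2) - diff_before) < 1e-3   # difference has frozen
```

Oscillators with mismatched resonant frequencies drift past each other and so contribute no coherent signal, which is the selectivity the text ascribes to frequency-coded attractors.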
This type of computing architecture and its learning algorithms for computation
with oscillatory spatial modes may be ideal for implementation in optical systems,
where electromagnetic oscillations, very high dimensional modes, and high processing speeds are available. The mathematical expressions for optical mode competition are nearly identical to our normal forms.
Acknowledgements
Supported by AFOSR-91-0325, and a grant from LLNL. It is a pleasure to acknowledge the invaluable assistance of Morris Hirsch and Walter Freeman.
References
[Baird, 1990a] Baird, B. (1990a). Bifurcation and learning in network models of
oscillating cortex. In Forrest, S., editor, Emergent Computation, pages 365-384.
North Holland. Also in Physica D, 42.
[Baird, 1990b] Baird, B. (1990b). A learning rule for CAM storage of continuous
periodic sequences. In Proc. Int. Joint Conf. on Neural Networks, San Diego,
pages 3:493-498.
[Baird and Eeckman, 1992a] Baird, B. and Eeckman, F. H. (1992a). A hierarchical
sensory-motor architecture of oscillating cortical area subnetworks. In Eeckman,
F. H., editor, Analysis and Modeling of Neural Systems II, pages 96-204, Norwell,
MA. Kluwer.
[Baird and Eeckman, 1992b] Baird, B. and Eeckman, F. H. (1992b). A normal form
projection algorithm for associative memory. In Hassoun, M. H., editor, Associative Neural Memories: Theory and Implementation, New York, NY. Oxford
University Press. In press.
[Cleeremans et al., 1989] Cleeremans, A., Servan-Schreiber, D., and McClelland, J.
(1989). Finite state automata and simple recurrent networks. Neural Computation, 1(3):372-381.
[Elman, 1991] Elman, J. (1991). Distributed representations, simple recurrent networks and grammatical structure. Machine Learning, 7(2/3):91.
[Freeman and Baird, 1987] Freeman, W. and Baird, B. (1987). Relation of olfactory
EEG to behavior: Spatial analysis. Behavioral Neuroscience, 101:393-408.
[Giles et al., 1992] Giles, C., Miller, C. B., Chen, D., Chen, H., Sun, G., and
Lee, Y. (1992). Learning and extracting finite state automata with second order
recurrent neural networks. Neural Computation, pages 393-405.
[Gray and Singer, 1987] Gray, C. M. and Singer, W. (1987). Stimulus dependent
neuronal oscillations in the cat visual cortex area 17. Neuroscience [Suppl],
22:1301P.
[Guckenheimer and Holmes, 1983] Guckenheimer, J. and Holmes, P. (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields.
Springer, New York.