Dataset columns (name: type, value or length range):
doi: string (length 10)
chunk-id: int64 (0 to 936)
chunk: string (401 to 2.02k characters)
id: string (12 to 14 characters)
title: string (8 to 162 characters)
summary: string (228 to 1.92k characters)
source: string (length 31)
authors: string (7 to 6.97k characters)
categories: string (5 to 107 characters)
comment: string (4 to 398 characters)
journal_ref: string (8 to 194 characters)
primary_category: string (5 to 17 characters)
published: string (length 8)
updated: string (length 8)
references: list
1703.04908
6
Related Work Recent years have seen substantial progress in practical natural language applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014), sentiment analysis (Socher et al. 2013), document summarization (Durrett, Berg-Kirkpatrick, and Klein 2016), and domain-specific dialogue (Dhingra et al. 2016). Much of this success is a result of intelligently designed statistical models trained on large static datasets. However, such approaches do not produce an understanding of language that can lead to productive cooperation with humans. An interest in a pragmatic view of language understanding has been longstanding (Austin 1962; Grice 1975) and
1703.04908#6
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
6
However, several common architectures and parametrizations in deep learning are already at odds with this conjecture, requiring at least some degree of refinement in the statements made. In particular, we show how the geometry of the associated parameter space can alter the ranking between prediction functions when considering several measures of flatness/sharpness. We believe the reason for this contradiction stems from the Bayesian arguments about KL-divergence made to justify the generalization ability of flat minima (Hinton & Van Camp, 1993). Indeed, Kullback-Leibler divergence is invariant to change of parameters whereas the notion of "flatness" is not. The demonstrations of Hochreiter & Schmidhuber (1997) are approximately based on a Gibbs formalism and rely on strong assumptions and approximations that can compromise the applicability of the argument, including the assumption of a discrete function space. In the literature, Hochreiter & Schmidhuber (1997) defines a flat minimum as "a large connected region in weight space where the error remains approximately constant". We interpret this formulation as follows:
1703.04933#6
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
6
# 2 Prototypical Networks # 2.1 Notation In few-shot classification we are given a small support set of N labeled examples S = {(x1, y1), . . . , (xN , yN )} where each xi ∈ RD is the D-dimensional feature vector of an example and yi ∈ {1, . . . , K} is the corresponding label. Sk denotes the set of examples labeled with class k. # 2.2 Model Prototypical networks compute an M-dimensional representation ck ∈ RM , or prototype, of each class through an embedding function fφ : RD → RM with learnable parameters φ. Each prototype is the mean vector of the embedded support points belonging to its class:

$$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i) \quad (1)$$

Given a distance function d : RM × RM → [0, +∞), prototypical networks produce a distribution over classes for a query point x based on a softmax over distances to the prototypes in the embedding space:

$$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))} \quad (2)$$
1703.05175#6
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
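The chunk above (1703.05175#6) defines prototypes as class means in an embedding space and classification as a softmax over negative distances to those prototypes. A minimal NumPy sketch of those two equations, assuming embeddings are already computed and using hypothetical helper names (not the authors' code):

```python
import numpy as np

def compute_prototypes(embeddings, labels, num_classes):
    """Mean embedded support point per class (Equation 1)."""
    return np.stack([embeddings[labels == k].mean(axis=0)
                     for k in range(num_classes)])

def predict_proba(query_embedding, prototypes):
    """Softmax over negative squared Euclidean distances (Equation 2)."""
    dists = ((prototypes - query_embedding) ** 2).sum(axis=1)
    logits = -dists
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    return exp / exp.sum()

# Toy usage: 2 classes, 3 support points each, 4-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])
protos = compute_prototypes(support, labels, num_classes=2)
print(predict_proba(rng.normal(size=4), protos))   # probabilities, sums to 1
```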
1703.04908
7
An interest in a pragmatic view of language understanding has been longstanding (Austin 1962; Grice 1975) and has recently been argued for in (Gauthier and Mordatch 2016; Lake et al. 2016; Lazaridou, Pham, and Baroni 2016). Pragmatic language use has been proposed in the context of two-player reference games (Golland, Liang, and Klein 2010; Vogel et al. 2014; Andreas and Klein 2016) focusing on the task of identifying object references through a learned language. (Winograd 1973; Wang, Liang, and Manning 2016) ground language in a physical environment, focusing on language interaction with humans for completion of tasks in the physical environment. In such a pragmatic setting, language use for communication of spatial concepts has received particular attention in (Steels 1995; Ullman, Xu, and Goodman 2016).
1703.04908#7
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
7
Definition 1. Given ε > 0, a minimum θ, and a loss L, we define C(L, θ, ε) as the largest (using inclusion as the partial order over the subsets of Θ) connected set containing θ such that ∀θ' ∈ C(L, θ, ε), L(θ') < L(θ) + ε. The ε-flatness will be defined as the volume of C(L, θ, ε). We will call this measure the volume ε-flatness. In Figure 1, C(L, θ, ε) will be the purple line at the top of the red area if the height is ε and its volume will simply be the length of the purple line. # 2 Definitions of flatness/sharpness
1703.04933#7
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
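Definition 1 in the chunk above measures flatness as the volume of the largest connected region around a minimum where the loss stays within ε of the minimum. A rough one-dimensional illustration under assumed toy losses (grid approximation of C(L, θ, ε); all names here are hypothetical):

```python
import numpy as np

def volume_eps_flatness_1d(loss, theta_min, eps, lo=-10.0, hi=10.0, n=100001):
    """Length of the largest connected interval around theta_min where
    loss(theta) < loss(theta_min) + eps (a grid approximation of C(L, theta, eps))."""
    grid = np.linspace(lo, hi, n)
    inside = loss(grid) < loss(theta_min) + eps
    i = np.searchsorted(grid, theta_min)
    # Expand left and right while staying inside the epsilon-sublevel set.
    left, right = i, i
    while left > 0 and inside[left - 1]:
        left -= 1
    while right < n - 1 and inside[right + 1]:
        right += 1
    return grid[right] - grid[left]

sharp = lambda t: 10.0 * t ** 2    # sharp minimum at 0
flat = lambda t: 0.1 * t ** 2      # flat minimum at 0
print(volume_eps_flatness_1d(sharp, 0.0, eps=1.0))  # small volume (~0.63)
print(volume_eps_flatness_1d(flat, 0.0, eps=1.0))   # larger volume (~6.32)
```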
1703.05175
7
$$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))} \quad (2)$$ Learning proceeds by minimizing the negative log-probability J(φ) = − log pφ(y = k | x) of the true class k via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss J(φ) for a training episode is provided in Algorithm 1. Algorithm 1 Training episode loss computation for prototypical networks. N is the number of examples in the training set, K is the number of classes in the training set, NC ≤ K is the number of classes per episode, NS is the number of support examples per class, NQ is the number of query examples per class. RANDOMSAMPLE(S, N ) denotes a set of N elements chosen uniformly at random from set S, without replacement.
1703.05175#7
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
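The chunk above (1703.05175#7) describes how a training episode is formed by sampling N_C classes, then N_S support and N_Q query examples per class. A small sketch of that sampling step, with hypothetical names and a toy dataset layout (a dict from class label to example identifiers):

```python
import random

def sample_episode(examples_by_class, n_classes, n_support, n_query, seed=None):
    """Return (support, query) dicts mapping class -> list of examples,
    mirroring the episode construction described for Algorithm 1."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(examples_by_class), n_classes)
    support, query = {}, {}
    for k in classes:
        pool = list(examples_by_class[k])
        rng.shuffle(pool)
        support[k] = pool[:n_support]                    # N_S support examples
        query[k] = pool[n_support:n_support + n_query]   # N_Q query examples
    return support, query

# Toy usage: 5 classes with 20 fake example ids each.
data = {k: [f"img_{k}_{i}" for i in range(20)] for k in range(5)}
support, query = sample_episode(data, n_classes=3, n_support=5, n_query=5, seed=0)
print({k: len(v) for k, v in support.items()}, {k: len(v) for k, v in query.items()})
```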
1703.04908
8
Aside from producing agents that can interact with humans through language, research in pragmatic language understanding can be informative to the fields of linguistics and cognitive science. Of particular interest in these fields has been the question of how syntax and compositional structure in language emerged, and why it is largely unique to human languages (Kirby 1999; Nowak, Plotkin, and Jansen 2000; Steels 2005). Models such as Rational Speech Acts (Frank and Goodman 2012) and Iterated Learning (Kirby, Griffiths, and Smith 2014) have been popular in cognitive science and evolutionary linguistics, but such approaches tend to rely on pre-specified procedures or models that limit their generality. The recent work that is most similar to ours is the application of reinforcement learning approaches towards the purposes of learning a communication protocol, as exemplified by (Bratman et al. 2010; Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016; Lazaridou, Peysakhovich, and Baroni 2016). # Problem Formulation
1703.04908#8
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
8
Figure 1: An illustration of the notion of flatness. The loss L as a function of θ is plotted in black. If the height of the red area is ε, the width will represent the volume ε-flatness. If the width is 2ε, the height will then represent the ε-sharpness. Best seen with colors. For conciseness, we will restrict ourselves to supervised scalar output problems, but several conclusions in this paper can apply to other problems as well. We will consider a function f that takes as input an element x from an input space X and outputs a scalar y. We will denote by fθ the prediction function. This prediction function will be parametrized by a parameter vector θ in a parameter space Θ. Often, this prediction function will be over-parametrized and two parameters (θ, θ') ∈ Θ² that yield the same prediction function everywhere, ∀x ∈ X, fθ(x) = fθ'(x), are called observationally equivalent. The model is trained to minimize a continuous loss function L which takes as argument the prediction function fθ. We will often think of the loss L as a function of θ and adopt the notation L(θ).
1703.04933#8
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
8
Input: Training set D = {(x1, y1), . . . , (xN , yN )}, where each yi ∈ {1, . . . , K}. Dk denotes the subset of D containing all elements (xi, yi) such that yi = k.
Output: The loss J for a randomly generated training episode.
V ← RANDOMSAMPLE({1, . . . , K}, NC)  ▷ Select class indices for episode
for k in {1, . . . , NC} do
  Sk ← RANDOMSAMPLE(DVk, NS)  ▷ Select support examples
  Qk ← RANDOMSAMPLE(DVk \ Sk, NQ)  ▷ Select query examples
  ck ← (1/NS) Σ(xi,yi)∈Sk fφ(xi)  ▷ Compute prototype from support examples
end for
J ← 0  ▷ Initialize loss
for k in {1, . . . , NC} do
  for (x, y) in Qk do
    J ← J + (1/(NC NQ)) [ d(fφ(x), ck) + log Σk' exp(−d(fφ(x), ck')) ]  ▷ Update loss
  end for
end for
# 2.3 Prototypical Networks as Mixture Density Estimation
1703.05175#8
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
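Algorithm 1 in the chunk above computes the episodic loss from class prototypes and query-point distances. A NumPy sketch of that loss computation, assuming the embedded support and query points are already given as arrays (function and variable names are hypothetical, not the paper's code):

```python
import numpy as np

def episode_loss(support_emb, support_labels, query_emb, query_labels, n_classes):
    """Average negative log-probability of the true class over all query points,
    using squared Euclidean distance to class prototypes."""
    prototypes = np.stack([support_emb[support_labels == k].mean(axis=0)
                           for k in range(n_classes)])
    # Pairwise squared distances, shape (num_query, n_classes).
    diffs = query_emb[:, None, :] - prototypes[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    logits = -dists
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(query_labels)), query_labels].mean()

rng = np.random.default_rng(0)
s_emb, q_emb = rng.normal(size=(10, 8)), rng.normal(size=(6, 8))
s_lab, q_lab = np.repeat([0, 1], 5), np.array([0, 1, 0, 1, 0, 1])
print(episode_loss(s_emb, s_lab, q_emb, q_lab, n_classes=2))
```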
1703.04908
9
# Problem Formulation The setting we are considering is a cooperative partially observable Markov game (Littman 1994), which is a multi-agent extension of a Markov decision process. A Markov game for N agents is defined by a set of states S describing the possible configurations of all agents, a set of actions A1, . . . , AN and a set of observations O1, . . . , ON for each agent. Initial states are determined by a distribution ρ : S → [0, 1]. State transitions are determined by a function T : S × A1 × . . . × AN → S. For each agent i, rewards are given by a function ri : S × Ai → ℝ, and observations are given by a function oi : S → Oi. To choose actions, each agent i uses a stochastic policy πi : Oi × Ai → [0, 1]. In this work, we assume all agents have identical action and observation spaces, and all agents act according to the same policy π and receive a shared reward. We consider a finite horizon setting, with episode length T. In a cooperative setting, the problem is to find a policy that maximizes the expected shared return for all agents, which can be solved as a joint minimization problem:
1703.04908#9
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
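The chunk above (1703.04908#9) formalizes the setting as a cooperative partially observable Markov game with a shared policy and shared reward. A minimal interface sketch of that formulation, with hypothetical class and method names (a sketch of the abstractions, not the authors' code):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MarkovGame:
    """Cooperative partially observable Markov game with N agents sharing a policy."""
    n_agents: int
    transition: Callable   # T(state, joint_action) -> next state
    reward: Callable       # r_i(state, action_i) -> float (shared across agents here)
    observe: Callable      # o_i(state, i) -> observation of agent i
    init_state: Callable   # samples s_0 from the initial distribution rho

    def rollout(self, policy: Callable, horizon: int) -> float:
        """Shared return R(pi): sum over timesteps and agents of r_i(s^t, a_i^t)."""
        state, total = self.init_state(), 0.0
        for _ in range(horizon):
            actions = [policy(self.observe(state, i)) for i in range(self.n_agents)]
            total += sum(self.reward(state, a) for a in actions)
            state = self.transition(state, actions)
        return total
```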
1703.04933
9
Flatness can also be defined using the local curvature of the loss function around the minimum if it is a critical point¹. Chaudhari et al. (2017); Keskar et al. (2017) suggest that this information is encoded in the eigenvalues of the Hessian. However, in order to compare how flat one minimum is versus another, the eigenvalues need to be reduced to a single number. Here we consider the spectral norm and trace of the Hessian, two typical measurements of the eigenvalues of a matrix. Additionally, Keskar et al. (2017) defines the notion of ε-sharpness. In order to make proofs more readable, we will slightly modify their definition. However, because of norm equivalence in finite dimensional space, our results will transfer to the original definition in full space as well. Our modified definition is the following: Definition 2. Let B2(ε, θ) be a Euclidean ball centered on a minimum θ with radius ε. Then, for a non-negative valued loss function L, the ε-sharpness will be defined as proportional to

$$\frac{\max_{\theta' \in B_2(\epsilon, \theta)} \big(L(\theta') - L(\theta)\big)}{1 + L(\theta)}.$$
1703.04933#9
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
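Definition 2 in the chunk above measures ε-sharpness as the largest loss increase within an ε-ball around the minimum, normalized by 1 + L(θ). A crude random-search estimate of that quantity for a toy quadratic loss (names are hypothetical; a real evaluation would solve the inner maximization as in Keskar et al.):

```python
import numpy as np

def eps_sharpness(loss, theta, eps, n_samples=20000, seed=0):
    """max_{theta' in B(eps, theta)} (L(theta') - L(theta)) / (1 + L(theta)),
    approximated by sampling points uniformly inside the epsilon-ball."""
    rng = np.random.default_rng(seed)
    d = theta.shape[0]
    directions = rng.normal(size=(n_samples, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = eps * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
    candidates = theta + radii * directions
    increase = max(loss(c) for c in candidates) - loss(theta)
    return increase / (1.0 + loss(theta))

H = np.diag([100.0, 1.0])               # one sharp and one flat direction
loss = lambda t: 0.5 * t @ H @ t
print(eps_sharpness(loss, np.zeros(2), eps=0.1))  # ~ eps^2 * ||H||_2 / 2 = 0.5
```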
1703.05175
9
# 2.3 Prototypical Networks as Mixture Density Estimation For a particular class of distance functions, known as regular Bregman divergences [4], the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence dφ is defined as:

$$d_\varphi(z, z') = \varphi(z) - \varphi(z') - (z - z')^T \nabla\varphi(z'), \quad (3)$$

where φ is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance ‖z − z'‖² and Mahalanobis distance. Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown [4] for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. Moreover, any regular exponential family distribution pψ(z|θ) with parameters θ and cumulant function ψ can be written in terms of a uniquely determined regular Bregman divergence [4]:
1703.05175#9
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
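The chunk above (1703.05175#9) states that squared Euclidean distance is a regular Bregman divergence. A quick numerical check of Equation (3) with φ(z) = ‖z‖², a standard fact; the helper names below are hypothetical:

```python
import numpy as np

def bregman(phi, grad_phi, z, z_prime):
    """d_phi(z, z') = phi(z) - phi(z') - (z - z')^T grad_phi(z')  (Equation 3)."""
    return phi(z) - phi(z_prime) - (z - z_prime) @ grad_phi(z_prime)

phi = lambda z: z @ z            # phi(z) = ||z||^2
grad_phi = lambda z: 2.0 * z     # gradient of ||z||^2

rng = np.random.default_rng(0)
z, z_prime = rng.normal(size=5), rng.normal(size=5)
print(np.isclose(bregman(phi, grad_phi, z, z_prime),
                 np.sum((z - z_prime) ** 2)))   # True: recovers ||z - z'||^2
```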
1703.04908
10
$$\max_\pi R(\pi), \quad \text{where } R(\pi) = \mathbb{E}\Big[\sum_{t=0}^{T} \sum_{i=0}^{N} r_i(s^t, a_i^t)\Big]$$

Figure 1: An example of environments we consider (depicting agents 1–3 and landmarks). # Grounded Communication Environment As argued in the introduction, grounding multi-agent communication in a physical environment is crucial for interesting communication behaviors to emerge. In this work, we consider a physically-simulated two-dimensional environment in continuous space and discrete time. This environment consists of N agents and M landmarks. Both agent and landmark entities inhabit a physical location in space p and possess descriptive physical characteristics, such as color and shape type. In addition, agents can direct their gaze to a location v. Agents can act to move in the environment and direct their gaze, but may also be affected by physical interactions with other agents. We denote the physical state of an entity (including descriptive characteristics) by x and describe its precise details and transition dynamics in the Appendix.
1703.04908#10
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
10
$$\frac{\max_{\theta' \in B_2(\epsilon, \theta)} \big(L(\theta') - L(\theta)\big)}{1 + L(\theta)}.$$

In Figure 1, if the width of the red area is 2ε then the height of the red area is max_{θ' ∈ B_2(ε, θ)} (L(θ') − L(θ)). ε-sharpness can be related to the spectral norm of the Hessian. Indeed, a second-order Taylor expansion of L around a critical point minimum is written

$$L(\theta') = L(\theta) + \frac{1}{2} (\theta' - \theta)^T (\nabla^2 L)(\theta) (\theta' - \theta) + o(\|\theta' - \theta\|^2).$$

In this second-order approximation, the ε-sharpness at θ would be

$$\frac{\|(\nabla^2 L)(\theta)\|_2\, \epsilon^2}{2\,(1 + L(\theta))}.$$

The notion of flatness/sharpness of a minimum is relative, therefore we will discuss metrics that can be used to compare the relative flatness between two minima. In this section we will formalize three used definitions of flatness in
¹ In this paper, we will often assume that is the case when dealing with Hessian-based measures in order to have them well-defined.
1703.04933#10
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
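The chunk above relates ε-sharpness to the spectral norm of the Hessian. A short worked derivation of that relation, under the stated second-order approximation at a critical-point minimum (so the Hessian is positive semi-definite and its largest eigenvalue equals its spectral norm):

```latex
% Second-order approximation at a critical-point minimum:
%   L(\theta') - L(\theta) \approx \tfrac{1}{2}\,(\theta'-\theta)^\top (\nabla^2 L)(\theta)\,(\theta'-\theta).
% Maximizing this quadratic form over the ball \|\theta'-\theta\|_2 \le \epsilon gives
\max_{\|\theta'-\theta\|_2 \le \epsilon} \tfrac{1}{2}\,(\theta'-\theta)^\top (\nabla^2 L)(\theta)\,(\theta'-\theta)
  = \tfrac{\epsilon^2}{2}\,\lambda_{\max}\!\big((\nabla^2 L)(\theta)\big)
  = \tfrac{\epsilon^2}{2}\,\big\|(\nabla^2 L)(\theta)\big\|_2,
\qquad\text{hence}\qquad
\epsilon\text{-sharpness} \approx \frac{\big\|(\nabla^2 L)(\theta)\big\|_2\,\epsilon^2}{2\,\big(1 + L(\theta)\big)}.
```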
1703.05175
10
$$p_\psi(z|\theta) = \exp\{z^T \theta - \psi(\theta) - g_\psi(z)\} = \exp\{-d_\varphi(z, \mu(\theta)) - g_\varphi(z)\} \quad (4)$$

Consider now a regular exponential family mixture model with parameters Γ = {θk, πk}, k = 1, . . . , K:

$$p(z|\Gamma) = \sum_{k=1}^{K} \pi_k\, p_\psi(z|\theta_k) = \sum_{k=1}^{K} \pi_k \exp(-d_\varphi(z, \mu(\theta_k)) - g_\varphi(z)) \quad (5)$$

Given Γ, inference of the cluster assignment y for an unlabeled point z becomes:

$$p(y = k \mid z) = \frac{\pi_k \exp(-d_\varphi(z, \mu(\theta_k)))}{\sum_{k'} \pi_{k'} \exp(-d_\varphi(z, \mu(\theta_{k'})))} \quad (6)$$

For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with fφ(x) = z and ck = µ(θk). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by dφ. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. # 2.4 Reinterpretation as a Linear Model
1703.05175#10
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
11
In addition to performing physical actions, agents utter verbal communication symbols c at every timestep. These utterances are discrete elements of an abstract symbol vocabulary C of size K. We do not assign any significance or meaning to these symbols. They are treated as abstract categorical variables that are emitted by each agent and observed by all other agents. It is up to agents at training time to assign meaning to these symbols. As shown in Section , these symbols become assigned to interpretable concepts. Agents may also choose not to utter anything at a given timestep, and there is a cost to making an utterance, loosely representing the metabolic effort of vocalization. We denote a vector representing one-hot encoding of symbol c with boldface c. Each agent has internal goals specified by vector g that are private and not observed by other agents. These goals are grounded in the physical environment and include tasks such as moving to or gazing at a location. These goals may involve other agents (requiring the other agent to move to a location, for example) but are not observed by them and thus necessitate coordination and communication between agents. Verbal utterances are one tool which the agents can use to cooperatively accomplish all goals, but we also observe emergent use of non-verbal signals and altogether non-communicative strategies.
1703.04908#11
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
11
would be $\frac{\|(\nabla^2 L)(\theta)\|_2\, \epsilon^2}{2\,(1 + L(\theta))}$. # 3 Properties of Deep Rectified Networks Before moving forward to our results, in this section we first introduce the notation used in the rest of the paper. Most of our results, for clarity, will be on the deep rectified feedforward networks with a linear output layer that we describe below, though they can easily be extended to other architectures (e.g. convolutional, etc.). Definition 3. Given K weight matrices (θk)k≤K with nk = dim(vec(θk)) and n = Σ_{k=1}^{K} nk, the output y of a deep rectified feedforward network with a linear output layer is:

$$y = \phi_{\mathrm{rect}}\big(\phi_{\mathrm{rect}}(\cdots \phi_{\mathrm{rect}}(x \cdot \theta_1) \cdots) \cdot \theta_{K-1}\big) \cdot \theta_K,$$

where
1703.04933#11
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
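Definition 3 in the chunk above describes a deep rectified feedforward network with a linear output layer and no biases. A minimal NumPy forward pass matching that definition (toy shapes and names are hypothetical):

```python
import numpy as np

def deep_rectified_forward(x, thetas):
    """y = phi_rect(...phi_rect(x . theta_1)...) . theta_K with a linear last layer."""
    h = x
    for theta in thetas[:-1]:
        h = np.maximum(h @ theta, 0.0)   # rectifier: elementwise positive part
    return h @ thetas[-1]                # linear output layer

rng = np.random.default_rng(0)
thetas = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16)), rng.normal(size=(16, 1))]
x = rng.normal(size=(4, 8))              # batch of 4 inputs
print(deep_rectified_forward(x, thetas).shape)   # (4, 1)
```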
1703.05175
11
# 2.4 Reinterpretation as a Linear Model A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(z, z') = ‖z − z'‖², then the model in Equation (2) is equivalent to a linear model with a particular parameterization [19]. To see this, expand the term in the exponent:

$$-\|f_\phi(x) - c_k\|^2 = -f_\phi(x)^T f_\phi(x) + 2 c_k^T f_\phi(x) - c_k^T c_k \quad (7)$$

The first term in Equation (7) is constant with respect to the class k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows:

$$2 c_k^T f_\phi(x) - c_k^T c_k = w_k^T f_\phi(x) + b_k, \quad \text{where } w_k = 2 c_k \text{ and } b_k = -c_k^T c_k \quad (8)$$

We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., [14, 28].
1703.05175#11
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
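The chunk above (1703.05175#11) shows that, with squared Euclidean distance, the prototypical classifier is a linear model with w_k = 2c_k and b_k = −c_kᵀc_k. A quick numerical check of that equivalence (hypothetical helper names, not the paper's code):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
f_x = rng.normal(size=8)                      # embedded query point f_phi(x)
prototypes = rng.normal(size=(3, 8))          # prototypes c_k for 3 classes

# Distance-based logits: -||f(x) - c_k||^2
dist_logits = -((f_x - prototypes) ** 2).sum(axis=1)

# Linear-model logits: w_k^T f(x) + b_k with w_k = 2 c_k, b_k = -c_k^T c_k
w, b = 2.0 * prototypes, -(prototypes ** 2).sum(axis=1)
lin_logits = w @ f_x + b

# The two differ only by the class-independent term -||f(x)||^2, so softmax agrees.
print(np.allclose(softmax(dist_logits), softmax(lin_logits)))   # True
```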
1703.04908
12
To aid in accomplishing goals, each agent has an internal recurrent memory bank m that is also private and not observed by other agents. This memory bank has no pre-designed behavior and it is up to the agents to learn to utilize it appropriately. The full state of the environment is given by s = [ x_{1,...,(N+M)}  c_{1,...,N}  m_{1,...,N}  g_{1,...,N} ] ∈ S. Each agent observes physical states of all entities in the environment, verbal utterances of all agents, and its own private memory and goal vector. The observation for agent i is o_i(s) = [ ix_{1,...,(N+M)}  c_{1,...,N}  m_i  g_i ], where ix_j is the observation of entity j's physical state in agent i's reference frame (see Appendix for details). More intricate observation models are possible, such as physical observations solely from pixels or verbal observations from a single input channel. These models would require agents learning to perform visual processing and source separation, which are orthogonal to this work. Despite the dimensionality of observations varying with the number of physical entities and communication streams, our policy architecture as described in Section allows a single policy parameterization across these variations.
1703.04908#12
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
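The chunk above (1703.04908#12) defines each agent's observation as a concatenation of entity physical states (in the agent's reference frame), all utterances, and the agent's private memory and goal. A small sketch of that assembly, with hypothetical field names and a simple translation-based choice of reference frame:

```python
import numpy as np

def observation(agent_i, physical, utterances, memories, goals, positions):
    """o_i(s): all entity physical states in agent i's frame, all utterances,
    plus agent i's private memory and goal."""
    # Shift entity positions into agent i's reference frame (one simple choice of frame).
    in_frame = physical - positions[agent_i]
    return np.concatenate([in_frame.ravel(), utterances.ravel(),
                           memories[agent_i], goals[agent_i]])

N, M = 3, 2                                   # agents, landmarks
rng = np.random.default_rng(0)
physical = rng.normal(size=(N + M, 2))        # entity positions
positions = physical[:N]                      # agents' own positions
utterances = np.eye(4)[rng.integers(0, 4, size=N)]   # one-hot symbols, |C| = 4
memories, goals = rng.normal(size=(N, 8)), rng.normal(size=(N, 5))
print(observation(0, physical, utterances, memories, goals, positions).shape)
```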
1703.04933
12
$y = \phi_{\mathrm{rect}}\big(\phi_{\mathrm{rect}}(\cdots \phi_{\mathrm{rect}}(x \cdot \theta_1) \cdots) \cdot \theta_{K-1}\big) \cdot \theta_K$, where
• x is the input to the model, a high-dimensional vector
• φrect is the rectified elementwise activation function (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011), which is the positive part applied elementwise, (zi)i ↦ (max(zi, 0))i
• vec reshapes a matrix into a vector.
Figure 2: An illustration of the effects of non-negative homogeneity. The graph depicts level curves of the behavior of the loss L embedded into the two-dimensional parameter space with the axes given by θ1 and θ2. Specifically, each line of a given color corresponds to the parameter assignments (θ1, θ2) that result observationally in the same prediction function fθ. Best seen with colors.
1703.04933#12
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
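Figure 2 in the chunk above illustrates non-negative homogeneity: scaling one rectified layer by α > 0 and the next by 1/α yields an observationally equivalent network. A small numerical demonstration of that property on a toy two-layer rectified network (names are hypothetical):

```python
import numpy as np

def two_layer_rectified(x, theta1, theta2):
    """y = phi_rect(x . theta_1) . theta_2 (bias-free, linear output layer)."""
    return np.maximum(x @ theta1, 0.0) @ theta2

rng = np.random.default_rng(0)
theta1, theta2 = rng.normal(size=(6, 10)), rng.normal(size=(10, 1))
x = rng.normal(size=(32, 6))

alpha = 7.3   # any alpha > 0 gives an observationally equivalent parameterization
y_original = two_layer_rectified(x, theta1, theta2)
y_rescaled = two_layer_rectified(x, alpha * theta1, theta2 / alpha)
print(np.allclose(y_original, y_rescaled))   # True: same prediction function
```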
1703.05175
12
# 2.5 Comparison to Matching Networks Prototypical networks differ from matching networks in the few-shot case, but the two are equivalent in the one-shot scenario. Matching networks [29] produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, ck = xk since there is only one support point per class, and matching networks and prototypical networks become equivalent. A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. [19] and Rippel et al. [25]; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods.
1703.05175#12
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
13
Figure 2: The transition dynamics of N agents from time t − 1 to t. Dashed lines indicate one-to-one dependencies between agents and solid lines indicate all-to-all dependencies. Policy Learning with Backpropagation Each agent acts by sampling actions from a stochastic policy π, which is identical for all agents and defined by parameters θ. There are several common options for finding optimal policy parameters. The model-free framework of Q-learning can be used to find the optimal state-action value function, and employ a policy that acts greedily according to the value function. Unfortunately, Q function dimensionality scales quadratically with communication vocabulary size, which can quickly become intractably large. Alternatively, it is possible to directly learn a policy function using model-free policy gradient methods, which use sampling to estimate the gradient of the policy return dR/dθ. The gradient estimates from these methods can exhibit very high variance, and credit assignment becomes an especially difficult problem in the presence of sequential communication actions.
1703.04908#13
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
13
• vec reshapes a matrix into a vector. Note that in our definition we excluded the bias terms, usually found in any neural architecture. This is done mainly for convenience, to simplify the rendition of our arguments. However, the arguments can be extended to the case that includes biases (see Appendix B). Another choice is that of the linear output layer. Having an output activation function does not affect our argument either: since the loss is a function of the output activation, it can be rephrased as a function of the linear pre-activation. Deep rectifier models have certain properties that allow us in Section 4 to arbitrarily manipulate the flatness of a minimum. An important topic for optimization of neural networks is understanding the non-Euclidean geometry of the parameter space as imposed by the neural architecture (see, for example, Amari, 1998). In principle, when we take a step in parameter space what we expect to control is the change in the behavior of the model (i.e. the mapping of the input x to the output y). In principle we are not interested in the parameters per se, but rather only in the mapping they represent.
1703.04933#13
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
13
Vinyals et al. [29] propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. # 2.6 Design Choices Distance metric Vinyals et al. [29] and Ravi and Larochelle [22] apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold.
1703.05175#13
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
14
Instead of using model-free reinforcement learning methods, we build an end-to-end differentiable model of all agent and environment state dynamics over time and calculate dR/dθ with backpropagation. At every optimization iteration, we sample a new batch of 1024 random environment instantiations and backpropagate their dynamics through time to calculate the total return gradient. Figure 2 shows the dependency chain between two timesteps. A similar approach was employed by (Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016) to compute gradients for communication actions, although the latter still employed model-free methods for physical action computation. The physical state dynamics, including discontinuous contact events, can be made differentiable with smoothing. However, communication actions require emission of discrete symbols, which present difficulties for backpropagation.
1703.04908#14
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
14
If one defines a measure for the change in the behavior of the model, which can be done under some assumptions, then it can be used to define, at any point in the parameter space, a metric that specifies the equivalent change in the parameters for a unit of change in the behavior of the model. As it turns out, for neural networks, this metric is not constant over Θ. Intuitively, the metric is related to the curvature, and since neural networks can be highly non-linear, the curvature will not be constant. See Amari (1998); Pascanu & Bengio (2014) for more details. Coming back to the concept of flatness or sharpness of a minimum, this metric should define the flatness.
1703.04933#14
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
14
Episode composition A straightforward way to construct episodes, used in Vinyals et al. [29] and Ravi and Larochelle [22], is to choose Nc classes and NS support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 5-way classification and 1-shot learning, then training episodes could be comprised of Nc = 5, NS = 1. We have found, however, that it can be extremely beneficial to train with a higher Nc, or “way”, than will be used at test-time. In our experiments, we tune the training Nc on a held-out validation set. Another consideration is whether to match NS, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. # 2.7 Zero-Shot Learning Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector vk for each class. These could be determined
1703.05175#14
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
15
Discrete Communication and Gumbel-Softmax Estimator In order to use categorical communication emissions c in our setting, it must be possible to differentiate through them. There has been a wealth of work in machine learning on differentiable models with discrete variables, but we found the recent approach in (Jang, Gu, and Poole 2016; Maddison, Mnih, and Teh 2016) to be particularly effective in our setting. The approach proposes a Gumbel-Softmax distribution, which is a continuous relaxation of a discrete categorical distribution. Given K-categorical distribution parameters p, a differentiable K-dimensional one-hot encoding sample G from the Gumbel-Softmax distribution can be calculated as: G(log p)_k = exp((log p_k + ε_k)/τ) / Σ_{j=0}^{K−1} exp((log p_j + ε_j)/τ)
1703.04908#15
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
15
However, the geometry of the parameter space is more complicated. Regardless of the measure chosen to compare two instantiations of a neural network, because of the structure of the model, it also exhibits a large number of symmetric configurations that result in exactly the same behavior. Because the rectifier activation has the non-negative homogeneity property, as we will see shortly, one can construct a continuum of points that lead to the same behavior, hence the metric is singular. This means that one can exploit these directions in which the model stays unchanged to shape the neighbourhood around a minimum in such a way that, by most definitions of flatness, this property can be controlled. See Figure 2 for a visual depiction, where the flatness (given here as the distance between the different level curves) can be changed by moving along the curve.
1703.04933#15
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
15
Table 1: Few-shot classification accuracies on Omniglot.

| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|---|---|
| MATCHING NETWORKS [29] | Cosine | N | 98.1% | 98.9% | 93.8% | 98.5% |
| MATCHING NETWORKS [29] | Cosine | Y | 97.9% | 98.7% | 93.5% | 98.7% |
| NEURAL STATISTICIAN [6] | - | N | 98.1% | 99.5% | 93.2% | 98.1% |
| PROTOTYPICAL NETWORKS (OURS) | Euclid. | N | 98.8% | 99.7% | 96.0% | 98.9% |

in advance, or they could be learned from e.g., raw text [7]. Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define ck = gϑ(vk) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g to have unit length, however we do not constrain the query embedding f.
1703.05175#15
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
16
G(log p)_k = exp((log p_k + ε_k)/τ) / Σ_{j=0}^{K−1} exp((log p_j + ε_j)/τ) Where ε_k are i.i.d. samples from the Gumbel(0, 1) distribution, ε = −log(−log(u)), u ∼ U[0, 1], and τ is a softmax temperature parameter. We did not find it necessary to anneal the temperature and set it to 1 in all our experiments for training, and sample directly from the categorical distribution at test time. To emit a communication symbol, our policy is trained to directly output log p ∈ RK, which is transformed to a symbol emission sample c ∼ G(log p). The resulting gradient can be estimated by differentiating through the relaxed sample G(log p). Policy Architecture The policy class we consider in this work is that of stochastic neural networks. The policy outputs samples of an agent’s physical actions u, communication symbol utterance c, and internal memory updates ∆m. The policy must consolidate multiple incoming communication symbol streams emitted by other agents, as well as incoming observations of physical entities. Importantly, the number of agents (and thus the
1703.04908#16
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
16
# 4 Deep Rectified networks and flat minima Let us redefine, for convenience, the non-negative homogeneity property (Neyshabur et al., 2015; Lafond et al., 2016) below. Note that besides this property, the reason for studying the rectified linear activation is its widespread adoption (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016). Definition 4. A function φ is non-negative homogeneous if ∀(z, α) ∈ R × R+, φ(αz) = αφ(z). In this section we exploit the resulting strong non-identifiability to showcase a few shortcomings of some definitions of flatness. Although an α-scale transformation does not affect the function represented, it allows us to significantly decrease several measures of flatness. For another definition of flatness, α-scale transformations show that all minima are equally flat. # 4.1 Volume ε-flatness
1703.04933#16
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04908
17
Figure 3: Overview of our policy architecture, mapping observations to actions at every point in time. FC indicates a fully-connected processing module that shares weights with all others of its label. pool indicates a softmax pooling layer. number of communication streams) and number of physical entities can vary between environment instantiations. To support this, the policy instantiates a collection of identical processing modules for each communication stream and each observed physical entity. Each processing module is a fully-connected multi-layer perceptron. The weights between all communication processing and physical observation modules are shared. The outputs of individual processing modules are pooled with a softmax operation into feature vectors φc and φx for communication and physical observation streams, respectively. Such weight sharing and pooling makes it possible to apply the same policy parameters to any number of communication and physical observations. The pooled features and agent’s private goal vector are passed to the final processing module that outputs distribution parameters [ψu, ψc] from which action samples are generated as u = ψu + ε and c ∼ G(ψc), where ε is zero-mean Gaussian noise.
1703.04908#17
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
17
∀(z, α) ∈ R × R+, φ(αz) = αφ(z). # 4.1 Volume ε-flatness Theorem 1. The rectified function φrect(x) = max(x, 0) is non-negative homogeneous. Proof. Follows trivially from the constraint that α > 0, given that x > 0 ⇒ αx > 0, iff α > 0. For a deep rectified neural network it means that φrect(x · (αθ1)) · θ2 = φrect(x · θ1) · (αθ2), meaning that for this one (hidden) layer neural network, the parameters (αθ1, θ2) are observationally equivalent to (θ1, αθ2). This observational equivalence similarly holds for convolutional layers. Theorem 2. For a one-hidden layer rectified neural network of the form y = φrect(x · θ1) · θ2, and a minimum θ = (θ1, θ2), such that θ1 ≠ 0 and θ2 ≠ 0, ∀ε > 0, C(L, θ, ε) has an infinite volume.
1703.04933#17
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
17
# 3.1 Omniglot Few-shot Classification Omniglot [16] is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. [29] by resizing the grayscale images to 28 × 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. [29] and is composed of four convolutional blocks. Each block comprises a 64-filter 3 × 3 convolution, batch normalization layer [10], a ReLU nonlinearity and a 2 × 2 max-pooling layer. When applied to the 28 × 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam [11]. We used an initial learning rate of 10−3 and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization.
1703.05175#17
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
18
Unlike communication games where agents only emit a single utterance, our agents continually emit a stream of symbols over time. Thus processing modules that read and write communication utterance streams benefit greatly from recurrent memory that can capture the meaning of a stream over time. To this end, we augment each communication processing and output module with an independent internal memory state m, and each module outputs memory state updates ∆m. In this work we use simple additive memory updates mt = tanh(mt−1 + ∆mt−1 + ε) for simplicity and interpretability, but other memory architectures such as LSTMs can be used. We build all fully-connected modules with 256 hidden units and 2 layers each in all our experiments, using exponential-linear units and dropout with a rate of 0.1 between all hidden layers. The size of the feature vectors φ is 256 and the size of each memory module is 32. The overall policy architecture is shown in Figure 3.
1703.04908#18
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
18
We will not consider the solution θ where any of the weight matrices θ1, θ2 is zero, θ1 = 0 or θ2 = 0, as it results in a constant function which we will assume to give poor training performance. For α > 0, the α-scale transformation Tα : (θ1, θ2) ↦ (αθ1, α−1θ2) has Jacobian determinant α^(n1−n2), where once again n1 = dim(vec(θ1)) and n2 = dim(vec(θ2)). Note that the Jacobian determinant of this linear transformation is the change in the volume induced by Tα, and Tα ∘ Tβ = Tαβ. We show below that there is a connected region containing θ with infinite volume and where the error remains approximately constant. Given this non-negative homogeneity, if (θ1, θ2) ≠ (0, 0) then {(αθ1, α−1θ2), α > 0} is an infinite set of observationally equivalent parameters, inducing a strong non-identifiability in this learning scenario. Other models like deep linear networks (Saxe et al., 2013), leaky rectifiers (He et al., 2015) or maxout networks (Goodfellow et al., 2013) also have this non-negative homogeneity property. In what follows we will rely on such transformations, in particular we will rely on the following definition:
1703.04933#18
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
18
We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician [6] and both the fine-tuned and non-fine-tuned versions of matching networks [29]. We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. # 3.2 miniImageNet Few-shot Classification
1703.05175#18
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
19
Auxiliary Prediction Reward To help policy training avoid local minima in more complex environments, we found it helpful to include auxiliary goal prediction tasks, similar to recent work in reinforcement learning (Dosovitskiy and Koltun 2016; Silver et al. 2016). In agent i’s policy, each communication processing module j additionally outputs a prediction ĝi,j of agent j’s goals. We do not use ĝ as an input in calculating actions. It is only used for the purposes of the auxiliary prediction task. At the end of the episode, we add a reward for predicting other agents’ goals, which in turn encourages communication utterances that convey the agent’s goals clearly to other agents. Across all agents this reward has the form: r_g = − Σ_{i,j : i≠j} ‖ĝ_{i,j} − g_j‖²
1703.04908#19
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
19
In what follows we will rely on such transformations, in particular we will rely on the following definition: Definition 5. For a single hidden layer rectifier feedforward network we define the family of transformations Tα : (θ1, θ2) ↦ (αθ1, α−1θ2), which we refer to as α-scale transformations. Proof (of Theorem 2). We will first introduce a small region with approximately constant error around θ with non-zero volume. Given ε > 0, if we consider the loss function continuous with respect to the parameters, C(L, θ, ε) is an open set containing θ. Since we also have θ1 ≠ 0 and θ2 ≠ 0, let r > 0 such that the ℓ∞ ball B∞(r, θ) is in C(L, θ, ε) and has empty intersection with {θ : θ1 = 0}. Let v = (2r)^(n1+n2) > 0 be the volume of B∞(r, θ).
1703.04933#19
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
19
# 3.2 miniImageNet Few-shot Classification The miniImageNet dataset, originally proposed by Vinyals et al. [29], is derived from the larger ILSVRC-12 dataset [26]. The splits used by Vinyals et al. [29] consist of 60,000 color images of size 84 × 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle [22] in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also
Table 2: Few-shot classification accuracies on miniImageNet. All accuracy results are averaged over 600 test episodes and are reported with 95% confidence intervals. ∗Results reported by [22].
1703.05175#19
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
20
r_g = − Σ_{i,j : i≠j} ‖ĝ_{i,j} − g_j‖² Compositionality and Vocabulary Size What leads to compositional syntax formation? One known constructive hypothesis requires modeling the process of language transmission and acquisition from one generation of agents to the next iteratively, as in (Kirby, Griffiths, and Smith 2014). In such an iterated learning setting, compositionality emerges due to poverty of stimulus - one generation will only observe a limited number of symbol utterances from the previous generation and must infer the meaning of unseen symbols. This approach requires modeling language acquisition between agents, but when implemented with pre-designed rules it was shown over multiple iterations between generations to lead to the formation of a compositional vocabulary.
1703.04908#20
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
20
Tα : (θ1, θ2) ↦ (αθ1, α−1θ2), which we refer to as an α-scale transformation. Note that an α-scale transformation will not affect the generalization, as the behavior of the function is identical. Also, while the transformation is only defined for a single layer rectified feedforward network, it can trivially be extended to any architecture having a single rectified network as a submodule, e.g. a deep rectified feedforward network. For simplicity and readability we will rely on this definition. Since the Jacobian determinant of Tα is the multiplicative change of volume induced by Tα, the volume of Tα(B∞(r, θ)) is v·α^(n1−n2). If n1 ≠ n2, we can arbitrarily grow the volume of Tα(B∞(r, θ)), with error within an ε-interval of L(θ), by having α tend to +∞ if n1 > n2 or to 0 otherwise. If n1 = n2, then ∀α > 0, Tα(B∞(r, θ)) has volume v. Let C′ = ∪_{α>0} Tα(B∞(r, θ)). C′ is a connected region where the error remains approximately constant, i.e. within an ε-interval of L(θ).
1703.04933#20
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
20
| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot |
|---|---|---|---|---|
| BASELINE NEAREST NEIGHBORS∗ | Cosine | N | 28.86 ± 0.54% | 49.79 ± 0.79% |
| MATCHING NETWORKS [29]∗ | Cosine | N | 43.40 ± 0.78% | 51.09 ± 0.71% |
| MATCHING NETWORKS FCE [29]∗ | Cosine | N | 43.56 ± 0.84% | 55.31 ± 0.73% |
| META-LEARNER LSTM [22]∗ | - | N | 43.44 ± 0.77% | 60.60 ± 0.71% |
| PROTOTYPICAL NETWORKS (OURS) | Euclid. | N | 49.42 ± 0.78% | 68.20 ± 0.66% |

[Figure 2: grouped bar chart of 5-way test accuracy (1-shot and 5-shot) for matching and prototypical networks, comparing cosine vs. Euclidean distance and 5-way vs. 20-way training episodes.]
1703.05175#20
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
21
Alternatively, (Nowak, Plotkin, and Jansen 2000) observed that emergence of compositionality requires the number of concepts describable by a language to be above a factor of the vocabulary size. In our preliminary environments the number of concepts to communicate is still fairly small and is within the capacity of a non-compositional language. We use a maximum vocabulary size K = 20 in all our experiments. We tested a smaller maximum vocabulary size, but found that policy optimization became stuck in a poor local minimum where concepts became conflated. Instead, we propose to use a large vocabulary size limit but use a soft penalty function to prevent the formation of unnecessarily large vocabularies. This allows the intermediate stages of policy optimization to explore large vocabularies, but then converge on an appropriate active vocabulary size. As shown in Figure 6, this is indeed what happens. How do we penalize large vocabulary sizes? (Nowak, Plotkin, and Jansen 2000) proposed a word population dynamics model that defines reproductive ratios of words to be proportional to their frequency, making already popular words more likely to survive. Inspired by these rich-get-richer dynamics, we model the communication symbols as being generated from a Dirichlet Process (Teh 2011). Each communication symbol has a probability of being symbol ck as
1703.04908#21
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
21
Let α = (‖θ1‖∞ + r)/r. Since B∞(r, θ) = B∞(r, θ1) × B∞(r, θ2),
curvature (e.g. Desjardins et al., 2015; Salimans & Kingma, 2016). In this section we look at two widely used measures of the Hessian, the spectral radius and trace, showing that either of these values can be manipulated without actually changing the behavior of the function. If the flatness of a minimum is defined by any of these quantities, then it could also be easily manipulated. Theorem 3. The gradient and Hessian of the loss L with respect to θ can be modified by Tα. Proof. L(θ1, θ2) = L(αθ1, α−1θ2), we have then by differentiation
1703.04933#21
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
21
Figure 2: Comparison showing the effect of distance metric and number of classes per training episode on 5-way classification accuracy for both matching and prototypical networks on miniImageNet. The x-axis indicates configuration of the training episodes (way, distance, and shot), and the y-axis indicates 5-way test accuracy for the corresponding shot. Error bars indicate 95% confidence intervals as computed over 600 test episodes. Note that matching networks and prototypical networks are identical in the 1-shot case. use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle [22], which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin.
1703.05175#21
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
22
p(c_k) = n_k / (α + n − 1) Where n_k is the number of times symbol c_k has been uttered and n is the total number of symbols uttered. These counts are accumulated over agents, timesteps, and batch entries. α is a Dirichlet Process hyperparameter corresponding to the probability of observing an out-of-vocabulary word. The resulting reward across all agents is the log-likelihood of all communication utterances to independently have been generated by a Dirichlet Process: r_c = Σ_{i,t,k} 1[c^i_t = c_k] log p(c_k) Maximizing this reward leads to consolidation of symbols and the formation of compositionality. This approach is similar to encouraging code population sparsity in autoencoders (Ng 2011), which was shown to give rise to compositional representations for images. # Experiments
1703.04908#22
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
22
Proof. L(θ1, θ2) = L(αθ1, α−1θ2), we have then by differentiation
Figure 3: An illustration of how we build different disjoint volumes using Tα. In this two-dimensional example, Tα(B∞(r′, θ)) and B∞(r′, θ) have the same volume. B∞(r′, θ), Tα(B∞(r′, θ)), Tα²(B∞(r′, θ)), ... will therefore be a sequence of disjoint constant volumes. C′ will therefore have an infinite volume. Best seen with colors.
(∇L)(θ1, θ2) = (∇L)(αθ1, α−1θ2) · diag(α I_{n1}, α−1 I_{n2}) ⇔ (∇L)(αθ1, α−1θ2) = (∇L)(θ1, θ2) · diag(α−1 I_{n1}, α I_{n2})
and where × is the Cartesian set product, we have Tα(B∞(r, θ)) = B∞(αr, αθ1) × B∞(α−1r, α−1θ2).
(∇²L)(αθ1, α−1θ2) = diag(α−1 I_{n1}, α I_{n2}) · (∇²L)(θ1, θ2) · diag(α−1 I_{n1}, α I_{n2})
1703.04933#22
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
22
We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. # 3.3 CUB Zero-shot Classification
1703.05175#22
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
23
# Experiments

We experimentally investigate how variation in goals, environment configuration, and agents' physical capabilities lead to different communication strategies. In this work, we consider three types of actions an agent needs to perform: go to location, look at location, and do nothing. The goal for agent i consists of an action to perform, a location r̄ to perform it at, and an agent r that should perform that action. These goal properties are accumulated into a goal description vector g. These goals are private to each agent, but may involve other agents. For example, agent i may want agent r to go to location r̄. This goal is not observed by agent r, and requires communication between agents i and r. The goals are assigned to agents such that no agent receives conflicting goals. We do, however, show generalization in the presence of conflicting goals in Section .
1703.04908#23
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
23
$(\nabla^2 L)(\alpha\theta_1, \alpha^{-1}\theta_2) = \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix} (\nabla^2 L)(\theta_1, \theta_2) \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix}.$

Therefore, $T_\alpha(B_\infty(r, \theta)) \cap B_\infty(r, \theta) = \emptyset$ (see Figure 3). Similarly, $B_\infty(r, \theta), T_\alpha(B_\infty(r, \theta)), T_\alpha^2(B_\infty(r, \theta)), \ldots$ are disjoint and have volume $v$. We have also $T_\alpha^k(B_\infty(r', \theta)) = T_{\alpha^k}(B_\infty(r', \theta)) \subseteq C'$. The volume of $C'$ is then lower bounded by $0 < v + v + v + \cdots$ and is therefore infinite. $C(L, \theta, \epsilon)$ has then infinite volume too, making the volume ε-flatness of θ infinite.

Sharpest direction. Through these transformations we can easily find, for any critical point which is a minimum with non-zero Hessian, an observationally equivalent parameter whose Hessian has an arbitrarily large spectral norm.

Theorem 4. For a one-hidden layer rectified neural network of the form
1703.04933#23
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
23
# 3.3 CUB Zero-shot Classification

In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset [31]. The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. [23] in preparing the data. We use

Table 3: Zero-shot classification accuracies on CUB-200.

| Model | Image Features | 50-way Acc. (0-shot) |
|---|---|---|
| ALE [1] | Fisher | 26.9% |
| SJE [2] | AlexNet | 40.3% |
| SAMPLE CLUSTERING [17] | AlexNet | 44.3% |
| SJE [2] | GoogLeNet | 50.1% |
| DS-SJE [23] | GoogLeNet | 50.4% |
| DA-SJE [23] | GoogLeNet | 50.9% |
| PROTO. NETS (OURS) | GoogLeNet | 54.6% |
1703.05175#23
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
24
Agents can only communicate in discrete symbols and have individual reference frames without a shared global positioning reference (see Appendix), so they cannot directly send the goal position vector. What makes the task possible is that we place goal locations r̄ on landmark locations, which are observed by all agents (in their individual reference frames). The strategy then is for agent i to unambiguously communicate a landmark reference to agent r. Importantly, we do not provide an explicit association between goal positions and landmark references. It is up to the agents to learn to associate a position vector with a set of landmark properties and communicate them with discrete symbols.

In the results that follow, agents do not observe other agents. This disallows capacity for non-verbal communication, necessitating the use of language. In Section we report what happens when agents are able to observe each other and capacity for non-verbal communication is available.

Despite training with a continuous relaxation of the categorical distribution, we observe very similar reward performance at test time. No communication is provided as a baseline (again, non-verbal communication is not possible). The no-communication strategy is for all agents to go towards the centroid of all landmarks.

| Condition | No Communication | Communication |
|---|---|---|
| Train Reward | -0.919 | -0.332 |
| Test Reward | -0.920 | -0.392 |

Table 1: Training and test physical reward for settings with and without communication.
1703.04908#24
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
24
Theorem 4. For a one-hidden layer rectified neural network of the form

This theorem can generalize to rectified neural networks in general with a similar proof. Given that every minimum has an infinitely large region (volume-wise) in which the error remains approximately constant, every minimum is infinitely flat according to volume ε-flatness. Since all minima are equally flat, it is not possible to use volume ε-flatness to gauge the generalization property of a minimum.

# 4.2 Hessian-based measures

The non-Euclidean geometry of the parameter space, coupled with the manifolds of observationally equal behavior of the model, allows one to move from one region of the parameter space to another, changing the curvature of the model without actually changing the function. This approach has been used with success to improve optimization, by moving from a region of high curvature to a region of well behaved

y = φrect(x · θ1) · θ2,
1703.04933#24
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
24
their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet [28] to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image². At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. We learned a simple linear mapping on top of both the 1,024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10^-4 and weight decay of 10^-5. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set.
1703.05175#24
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
25
| Condition | No Communication | Communication |
|---|---|---|
| Train Reward | -0.919 | -0.332 |
| Test Reward | -0.920 | -0.392 |

Table 1: Training and test physical reward for settings with and without communication.

[Figure 4: A collection of typical sequences of events in our environments shown over time. Each row is an independent trial. Large circles represent agents and small circles represent landmarks. Communication symbols are shown next to the agent making the utterance. The labels for abstract communication symbols are chosen purely for visualization and ... represents the silence symbol.]
1703.04908#25
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
25
$y = \phi_{rect}(x \cdot \theta_1) \cdot \theta_2,$

and critical point $\theta = (\theta_1, \theta_2)$ being a minimum for $L$, such that $(\nabla^2 L)(\theta) \neq 0$, we have $\forall M > 0, \exists \alpha > 0$ such that $\|(\nabla^2 L)(T_\alpha(\theta))\|_2 \geq M$, where $\|\cdot\|_2$ is the spectral norm of $(\nabla^2 L)(T_\alpha(\theta))$.

Proof. The trace of a symmetric matrix is the sum of its eigenvalues and a real symmetric matrix can be diagonalized in ℝ; therefore, if the Hessian is non-zero, there is one non-zero positive diagonal element. Without loss of generality, we will assume that this non-zero element, of value γ > 0, corresponds to an element of θ1. Therefore the Frobenius norm $\|(\nabla^2 L)(T_\alpha(\theta))\|_F$ of

$(\nabla^2 L)(\alpha\theta_1, \alpha^{-1}\theta_2) = \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix} (\nabla^2 L)(\theta_1, \theta_2) \begin{bmatrix} \alpha^{-1} I_{n_1} & 0 \\ 0 & \alpha I_{n_2} \end{bmatrix}$

is lower bounded by α⁻²γ.
1703.04933#25
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
25
Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE [1], SJE [2], and DS-SJE/DA-SJE [23]. We also compare to a recent clustering approach [17] which trains an SVM on a learned feature space obtained by fine-tuning AlexNet [14]. These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). # 4 Related Work
1703.05175#25
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
26
Syntactic Structure

We observe a compositional syntactic structure emerging in the stream of symbols uttered by agents. When trained on environments with only two agents, but multiple landmarks and actions, we observe symbols forming for each of the landmark colors and each of the action types. A typical conversation and physical agent configuration is shown in the first row of Figure 4 and is as follows:

Green Agent: GOTO, GREEN, ...
Blue Agent: GOTO, BLUE

The labels for abstract symbols are chosen by us purely for interpretability and visualization and carry no meaning for training. While there is recent work on interpreting continuous machine languages (Andreas, Dragan, and Klein 2017), the discrete nature and small size of our symbol vocabulary makes it possible to manually assign labels to the symbols. See results in the supplementary video for consistency of the vocabulary usage.

Physical environment considerations play a part in the syntactic structure. The action type verb GOTO is uttered first because actions take time to accomplish in the grounded environment. When the agent receives the GOTO symbol it starts moving toward the centroid of all the landmarks (to be equidistant from all of them) and then moves towards the specific landmark when it receives its color identity.
1703.04908#26
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
26
is lower bounded by α⁻²γ. Since all norms are equivalent in finite dimension, there exists a constant $r > 0$ such that $r\,\|A\|_F \leq \|A\|_2$ for all symmetric matrices $A$. So by picking $\alpha \leq \sqrt{r\gamma/M}$, we are guaranteed that $\|(\nabla^2 L)(T_\alpha(\theta))\|_2 \geq M$.

Any minimum with non-zero Hessian will be observationally equivalent to a minimum whose Hessian has an arbitrarily large spectral norm. Therefore, for any minimum in the loss function, if there exists another minimum that generalizes better, then there exists another minimum that generalizes better and is also sharper according to the spectral norm of the Hessian. The spectral norm of critical points' Hessians becomes as a result less relevant as a measure of potential generalization error. Moreover, since the spectral norm lower bounds the trace for a positive semi-definite symmetric matrix, the same conclusion can be drawn for the trace.

…0, $\exists \alpha > 0$ such that $(r - \min_{k \leq K}(n_k))$ eigenvalues are greater than $M$.
1703.04933#26
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
26
The literature on metric learning is vast [15, 5]; we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) [8] learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton [27] extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification [30] also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN [21] is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA [27] because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax
1703.05175#26
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
27
When the environment configuration can contain more than three agents, agents need to form symbols for referring to each other. Three new symbols form to refer to agent colors that are separate in meaning from landmark colors. The typical conversations are shown in the second and third rows of Figure 4.

Red Agent: GOTO, RED, BLUE-AGENT, ...
Green Agent: ..., ..., ..., ...
Blue Agent: RED-AGENT, GREEN, LOOKAT, ...

Agents may omit utterances when they are the subject of their private goal, in which case they already have access to that information and have no need to announce it. In this language, there is no set ordering to word utterances. Each symbol contributes to sentence meaning independently, similar to case marking grammatical strategies used in many human languages (Beuls and Steels 2013). The agents largely settle on using a consistent set of symbols for each meaning, due to vocabulary size penalties that discourage synonyms. We show the aggregate streams of communication utterances in Figure 5.

[Figure 5 (panels: before training, after training; axis: vocabulary symbol over time): Communication symbol streams emitted by agents over time before and after training, accumulated over 10 thousand test trials.]
1703.04908#27
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
27
…0, $\exists \alpha > 0$ such that $(r - \min_{k \leq K}(n_k))$ eigenvalues are greater than $M$.

Proof. For simplicity, we will note $\sqrt{M}$ the principal square root of a symmetric positive semi-definite matrix $M$. The eigenvalues of $\sqrt{M}$ are the square roots of the eigenvalues of $M$ and are its singular values. By definition, the singular values of $\sqrt{(\nabla^2 L)(\theta)}\, D_\alpha$ are the square roots of the eigenvalues of $D_\alpha (\nabla^2 L)(\theta) D_\alpha$. Without loss of generality, we consider $\min_{k \leq K}(n_k) = n_K$ and choose $\forall k < K, \alpha_k = \beta^{-1}$ and $\alpha_K = \beta^{K-1}$. Since $D_\alpha$ and $\sqrt{(\nabla^2 L)(\theta)}$ are symmetric positive semi-definite matrices, we can apply the multiplicative Horn inequalities (Klyachko, 2000) on the singular values of the product $\sqrt{(\nabla^2 L)(\theta)}\, D_\alpha$:

$\forall i \leq n, \; \forall j \leq (n - n_K), \quad \lambda_{i+j-n}\big((\nabla^2 L)(\theta) D_\alpha^2\big) \geq \lambda_i\big((\nabla^2 L)(\theta)\big)\, \beta^2.$
1703.04933#27
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
27
distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions.
1703.05175#27
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
28
[Figure 5 (panels: before training, after training; axis: vocabulary symbol over time): Communication symbol streams emitted by agents over time before and after training, accumulated over 10 thousand test trials.]

In simplified environment configurations, when there is only one landmark or one type of action to take, no symbols are formed to refer to those concepts because they are clear from context.

Symbol Vocabulary Usage

We find word activation counts to settle on the appropriate compositional word counts. Early during training, large vocabulary sizes are taken advantage of to explore the space of communication possibilities, before settling on the appropriate effective vocabulary sizes, as shown in Figure 6. In this figure, the 1x1x3 case refers to an environment with two agents and a single action, which requires only communicating one of three landmark identities. The 1x2x3 case contains two types of actions, and the 3x3x3 case contains three agents that require explicit referencing.

Generalization to Unseen Configurations

One of the advantages of decentralised execution policies is that trained agents can be placed into arbitrarily-sized groups and still function reasonably. When there are additional agents in the environment with the same color identity, all agents of the same color will perform the same task if they are being referred to. Additionally, when agents of a
1703.04908#28
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
28
$\forall i \leq n, \; \forall j \leq (n - n_K), \quad \lambda_{i+j-n}\big((\nabla^2 L)(\theta) D_\alpha^2\big) \geq \lambda_i\big((\nabla^2 L)(\theta)\big)\, \beta^2.$

Many directions. However, some notion of sharpness might take into account the entire eigenspectrum of the Hessian as opposed to its largest eigenvalue; for instance, Chaudhari et al. (2017) describe the notion of wide valleys, allowing the presence of very few large eigenvalues. We can generalize the transformations between observationally equivalent parameters to deeper neural networks with $K - 1$ hidden layers: for $\alpha_k > 0$, $T_\alpha : (\theta_k)_{k \leq K} \mapsto (\alpha_k \theta_k)_{k \leq K}$ with $\prod_{k=1}^K \alpha_k = 1$. If we define

By choosing $\beta \geq \sqrt{M / \lambda_r((\nabla^2 L)(\theta))}$, since we have $\forall i \leq r, \; \lambda_i((\nabla^2 L)(\theta)) \geq \lambda_r((\nabla^2 L)(\theta)) > 0$, we can conclude that

$\forall i \leq (r - n_K), \quad \lambda_i\big((\nabla^2 L)(\theta) D_\alpha^2\big) \geq \lambda_{i+n_K}\big((\nabla^2 L)(\theta)\big)\beta^2 \geq \lambda_r\big((\nabla^2 L)(\theta)\big)\beta^2 \geq M.$
1703.04933#28
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
28
Our approach is also similar to the nearest class mean approach [19], where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining; however, it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. (Footnote 2: Features downloaded from https://github.com/reedscot/cvpr2016.) In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences.
1703.05175#28
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
29
[Figure 6 (legend: 1x1x3, 1x2x3, 3x3x3; axes: active vocabulary size vs. training iteration): Word activation counts for different environment configurations over training iterations.]

particular color are asked to perform two conflicting tasks (such as being asked to go to two different landmarks by two different agents), they will perform the average of the conflicting goals assigned to them. Such cases occur despite never having been seen during training.

Due to the modularized observation architecture, the number of landmarks in the environment can also vary between training and execution. The agents perform sensible behaviors with different numbers of landmarks, despite not being trained in such environments. For example, when there are distractor landmarks of novel colors, the agents never go towards them. When there are multiple landmarks of the same color, the agent communicating the goal still utters the landmark color (because the goal is the position of one of the landmarks). However, the agents receiving the landmark color utterance go towards the centroid of all landmarks of the same color, showing a very sensible generalization strategy. An example of such a case is shown in the fourth row of Figure 4.
1703.04908#29
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
29
$D_\alpha = \begin{bmatrix} \alpha_1^{-1} I_{n_1} & 0 & \cdots & 0 \\ 0 & \alpha_2^{-1} I_{n_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_K^{-1} I_{n_K} \end{bmatrix},$

then the first and second derivatives at $T_\alpha(\theta)$ will be

$(\nabla L)(T_\alpha(\theta)) = (\nabla L)(\theta) D_\alpha, \qquad (\nabla^2 L)(T_\alpha(\theta)) = D_\alpha (\nabla^2 L)(\theta) D_\alpha.$

It means that there exists an observationally equivalent parameter with at least $(r - \min_{k \leq K}(n_k))$ arbitrarily large eigenvalues. Since Sagun et al. (2016) seems to suggest that rank deficiency in the Hessian is due to over-parametrization of the model, one could conjecture that $(r - \min_{k \leq K}(n_k))$ can be high for thin and deep neural networks, resulting in a majority of large eigenvalues. Therefore, it would still be possible to obtain an equivalent parameter with large Hessian eigenvalues, i.e. sharp in multiple directions. We will show to which extent one can increase several eigenvalues of $(\nabla^2 L)(T_\alpha(\theta))$ by varying $\alpha$.

Definition 6. For each $n \times n$ matrix $A$, we define the vector $\lambda(A)$ of sorted singular values of $A$ with their multiplicity: $\lambda_1(A) \geq \lambda_2(A) \geq \cdots \geq \lambda_n(A)$.

# 4.3 ε-sharpness
1703.04933#29
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
29
Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle [22]. The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode.
1703.05175#29
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
30
Non-verbal Communication and Other Strategies The presence of a physical environment also allows for al- ternative strategies aside from language use to accomplish goals. In this set of experiments we enable agents to observe other agents’ position and gaze location, and in turn dis- able communication capability via symbol utterances. When agents can observe each other’s gaze, a pointing strategy forms where the agent can communicate a landmark location by gazing in its direction, which the recipient correctly inter- prets and moves towards. When gazes of other agents cannot be observed, we see behavior of goal sender agent moving towards the location assigned to goal recipient agent (despite receiving no explicit reward for doing so), in order to guide the goal recipient to that location. Lastly, when neither visual not verbal observation is available on part of the goal recipi- ent, we observe the behavior of goal sender directly pushing the recipient to the target location. Examples of such strate- gies are shown in Figure 7 and supplementary video. It is important to us to build an environment with a diverse set of capabilities which language use develops alongside with.
1703.04908#30
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
30
# 4.3 ε-sharpness

We have redefined, for ε > 0, the ε-sharpness of Keskar et al. (2017) as follows:

If $A$ is symmetric positive semi-definite, $\lambda(A)$ is also the vector of its sorted eigenvalues.

Theorem 5. For a $(K - 1)$-hidden layer rectified neural network of the form

$y = \phi_{rect}(\phi_{rect}(\cdots \phi_{rect}(x \cdot \theta_1) \cdots) \cdot \theta_{K-1}) \cdot \theta_K,$

and critical point $\theta = (\theta_k)_{k \leq K}$ being a minimum for $L$, such that $(\nabla^2 L)(\theta)$ has rank $r = \mathrm{rank}((\nabla^2 L)(\theta))$, $\forall M >$ …

$\frac{\max_{\theta' \in B_2(\epsilon, \theta)} \big(L(\theta') - L(\theta)\big)}{1 + L(\theta)},$

where $B_2(\epsilon, \theta)$ is the Euclidean ball of radius ε centered on θ. This modification will demonstrate more clearly the issues of that metric as a measure of probable generalization. If we use K = 2 and (θ1, θ2) corresponding to a non-constant function, i.e. θ1 ≠ 0 and θ2 ≠ 0, then we can

parametrization of the model.
1703.04933#30
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
30
Prototypical networks are also related to the neural statistician [6] from the generative modeling literature, which extends the variational autoencoder [12, 24] to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification.
1703.05175#30
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
31
Figure 7: Examples of non-verbal communication strategies, such as pointing, guiding, and pushing. Conclusion We have presented a multi-agent environment and learning methods that brings about emergence of an abstract compo- sitional language from grounded experience. This abstract language is formed without any exposure to human language use. We investigated how variation in environment configu- ration and physical capabilities of agents affect the commu- nication strategies that arise. In the future, we would like experiment with larger num- ber of actions that necessitate more complex syntax and larger vocabularies. We would also like integrate exposure to human language to form communication strategies that are compatible with human use. Acknowledgements We thank OpenAI team for helpful comments and fruitful discussions. This work was funded in part by ONR PECASE N000141612723.
1703.04908#31
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
31
Sharp Minima Can Generalize For Deep Nets parametrization of the model. Figure 4: An illustration of how we exploit non-identifiability and its particular geometry to obtain sharper minima: although θ is far from the θ2 = 0 line, the observationally equivalent parameter θ′ is closer. The green and red circles centered on each of these points have the same radius. Best seen with colors. # 5.1 Model reparametrization One thing that needs to be considered when relating the flatness of minima to their probable generalization is that the choice of parametrization and its associated geometry are arbitrary. Since we are interested in finding a prediction function in a given family of functions, no reparametrization of this family should influence the generalization of any of these functions. Given a bijection g onto Θ, we can define a new transformed parameter η = g−1(θ). Since θ and η represent the same prediction function in different spaces, they should generalize equally well.
1703.04933#31
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
31
With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of [3] in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approaches of [23] also learn deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither [3] nor [23] uses episodic training, which helps speed up training and regularize the model. # 5 Conclusion
1703.05175#31
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
32
References [Andreas and Klein 2016] Andreas, J., and Klein, D. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 1173–1182. [Andreas, Dragan, and Klein 2017] Andreas, J.; Dragan, A.; and Klein, D. 2017. Translating neuralese. [Austin 1962] Austin, J. 1962. How to Do Things with Words. Oxford. [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. [Beuls and Steels 2013] Beuls, K., and Steels, L. 2013. Agent-based models of strategies for the emergence and evolution of grammatical agreement. PloS one 8(3):e58960. [Bratman et al. 2010] Bratman, J.; Shvartsman, M.; Lewis, R. L.; and Singh, S. 2010. A
1703.04908#32
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
32
Assuming θ1 ≠ 0, define α so that ‖αθ1‖2 ≤ ε (for instance α = ε/‖θ1‖2). We will now consider the observationally equivalent parameter Tα(θ1, θ2) = (αθ1, α−1θ2). Given that ‖αθ1‖2 ≤ ε, we have that (0, α−1θ2) ∈ B2(ε, Tα(θ)), making the maximum loss in this neighborhood at least as high as the loss of the best constant-valued function, incurring relatively high sharpness. Figure 4 provides a visualization of the proof. Let us call Lη = L ◦ g the loss function with respect to the new parameter η. We generalize the derivation of Subsection 4.2:
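To make the non-identifiability exploited above concrete, the sketch below (our illustration with assumed toy shapes, not code from the paper) checks numerically that rescaling the two layers of a one-hidden-layer rectifier network by α and α−1 leaves the predictions unchanged while drastically changing the parameter norms on which sharpness-style measures depend.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5, 4)            # a few input points
theta1 = rng.randn(4, 8)       # first-layer weights
theta2 = rng.randn(8, 1)       # second-layer weights

def predict(t1, t2):
    # One-hidden-layer rectifier network: y = relu(x t1) t2.
    return np.maximum(x @ t1, 0.0) @ t2

alpha = 100.0
t1_a, t2_a = alpha * theta1, theta2 / alpha   # observationally equivalent T_alpha(theta)

# Same predictions, since the rectifier is positively homogeneous...
print(np.allclose(predict(theta1, theta2), predict(t1_a, t2_a)))  # True

# ...but very different parameter geometry: norm-based neighbourhoods around the
# two equivalent parameters contain very different sets of functions.
print(np.linalg.norm(theta1), np.linalg.norm(t1_a))
print(np.linalg.norm(theta2), np.linalg.norm(t2_a))
```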
1703.04933#32
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
32
# 5 Conclusion We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks make them a promising approach for few-shot learning. # Acknowledgements
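The connection to Bregman divergences can be checked directly for the squared Euclidean case: the class mean is exactly the point minimizing the total squared distance to the class's embedded examples. A small numerical check (illustrative only, with made-up data):

```python
import numpy as np

rng = np.random.RandomState(0)
z = rng.randn(20, 5)                         # embedded support points of one class
total_sq_dist = lambda c: ((z - c) ** 2).sum()

mean = z.mean(axis=0)
# The mean achieves a strictly lower total squared Euclidean distance than any
# perturbed candidate, so it is the natural prototype for this divergence.
perturbed = [mean + 0.1 * rng.randn(5) for _ in range(1000)]
print(all(total_sq_dist(mean) < total_sq_dist(c) for c in perturbed))  # True
```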
1703.05175#32
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04933
33
Let us call Lη = L ◦ g the loss function with respect to the new parameter η. We generalize the derivation of Subsection 4.2: Lη(η) = L(g(η)) ⇒ (∇Lη)(η) = (∇L)(g(η)) (∇g)(η) ⇒ (∇²Lη)(η) = (∇g)(η)⊤ (∇²L)(g(η)) (∇g)(η) + (∇L)(g(η)) (∇²g)(η). For rectified neural networks, every minimum is observationally equivalent to a minimum that generalizes as well but with high ε-sharpness. This also applies when using the full-space ε-sharpness used by Keskar et al. (2017). We can prove this similarly using the equivalence of norms in finite-dimensional vector spaces and the fact that for c > 0 and ε > 0, ε ≤ ε(c + 1) (see Keskar et al. (2017)). We have not been able to show a similar problem with the random subspace ε-sharpness used by Keskar et al. (2017), i.e. a restriction of the maximization to a random subspace, which could relate to the notion of wide valleys described by Chaudhari et al. (2017).
1703.04933#33
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
33
# Acknowledgements We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. # References [1] Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for attribute-based classification. In Computer Vision and Pattern Recognition, pages 819–826, 2013. [2] Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In Computer Vision and Pattern Recognition, pages 2927–2936, 2015. [3] Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In International Conference on Computer Vision, pages 4247–4255, 2015.
1703.05175#33
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
34
[Dhingra et al. 2016] Dhingra, B.; Li, L.; Li, X.; Gao, J.; Chen, Y.-N.; Ahmed, F.; and Deng, L. 2016. End-to-End Reinforcement Learning of Dialogue Agents for Information Access. arXiv:1609.00777 [cs]. arXiv:1609.00777. [Dosovitskiy and Koltun 2016] Dosovitskiy, A., and Koltun, V. 2016. Learning to act by predicting the future. arXiv preprint arXiv:1611.01779. [Durrett, Berg-Kirkpatrick, and Klein 2016] Durrett, G.; Berg-Kirkpatrick, T.; and Klein, D. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887. [Foerster et al. 2016] Foerster, J. N.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. [Frank and Goodman 2012] Frank, M. C., and Goodman, N. D.
1703.04908#34
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.05175
34
[4] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with bregman divergences. Journal of machine learning research, 6(Oct):1705–1749, 2005. [5] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013. [6] Harrison Edwards and Amos Storkey. Towards a neural statistician. International Conference on Learning Representations, 2017. [7] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In International Conference on Computer Vision, pages 2584–2591, 2013. [8] Jacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pages 513–520, 2004. [9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
1703.05175#34
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
35
S. 2016. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. [Frank and Goodman 2012] Frank, M. C., and Goodman, N. D. 2012. Predicting Pragmatic Reasoning in Language Games. Science 336(6084):998. [Gauthier and Mordatch 2016] Gauthier, J., and Mordatch, I. 2016. A paradigm for situated and goal-driven language learning. CoRR abs/1610.03585. [Golland, Liang, and Klein 2010] Golland, D.; Liang, P.; and Klein, D. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, 410–419. Stroudsburg, PA, USA: Association for Computational Linguistics. [Grice 1975] Grice, H. P. 1975. Logic and conversation. In Cole, P., and Morgan, J. L., eds., Syntax and Semantics: Vol. 3: Speech Acts, 41–58. San Diego, CA: Academic Press. [Jang, Gu, and Poole 2016] Jang, E.; Gu, S.;
1703.04908#35
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
35
At a differentiable critical point, we have by definition (∇L)(g(η)) = 0, therefore the transformed Hessian at a critical point becomes (∇²Lη)(η) = (∇g)(η)⊤ (∇²L)(g(η)) (∇g)(η). This means that by reparametrizing the problem we can modify to a large extent the geometry of the loss function so as to have sharp minima of L in θ correspond to flat minima of Lη in η = g−1(θ) and conversely. Figure 5 illustrates that point in one dimension. Several practical (Dinh et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016; Dinh et al., 2016) and theoretical works (Hyvärinen & Pajunen, 1999) show how powerful bijections can be. We can also note that the formula for the transformed Hessian at a critical point also applies if g is not invertible; g would just need to be surjective over Θ in order to cover exactly the same family of prediction functions {fθ, θ ∈ Θ} = {fg(η), η ∈ g−1(Θ)}. # 5 Allowing reparametrizations
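A one-dimensional example can make the transformed-Hessian formula above concrete; the specific functions below are our own illustration and are not taken from the paper.

```latex
% A sharp minimum made arbitrarily flat or sharp by a linear reparametrization:
\[
L(\theta) = \theta^2, \qquad \theta = g(\eta) = c\,\eta
\;\Longrightarrow\;
L_\eta(\eta) = c^2 \eta^2, \qquad
(\nabla^2 L_\eta)(0) = g'(0)^2\,(\nabla^2 L)(0) = 2c^2 ,
\]
% so choosing c large (or small) makes the very same minimum look arbitrarily
% sharp (or flat).  A nonlinear bijection can go further:
\[
g(\eta) = \eta^3
\;\Longrightarrow\;
L_\eta(\eta) = \eta^6, \qquad
(\nabla^2 L_\eta)(0) = g'(0)^2\,(\nabla^2 L)(0) = 0 ,
\]
% i.e. the minimum becomes exactly flat to second order while the prediction
% function it represents is unchanged.
```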
1703.04933#35
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
35
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [12] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [13] Gregory Koch. Siamese neural networks for one-shot image recognition. Master’s thesis, University of Toronto, 2015. [14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. [15] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012.
1703.05175#35
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
36
Speech Acts, 41–58. San Diego, CA: Academic Press. [Jang, Gu, and Poole 2016] Jang, E.; Gu, S.; and Poole, B. 2016. Categorical Reparameterization with Gumbel-Softmax. ArXiv e-prints. [Kirby, Griffiths, and Smith 2014] Kirby, S.; Griffiths, T.; and Smith, K. 2014. Iterated learning and the evolution of language. Current opinion in neurobiology 28:108–114. [Kirby 1999] Kirby, S. 1999. Syntax out of Learning: the cultural evolution of structured communication in a population of induction algorithms. [Kirby 2001] Kirby, S. 2001. Spontaneous evolution of linguistic structure-an iterated learning model of the emergence of regularity and irregularity. IEEE Transactions on Evolutionary Computation 5(2):102–110. [Lake et al. 2016] Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2016. Building machines that learn and think like people. CoRR
1703.04908#36
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
36
# 5 Allowing reparametrizations {fθ, θ ∈ Θ} = {fg(η), η ∈ g−1(Θ)}. In the previous Section 4 we explored the case of a fixed parametrization, that of deep rectifier models. In this section we demonstrate a simple observation. If we are allowed to change the parametrization of some function f, we can obtain arbitrarily different geometries without affecting how the function evaluates on unseen data. The same holds for reparametrization of the input space. The implication is that the correlation between the geometry of the parameter space (and hence the error surface) and the behavior of a given function is meaningless if not preconditioned on the specific We show in Appendix A bijections that allow us to perturb the relative flatness between a finite number of minima.
1703.04933#36
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
36
[15] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012. [16] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011. [17] Renjie Liao, Alexander Schwing, Richard Zemel, and Raquel Urtasun. Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016. [18] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008. [19] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624–2637, 2013. [20] Erik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In CVPR, volume 1, pages 464–471, 2000.
1703.05175#36
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
37
T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2016. Building machines that learn and think like people. CoRR abs/1604.00289. [Lazaridou, Peysakhovich, and Baroni 2016] Lazaridou, A.; Peysakhovich, A.; and Baroni, M. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182. [Lazaridou, Pham, and Baroni 2016] Lazaridou, A.; Pham, N. T.; and Baroni, M. 2016. Towards Multi-Agent Communication-Based Language Learning. arXiv:1605.07133. [Littman 1994] Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the eleventh international conference on machine learning, volume 157, 157–163. [Maddison, Mnih, and Teh 2016] Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR abs/1611.00712.
1703.04908#37
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
37
We show in Appendix A bijections that allow us to perturb the relative flatness between a finite number of minima. Instances of commonly used reparametrizations are batch normalization (Ioffe & Szegedy, 2015), or the virtual batch normalization variant (Salimans et al., 2016), and weight normalization (Badrinarayanan et al., 2015; Salimans & Kingma, 2016; Arpit et al., 2016). Im et al. (2016) have plotted how the loss function landscape is affected by batch normalization. However, we will focus on the weight normalization reparametrization as the analysis is simpler. • every minimum has infinite volume ε-sharpness;
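A minimal sketch of the weight normalization reparametrization mentioned above, assuming a single linear unit with made-up shapes (not the paper's code): each weight vector w is rewritten as w = g · v / ‖v‖2, which changes the parameter-space geometry, in particular introducing a scale invariance in v, without changing the set of representable functions.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5, 4)          # a few inputs
v = rng.randn(4)             # direction parameter
g = 2.5                      # scale parameter

def weight_norm(g, v):
    # Weight-normalized parameterization: w = g * v / ||v||_2.
    return g * v / np.linalg.norm(v)

# The represented linear map depends on (g, v) only through w:
w = weight_norm(g, v)
print(np.allclose(x @ w, x @ weight_norm(g, 7.3 * v)))  # True: the scale of v is irrelevant

# Hence the loss is constant along the ray {(g, c v) : c > 0}: an entire direction
# of parameter space with exactly zero curvature, whatever the function does.
```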
1703.04933#37
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
37
[21] Renqiang Min, David A Stanley, Zineng Yuan, Anthony Bonner, and Zhaolei Zhang. A deep non-linear feature mapping for large-margin knn classification. In IEEE International Conference on Data Mining, pages 357–366, 2009. [22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations, 2017. [23] Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of fine-grained visual descriptions. arXiv preprint arXiv:1605.05395, 2016. [24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. [25] Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. International Conference on Learning Representations, 2016.
1703.05175#37
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
38
Teh, Y. W. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR abs/1611.00712. [Ng 2011] Ng, A. 2011. Sparse autoencoder. CS294A Lecture notes 72(2011):1–19. [Nowak, Plotkin, and Jansen 2000] Nowak, M. A.; Plotkin, J. B.; and Jansen, V. A. A. 2000. The evolution of syntactic communication. Nature 404(6777):495–498. [Silver et al. 2016] Silver, D.; van Hasselt, H.; Hessel, M.; Schaul, T.; Guez, A.; Harley, T.; Dulac-Arnold, G.; Reichert, D.; Rabinowitz, N.; Barreto, A.; et al. 2016. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810. [Socher et al. 2013] Socher, R.; Perelygin, A.; Wu, J. Y.; Chuang, J.; Manning, C. D.; Ng, A. Y.; Potts, C.; et al. 2013. Recursive deep models for
1703.04908#38
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
38
• every minimum is observationally equivalent to an infinitely sharp minimum and to an infinitely flat minimum when considering nonzero eigenvalues of the Hessian; • every minimum is observationally equivalent to a minimum with arbitrarily low full-space and random subspace ε-sharpness and a minimum with high full-space ε-sharpness. (a) Loss function with default parametrization. (b) Loss function with reparametrization.
1703.04933#38
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
38
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. [27] Ruslan Salakhutdinov and Geoffrey E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, pages 412–419, 2007. [28] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. [29] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
1703.05175#38
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
39
J. Y.; Chuang, J.; Manning, C. D.; Ng, A. Y.; Potts, C.; et al. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, 1642. Citeseer. [Steels 1995] Steels, L. 1995. A self-organizing spatial vocabulary. Artif. Life 2(3):319–332. [Steels 2005] Steels, L. 2005. What triggers the emergence of grammar? In AISB’05: Proceedings of the Second International Symposium on the Emergence and Evolution of Linguistic Communication (EELC’05), 143–150. University of Hertfordshire. [Sukhbaatar, Szlam, and Fergus 2016] Sukhbaatar, S.; Szlam, A.; and Fergus, R. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2244–2252. [Sutskever,
1703.04908#39
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
39
This further weakens the link between the flatness of a minimum and the generalization property of the associated prediction function when a specific parameter space has not been specified and explained beforehand. # Input representation As we conclude that the notion of flatness of a minimum of the loss function is by itself not sufficient to determine its generalization ability in the general case, we can choose to focus instead on properties of the prediction function. Motivated by some work on adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015) for deep neural networks, one could assess its generalization property by analyzing the gradient of the prediction function on examples. Intuitively, if the gradient is small on typical points from the distribution or has a small Lipschitz constant, then a small change in the input should not incur a large change in the prediction. (c) Loss function with another reparametrization.
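As a rough illustration of this suggestion (not from the paper; the toy network and shapes are assumptions), one can compute the input gradient of a small rectifier network analytically and report its norm on sample points as a local sensitivity measure:

```python
import numpy as np

rng = np.random.RandomState(0)
W1 = rng.randn(4, 16)
W2 = rng.randn(16, 1)

def predict(x):
    # f(x) = relu(x W1) W2 for a single input x of shape (4,).
    return np.maximum(x @ W1, 0.0) @ W2

def input_gradient(x):
    # Exact gradient of f with respect to x: W1 diag(1[x W1 > 0]) W2.
    mask = (x @ W1 > 0.0).astype(float)
    return W1 @ (mask[:, None] * W2)

# Average input-gradient norm over sample points: a proxy for how sensitive the
# prediction function is to small input perturbations around the data.
xs = rng.randn(100, 4)
grad_norms = [np.linalg.norm(input_gradient(x)) for x in xs]
print(np.mean(grad_norms))
```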
1703.04933#39
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
39
[30] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473–1480, 2005. [31] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010. # A Additional Omniglot Results In Table 4 we show test classification accuracy for prototypical networks using Euclidean distance trained with 5, 20, and 60 classes per episode. Table 4: Additional classification accuracy results for prototypical networks on Omniglot. Configuration of training episodes is indicated by number of classes per episode (“way”), number of support points per class (“shot”) and number of query points per class (“query”). Classification accuracy was averaged over 1,000 randomly generated episodes from the test set.
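For reference, the way/shot/query configuration described in the table can be produced by a simple episode sampler such as the sketch below (illustrative; the dataset layout and names are assumptions, not taken from the paper):

```python
import numpy as np

def sample_episode(data_by_class, n_way, n_shot, n_query, rng):
    """Sample one few-shot episode from a {class_id: array of examples} dataset."""
    classes = rng.choice(sorted(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        examples = data_by_class[c]
        idx = rng.choice(len(examples), size=n_shot + n_query, replace=False)
        support += [(examples[i], episode_label) for i in idx[:n_shot]]
        query += [(examples[i], episode_label) for i in idx[n_shot:]]
    return support, query

# Toy dataset: 50 classes with 20 random "examples" each.
rng = np.random.RandomState(0)
data = {c: rng.randn(20, 8) for c in range(50)}
support, query = sample_episode(data, n_way=20, n_shot=5, n_query=5, rng=rng)
print(len(support), len(query))  # 100 100
```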
1703.05175#39
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
40
29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2244–2252. [Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27. Curran Associates, Inc. 3104–3112. [Teh 2011] Teh, Y. W. 2011. Dirichlet process. In Encyclopedia of machine learning. Springer. 280–287. [Ullman, Xu, and Goodman 2016] Ullman, T.; Xu, Y.; and Goodman, N. 2016. The pragmatics of spatial language. In Proceedings of the Cognitive Science Society. [Vogel et al. 2014] Vogel, A.; Gómez Emilsson, A.; Frank, M. C.; Jurafsky, D.; and Potts, C. 2014. Learning to reason pragmatically with cognitive limitations. In Proceedings of the
1703.04908#40
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]