id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1703.05175#41 | Prototypical Networks for Few-shot Learning | (number of classes per episode) for prototypical networks trained on miniImageNet. Each training episode contains 15 query points per class. Error bars indicate 95% confidence intervals as computed over 600 test episodes. Table 5: Comparison of matching and prototypical networks on miniImageNet under cosine vs. Euclidean distance, 5-way vs. 20-way, and 1-shot vs. 5-shot. All experiments use a shared encoder for both support and query points with embedding dimension 1,600 (architecture and training details are provided in Section 3.2 of the main paper). | 1703.05175#40 | 1703.05175#42 | 1703.05175 | [
"1605.05395"
]
|
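The chunk above reports accuracies with 95% confidence intervals computed over 600 test episodes. The evaluation code is not part of this dump; the following is a minimal sketch of how such intervals are conventionally computed, assuming a normal approximation over independent episodes (all names are illustrative):

```python
import numpy as np

def mean_and_ci95(episode_accuracies):
    """Mean accuracy and 95% confidence half-width over test episodes."""
    acc = np.asarray(episode_accuracies, dtype=np.float64)
    # 1.96 * standard error of the mean (normal approximation).
    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
    return acc.mean(), half_width

# Example with 600 simulated episode accuracies:
rng = np.random.default_rng(0)
episodes = rng.normal(loc=0.68, scale=0.08, size=600)
mean, hw = mean_and_ci95(episodes)
print(f"{100 * mean:.2f} +/- {100 * hw:.2f}%")
```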
1703.05175#42 | Prototypical Networks for Few-shot Learning | Classification accuracy is averaged over 600 randomly generated episodes from the test set and 95% confidence intervals are shown.

Model | Dist. | Train Shot | Train Query | Train Way | 1-shot 5-way Acc. | 5-shot 5-way Acc.
---|---|---|---|---|---|---
Matching Nets / ProtoNets | Cosine | 1 | 15 | 5 | 38.82 ± 0.69% | 44.54 ± 0.56%
Matching Nets / ProtoNets | Euclid. | 1 | 15 | 5 | 46.61 ± 0.78% | 59.84 ± 0.64%
Matching Nets / ProtoNets | Cosine | 1 | 15 | 20 | 43.63 ± 0.76% | 51.34 ± 0.64%
Matching Nets / ProtoNets | Euclid. | 1 | 15 | 20 | 49.17 ± 0.83% | 62.66 ± 0.71%
Matching Nets | Cosine | 5 | 15 | 5 | 46.43 ± 0.74% | 54.60 ± 0.62%
Matching Nets | Euclid. | 5 | 15 | 5 | 46.43 ± 0.78% | 60.97 ± 0.67%
Matching Nets | Cosine | 5 | 15 | 20 | 46.46 ± 0.79% | 55.77 ± 0.69%
Matching Nets | Euclid. | 5 | 15 | 20 | 47.99 ± 0.79% | 63.66 ± 0.68%
ProtoNets | Cosine | 5 | 15 | 5 | 42.48 ± 0.74% | 51.23 ± 0.63%
ProtoNets | Euclid. | 5 | 15 | 5 | 44.53 ± 0.76% | 65.77 ± 0.70%
ProtoNets | Cosine | 5 | 15 | 20 | 42.45 ± 0.73% | 51.48 ± 0.70%
ProtoNets | Euclid. | 5 | 15 | 20 | 43.57 ± 0.82% | 68.20 ± 0.66%

| 1703.05175#41 | 1703.05175#43 | 1703.05175 | [
"1605.05395"
]
|
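Table 5 above contrasts cosine and Euclidean distance for matching/prototypical networks. As a rough illustration (not the authors' code; the embedding network is omitted and all names are hypothetical), the two scoring rules differ only in the metric applied between query embeddings and class prototypes:

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_way):
    # Mean embedding per class: (n_way, d).
    return np.stack([support_emb[support_labels == k].mean(axis=0)
                     for k in range(n_way)])

def neg_sq_euclidean_scores(query_emb, protos):
    # (n_query, n_way); larger means closer.
    diff = query_emb[:, None, :] - protos[None, :, :]
    return -np.sum(diff ** 2, axis=-1)

def cosine_scores(query_emb, protos):
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return q @ p.T

# Prediction: scores(...).argmax(axis=1) under either metric.
```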
1703.05175#43 | Prototypical Networks for Few-shot Learning | Table 6: Effect of training "way" (number of classes per training episode) for prototypical networks with Euclidean distance on miniImageNet. The number of query points per class in training episodes was fixed at 15. Classification accuracy is averaged over 600 randomly generated episodes from the test set and 95% confidence intervals are shown.

Model | Dist. | Train Shot | Train Query | Train Way | 1-shot 5-way Acc. | 5-shot 5-way Acc.
---|---|---|---|---|---|---
ProtoNets | Euclid. | 1 | 15 | 5 | 46.14 ± 0.77% | 61.36 ± 0.68%
ProtoNets | Euclid. | 1 | 15 | 10 | 48.27 ± 0.79% | 64.18 ± 0.68%
ProtoNets | Euclid. | 1 | 15 | 15 | 48.60 ± 0.76% | 64.62 ± 0.66%
ProtoNets | Euclid. | 1 | 15 | 20 | 48.57 ± 0.79% | 65.04 ± 0.69%
ProtoNets | Euclid. | 1 | 15 | 25 | 48.51 ± 0.83% | 64.63 ± 0.69%
ProtoNets | Euclid. | 1 | 15 | 30 | 49.42 ± 0.78% | 65.38 ± 0.68%

| 1703.05175#42 | 1703.05175#44 | 1703.05175 | [
"1605.05395"
]
|
1703.05175#44 | Prototypical Networks for Few-shot Learning |
Model | Dist. | Train Shot | Train Query | Train Way | 1-shot 5-way Acc. | 5-shot 5-way Acc.
---|---|---|---|---|---|---
ProtoNets | Euclid. | 5 | 15 | 5 | 44.53 ± 0.76% | 65.77 ± 0.70%
ProtoNets | Euclid. | 5 | 15 | 10 | 45.09 ± 0.79% | 67.49 ± 0.70%
ProtoNets | Euclid. | 5 | 15 | 15 | 44.07 ± 0.80% | 68.03 ± 0.66%
ProtoNets | Euclid. | 5 | 15 | 20 | 43.57 ± 0.82% | 68.20 ± 0.66%
ProtoNets | Euclid. | 5 | 15 | 25 | 43.32 ± 0.79% | 67.66 ± 0.68%
ProtoNets | Euclid. | 5 | 15 | 30 | 41.38 ± 0.81% | 66.79 ± 0.66%
| 1703.05175#43 | 1703.05175 | [
"1605.05395"
]
|
|
1703.04908#0 | Emergence of Grounded Compositional Language in Multi-Agent Populations | arXiv:1703.04908v2 [cs.AI] 24 Jul 2018 # Emergence of Grounded Compositional Language in Multi-Agent Populations # Igor Mordatch OpenAI San Francisco, California, USA # Pieter Abbeel UC Berkeley Berkeley, California, USA # Abstract By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable. | 1703.04908#1 | 1703.04908 | [
"1603.08887"
]
|
|
1703.04908#1 | Emergence of Grounded Compositional Language in Multi-Agent Populations | # Introduction Development of agents that are capable of communication and flexible language use is one of the long-standing challenges facing the field of artificial intelligence. Agents need to develop communication if they are to successfully coordinate as a collective. Furthermore, agents will need some language capacity if they are to interact and productively collaborate with humans or make decisions that are interpretable by humans. If such a capacity were to arise artificially, it could also offer important insights into questions surrounding the development of human language and cognition. But if we wish to arrive at the formation of communication from first principles, it must form out of necessity. The approaches that learn to plausibly imitate language from examples of human language, while tremendously useful, do not learn why language exists. Such supervised approaches can capture structural and statistical relationships in language, but they do not capture its functional aspects, or that language happens for purposes of successful coordination between humans. Evaluating the success of such imitation-based approaches on the basis of linguistic plausibility also presents challenges of ambiguity and the requirement of human involvement. Recently there has been a surge of renewed interest in the pragmatic aspects of language use, and it is also the focus of our work. We adopt the view of (Gauthier and Mordatch 2016) that an agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment. This leads to evaluation criteria that can be measured precisely and without human involvement. In this paper, we propose a physically-situated multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. The agents utter communication symbols alongside performing actions in the physical environment to cooperatively accomplish goals defined by a joint reward function shared between all agents. There are no pre-designed meanings associated with the uttered symbols - the agents form concepts relevant to the task and environment and assign arbitrary symbols to communicate them. | 1703.04908#0 | 1703.04908#2 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#2 | Emergence of Grounded Compositional Language in Multi-Agent Populations | There are similarly no explicit language usage goals, such as making correct utterances, and no explicit roles agents are assigned, such as speaker or listener, or explicit turn-taking dialogue structure as in traditional language games. There may be an arbitrary number of agents in a population communicating at the same time, and part of the difficulty is learning to refer to specific agents. A population of agents is situated as moving particles in a continuous two-dimensional environment, possessing properties such as color and shape. The goals of the population are based on non-linguistic objectives, such as moving to a location, and language arises from the need to coordinate on those goals. | 1703.04908#1 | 1703.04908#3 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#3 | Emergence of Grounded Compositional Language in Multi-Agent Populations | We do not rely on any supervision such as human demonstrations or text corpora. Similar to recent work, we formulate the discovery of the action and communication protocols for our agents jointly as a reinforcement learning problem. Agents perform physical actions and communication utterances according to an identical policy that is instantiated for all agents and fully determines the action and communication protocols. The policies are based on neural network models with an architecture composed of dynamically-instantiated recurrent modules. This allows decentralized execution with a variable number of agents and communication streams. The joint dynamics of all agents and the environment, including discrete communication streams, are fully differentiable, and the agents' policy is trained end-to-end with backpropagation through time. The languages formed exhibit interpretable compositional structure that in general assigns symbols to separately refer to environment landmarks, action verbs, and agents. However, environment variation leads to a number of specialized languages, omitting words that are clear from context. For example, when there is only one type of action to take or one landmark to go to, words for those concepts do not form in the language. Considerations of the physical environment also have an impact on language structure. For example, a symbol denoting the go action is typically uttered first because the listener can start moving before even hearing the destination. | 1703.04908#2 | 1703.04908#4 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#4 | Emergence of Grounded Compositional Language in Multi-Agent Populations | This effect only arises when linguistic and physical behaviors are treated jointly and not in isolation. The presence of a physical environment also allows for alternative strategies aside from language use to accomplish goals. A visual sensory modality provides an alternative medium for communication, and we observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable. When even non-verbal communication is unavailable, strategies such as direct pushing may be employed to succeed at the task. It is important to us to build an environment with a diverse set of capabilities alongside which language use develops. By compositionality we mean the combination of multiple words to create meaning, as opposed to holistic languages that have a unique word for every possible meaning (Kirby 2001). Our work offers insights into why such compositional structure emerges. In part, we find it to emerge when we explicitly encourage active vocabulary sizes to be small through a soft penalty. This is consistent with analysis in evolutionary linguistics (Nowak, Plotkin, and Jansen 2000) that finds composition to emerge only when the number of concepts to be expressed becomes greater than a factor of the agent's symbol vocabulary capacity. Another important component leading to composition is training on a variety of tasks and environment configurations simultaneously. Training on cases where most information is clear from context (such as when there is only one landmark) leads to formation of atomic concepts that are reused compositionally in more complicated cases. | 1703.04908#3 | 1703.04908#5 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#5 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Related Work Recent years have seen substantial progress in practical natural language applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014), sentiment analysis (Socher et al. 2013), document summarization (Durrett, Berg-Kirkpatrick, and Klein 2016), and domain-specific dialogue (Dhingra et al. 2016). Much of this success is a result of intelligently designed statistical models trained on large static datasets. However, such approaches do not produce an understanding of language that can lead to productive cooperation with humans. An interest in the pragmatic view of language understanding has been longstanding (Austin 1962; Grice 1975) and has recently been argued for in (Gauthier and Mordatch 2016; Lake et al. 2016; Lazaridou, Pham, and Baroni 2016). Pragmatic language use has been proposed in the context of two-player reference games (Golland, Liang, and Klein 2010; Vogel et al. 2014; Andreas and Klein 2016), focusing on the task of identifying object references through a learned language. (Winograd 1973; Wang, Liang, and Manning 2016) ground language in a physical environment, focusing on language interaction with humans for completion of tasks in the physical environment. In such a pragmatic setting, language use for communication of spatial concepts has received particular attention in (Steels 1995; Ullman, Xu, and Goodman 2016). Aside from producing agents that can interact with humans through language, research in pragmatic language understanding can be informative to the fields of linguistics and cognitive science. Of particular interest in these fields has been the question of how syntax and compositional structure in language emerged, and why it is largely unique to human languages (Kirby 1999; Nowak, Plotkin, and Jansen 2000; Steels 2005). Models such as Rational Speech Acts (Frank and Goodman 2012) and Iterated Learning (Kirby, Griffiths, and Smith 2014) | 1703.04908#4 | 1703.04908#6 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#6 | Emergence of Grounded Compositional Language in Multi-Agent Populations | have been popular in cognitive science and evolutionary linguistics, but such approaches tend to rely on pre-specified procedures or models that limit their generality. The recent work that is most similar to ours is the application of reinforcement learning approaches towards the purposes of learning a communication protocol, as exemplified by (Bratman et al. 2010; Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016; Lazaridou, Peysakhovich, and Baroni 2016). | 1703.04908#5 | 1703.04908#7 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#7 | Emergence of Grounded Compositional Language in Multi-Agent Populations | # Problem Formulation The setting we are considering is a cooperative partially observable Markov game (Littman 1994), which is a multi-agent extension of a Markov decision process. A Markov game for $N$ agents is defined by a set of states $\mathcal{S}$ describing the possible configurations of all agents, a set of actions $\mathcal{A}_1, \ldots, \mathcal{A}_N$ and a set of observations $\mathcal{O}_1, \ldots, \mathcal{O}_N$ for each agent. Initial states are determined by a distribution $\rho : \mathcal{S} \mapsto [0, 1]$. State transitions are determined by a function $\mathcal{T} : \mathcal{S} \times \mathcal{A}_1 \times \ldots \times \mathcal{A}_N \mapsto \mathcal{S}$. For each agent $i$, rewards are given by a function $r_i : \mathcal{S} \times \mathcal{A}_i \mapsto \mathbb{R}$, and observations are given by a function $o_i : \mathcal{S} \mapsto \mathcal{O}_i$. To choose actions, each agent $i$ uses a stochastic policy $\pi_i : \mathcal{O}_i \times \mathcal{A}_i \mapsto [0, 1]$. In this work, we assume all agents have identical action and observation spaces, and all agents act according to the same policy $\pi$ | 1703.04908#6 | 1703.04908#8 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#8 | Emergence of Grounded Compositional Language in Multi-Agent Populations | and receive a shared reward. We consider a finite horizon setting, with episode length $T$. In a cooperative setting, the problem is to find a policy that maximizes the expected shared return for all agents, which can be solved as a joint optimization problem: $$\max_{\pi} R(\pi), \quad \text{where} \quad R(\pi) = \mathbb{E}\left[ \sum_{t=0}^{T} \sum_{i=0}^{N} r_i(s^t, a_i^t) \right]$$ Figure 1: An example of environments we consider. # Grounded Communication Environment As argued in the introduction, grounding multi-agent communication in a physical environment is crucial for interesting communication behaviors to emerge. In this work, we consider a physically-simulated two-dimensional environment in continuous space and discrete time. This environment consists of $N$ agents and $M$ landmarks. Both agent and landmark entities inhabit a physical location in space $p$ and possess descriptive physical characteristics, such as color and shape type. In addition, agents can direct their gaze to a location $v$. Agents can act to move in the environment and direct their gaze, but may also be affected by physical interactions with other agents. We denote the physical state of an entity (including descriptive characteristics) by $x$ and describe its precise details and transition dynamics in the Appendix. | 1703.04908#7 | 1703.04908#9 | 1703.04908 | [
"1603.08887"
]
|
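To make the cooperative objective above concrete, here is a small illustrative sketch (assumed names, not from the paper) of the shared-return computation that the policy maximizes:

```python
import numpy as np

def shared_return(rewards: np.ndarray) -> float:
    """One Monte Carlo sample of R(pi).

    rewards[t, i] holds r_i(s^t, a_i^t) for timestep t and agent i;
    the cooperative objective sums over both timesteps and agents.
    """
    return float(rewards.sum())

# Averaging shared_return over sampled episodes estimates R(pi),
# the quantity maximized with respect to the policy parameters.
```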
1703.04908#9 | Emergence of Grounded Compositional Language in Multi-Agent Populations | In addition to performing physical actions, agents utter verbal communication symbols $c$ at every timestep. These utterances are discrete elements of an abstract symbol vocabulary $C$ of size $K$. We do not assign any significance or meaning to these symbols. They are treated as abstract categorical variables that are emitted by each agent and observed by all other agents. It is up to agents at training time to assign meaning to these symbols. As shown in Section , these symbols become assigned to interpretable concepts. Agents may also choose not to utter anything at a given timestep, and there is a cost to making an utterance, loosely representing the metabolic effort of vocalization. We denote a vector representing the one-hot encoding of symbol $c$ with boldface $\mathbf{c}$. Each agent has internal goals specified by a vector $g$ that are private and not observed by other agents. These goals are grounded in the physical environment and include tasks such as moving to or gazing at a location. These goals may involve other agents (requiring the other agent to move to a location, for example) but are not observed by them and thus necessitate coordination and communication between agents. Verbal utterances are one tool which the agents can use to cooperatively accomplish all goals, but we also observe emergent use of non-verbal signals and altogether non-communicative strategies. To aid in accomplishing goals, each agent has an internal recurrent memory bank $m$ that is also private and not observed by other agents. This memory bank has no pre-designed behavior and it is up to the agents to learn to utilize it appropriately. | 1703.04908#8 | 1703.04908#10 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#10 | Emergence of Grounded Compositional Language in Multi-Agent Populations | The full state of the environment is given by $s = [\, x_{1 \ldots (N+M)}\;\; c_{1 \ldots N}\;\; m_{1 \ldots N}\;\; g_{1 \ldots N} \,] \in \mathcal{S}$. Each agent observes physical states of all entities in the environment, verbal utterances of all agents, and its own private memory and goal vector. The observation for agent $i$ is $o_i(s) = [\, {}_i x_{1 \ldots (N+M)}\;\; c_{1 \ldots N}\;\; m_i\;\; g_i \,]$, where ${}_i x_j$ is the observation of entity $j$'s physical state in agent $i$'s reference frame (see Appendix for details). More intricate observation models are possible, such as physical observations solely from pixels or verbal observations from a single input channel. These models would require agents learning to perform visual processing and source separation, which are orthogonal to this work. Despite the dimensionality of observations varying with the number of physical entities and communication streams, our policy architecture as described in Section allows a single policy parameterization across these variations. Figure 2: The transition dynamics of $N$ agents from time $t-1$ to $t$. Dashed lines indicate one-to-one dependencies between agents and solid lines indicate all-to-all dependencies. Policy Learning with Backpropagation Each agent acts by sampling actions from a stochastic policy $\pi$, which is identical for all agents and defined by parameters $\theta$. There are several common options for finding optimal policy parameters. The model-free framework of Q-learning can be used to find the optimal state-action value function, and employ a policy that acts greedily according to the value function. Unfortunately, Q-function dimensionality scales quadratically with communication vocabulary size, which can quickly become intractably large. Alternatively, it is possible to directly learn a policy function using model-free policy gradient methods, which use sampling to estimate the gradient of the policy return $\frac{dR}{d\theta}$. The gradient estimates from these methods can exhibit very high variance, and credit assignment becomes an especially difficult problem | 1703.04908#9 | 1703.04908#11 | 1703.04908 | [
"1603.08887"
]
|
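A small sketch of how an observation in agent $i$'s private reference frame can be assembled, following the state and observation vectors above and the rotation defined in the appendix (illustrative names; the real policy consumes these as inputs to its processing modules):

```python
import numpy as np

def rotation_2d(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def in_agent_frame(p_j: np.ndarray, p_i: np.ndarray, R_i: np.ndarray) -> np.ndarray:
    # ip_j = R_i (p_j - p_i): entity j's position as seen by agent i.
    return R_i @ (p_j - p_i)

# Each agent keeps a private random R_i, so there is no shared global
# frame in which landmarks could be named "left-most" or "top-most".
```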
1703.04908#11 | Emergence of Grounded Compositional Language in Multi-Agent Populations | in the presence of sequential communication actions. Instead of using model-free reinforcement learning methods, we build an end-to-end differentiable model of all agent and environment state dynamics over time and calculate $\frac{dR}{d\theta}$ with backpropagation. At every optimization iteration, we sample a new batch of 1024 random environment instantiations and backpropagate their dynamics through time to calculate the total return gradient. Figure 2 shows the dependency chain between two timesteps. A similar approach was employed by (Foerster et al. 2016; Sukhbaatar, Szlam, and Fergus 2016) to compute gradients for communication actions, although the latter still employed model-free methods for physical action computation. The physical state dynamics, including discontinuous contact events, can be made differentiable with smoothing. However, communication actions require emission of discrete symbols, which present difficulties for backpropagation. Discrete Communication and Gumbel-Softmax Estimator In order to use categorical communication emissions $c$ in our setting, it must be possible to differentiate through them. There has been a wealth of work in machine learning on differentiable models with discrete variables, but we found the recent approach in (Jang, Gu, and Poole 2016; Maddison, Mnih, and Teh 2016) to be particularly effective in our setting. The approach proposes a Gumbel-Softmax distribution, which is a continuous relaxation of a discrete categorical distribution. Given $K$-categorical distribution parameters $p$, a differentiable $K$-dimensional one-hot encoding sample $G$ from the Gumbel-Softmax distribution can be calculated as: $$G(\log p)_k = \frac{\exp\left((\log p_k + \varepsilon_k)/\tau\right)}{\sum_{j=0}^{K-1} \exp\left((\log p_j + \varepsilon_j)/\tau\right)}$$ where $\varepsilon$ are i.i.d. samples from the $\text{Gumbel}(0, 1)$ distribution, $\varepsilon = -\log(-\log(u)),\; u \sim \mathcal{U}[0, 1]$, and $\tau$ | 1703.04908#10 | 1703.04908#12 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#12 | Emergence of Grounded Compositional Language in Multi-Agent Populations | is a softmax temperature parameter. We did not find it necessary to anneal the temperature and set it to 1 in all our experiments for training, and sample directly from the categorical distribution at test time. To emit a communication symbol, our policy is trained to directly output $\log p \in \mathbb{R}^K$, which is transformed to a symbol emission sample $c \sim G(\log p)$. The resulting gradient can be estimated as $\frac{dc}{d\theta}$. Policy Architecture The policy class we consider in this work is that of stochastic neural networks. The policy outputs samples of an agent's physical actions $u$, communication symbol utterance $c$, and internal memory updates $\Delta m$. The policy must consolidate multiple incoming communication symbol streams emitted by other agents, as well as incoming observations of physical entities. Importantly, the number of agents (and thus the | 1703.04908#11 | 1703.04908#13 | 1703.04908 | [
"1603.08887"
]
|
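A runnable sketch of the Gumbel-Softmax sample $G(\log p)$ described above (NumPy, forward computation only; in training this would sit inside an autodiff framework so gradients flow through the relaxation):

```python
import numpy as np

def gumbel_softmax_sample(log_p: np.ndarray, tau: float = 1.0,
                          rng: np.random.Generator | None = None) -> np.ndarray:
    """Relaxed one-hot sample from a K-way categorical distribution."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=log_p.shape)
    g = -np.log(-np.log(u))           # i.i.d. Gumbel(0, 1) noise
    z = (log_p + g) / tau
    z = z - z.max()                   # for numerical stability
    e = np.exp(z)
    return e / e.sum()

# K = 20 symbols, uniform distribution, temperature 1 as in the paper:
sample = gumbel_softmax_sample(np.log(np.full(20, 1 / 20)))
```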
1703.04908#13 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Figure 3: Overview of our policy architecture, mapping observations to actions at every point in time. FC indicates a fully-connected processing module that shares weights with all others of its label. pool indicates a softmax pooling layer. number of communication streams) and the number of physical entities can vary between environment instantiations. To support this, the policy instantiates a collection of identical processing modules for each communication stream and each observed physical entity. Each processing module is a fully-connected multi-layer perceptron. The weights between all communication processing and physical observation modules are shared. The outputs of individual processing modules are pooled with a softmax operation into feature vectors $\phi_c$ and $\phi_x$ for communication and physical observation streams, respectively. Such weight sharing and pooling makes it possible to apply the same policy parameters to any number of communication and physical observations. The pooled features and the agent's private goal vector are passed to the final processing module that outputs distribution parameters $[\, \psi_u\;\; \psi_c \,]$ from which action samples are generated as $u = \psi_u + \varepsilon$ and $c \sim G(\psi_c)$, where $\varepsilon$ is zero-mean Gaussian noise. | 1703.04908#12 | 1703.04908#14 | 1703.04908 | [
"1603.08887"
]
|
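The softmax pooling over a variable number of identical modules can be sketched as below. The exact pooling form is not fully specified in the text, so the per-dimension softmax weighting here is one plausible reading, and all names are hypothetical:

```python
import numpy as np

def softmax_pool(features: np.ndarray) -> np.ndarray:
    """Pool (n_streams, d) feature vectors into a single d-vector
    using per-dimension softmax weights, so n_streams may vary."""
    w = np.exp(features - features.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return (w * features).sum(axis=0)

def encode_streams(module, streams):
    # One weight-shared module applied to every stream, then pooled.
    return softmax_pool(np.stack([module(s) for s in streams]))
```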
1703.04908#14 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Unlike communication games where agents only emit a single utterance, our agents continually emit a stream of symbols over time. Thus processing modules that read and write communication utterance streams benefit greatly from recurrent memory that can capture the meaning of a stream over time. To this end, we augment each communication processing and output module with an independent internal memory state $m$, and each module outputs memory state updates $\Delta m$. In this work we use simple additive memory updates $m_t = \tanh(m_{t-1} + \Delta m_{t-1} + \varepsilon)$ for simplicity and interpretability, but other memory architectures such as LSTMs can be used. | 1703.04908#13 | 1703.04908#15 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#15 | Emergence of Grounded Compositional Language in Multi-Agent Populations | We build all fully-connected modules with 256 hidden units and 2 layers each in all our experiments, using exponential-linear units and dropout with a rate of 0.1 between all hidden layers. The size of the feature vectors $\phi$ is 256 and the size of each memory module is 32. The overall policy architecture is shown in Figure 3. Auxiliary Prediction Reward To help policy training avoid local minima in more complex environments, we found it helpful to include auxiliary goal prediction tasks, similar to recent work in reinforcement learning (Dosovitskiy and Koltun 2016; Silver et al. 2016). | 1703.04908#14 | 1703.04908#16 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#16 | Emergence of Grounded Compositional Language in Multi-Agent Populations | In agent $i$'s policy, each communication processing module $j$ additionally outputs a prediction $\hat{g}_{i,j}$ of agent $j$'s goals. We do not use $\hat{g}$ as an input in calculating actions. It is only used for the purposes of the auxiliary prediction task. At the end of the episode, we add a reward for predicting other agents' goals, which in turn encourages communication utterances that convey the agent's goals clearly to other agents. Across all agents this reward has the form: | 1703.04908#15 | 1703.04908#17 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#17 | Emergence of Grounded Compositional Language in Multi-Agent Populations | $$r_g = -\sum_{\{i,j \,|\, i \neq j\}} \left\| \hat{g}_{i,j} - g_j \right\|^2$$ Compositionality and Vocabulary Size What leads to compositional syntax formation? One known constructive hypothesis requires modeling the process of language transmission and acquisition from one generation of agents to the next iteratively as in (Kirby, Griffiths, and Smith 2014). In such an iterated learning setting, compositionality emerges due to poverty of stimulus - one generation will only observe a limited number of symbol utterances from the previous generation and must infer the meaning of unseen symbols. This approach requires modeling language acquisition between agents, but when implemented with pre-designed rules it was shown over multiple iterations between generations to lead to formation of a compositional vocabulary. Alternatively, (Nowak, Plotkin, and Jansen 2000) observed that emergence of compositionality requires the number of concepts describable by a language to be above a factor of the vocabulary size. In our preliminary environments the number of concepts to communicate is still fairly small and is within the capacity of a non-compositional language. We use a maximum vocabulary size $K = 20$ in all our experiments. We tested a smaller maximum vocabulary size, but found that policy optimization became stuck in a poor local minimum where concepts became conflated. | 1703.04908#16 | 1703.04908#18 | 1703.04908 | [
"1603.08887"
]
|
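The goal-prediction reward $r_g$ just given penalizes squared error between each agent's predictions of other agents' goals and the true goals. A direct sketch, with a hypothetical array layout:

```python
import numpy as np

def goal_prediction_reward(g_hat: np.ndarray, g: np.ndarray) -> float:
    """r_g = -sum over pairs i != j of ||g_hat[i, j] - g[j]||^2.

    g_hat: (N, N, goal_dim) predictions; g: (N, goal_dim) true goals.
    """
    n = g.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += float(np.sum((g_hat[i, j] - g[j]) ** 2))
    return -total
```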
1703.04908#18 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Instead, we propose to use a large vocabulary size limit but use a soft penalty function to prevent the formation of unnecessarily large vocabularies. This allows the intermediate stages of policy optimization to explore large vocabularies, but then converge on an appropriate active vocabulary size. As shown in Figure 6, this is indeed what happens. How do we penalize large vocabulary sizes? (Nowak, Plotkin, and Jansen 2000) proposed a word population dynamics model that defines reproductive ratios of words to be proportional to their frequency, making already popular words more likely to survive. Inspired by these rich-get-richer dynamics, we model the communication symbols as being generated from a Dirichlet Process (Teh 2011). Each communication symbol has a probability of being symbol $c_k$ given by $$p(c_k) = \frac{n_k}{\alpha + n - 1}$$ | 1703.04908#17 | 1703.04908#19 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#19 | Emergence of Grounded Compositional Language in Multi-Agent Populations | where $n_k$ is the number of times symbol $c_k$ has been uttered and $n$ is the total number of symbols uttered. These counts are accumulated over agents, timesteps, and batch entries. $\alpha$ is a Dirichlet Process hyperparameter corresponding to the probability of observing an out-of-vocabulary word. The resulting reward across all agents is the log-likelihood of all communication utterances to independently have been generated by a Dirichlet Process: $$r_c = \sum_{i,t,k} \mathbb{1}[c_i^t = c_k] \log p(c_k)$$ Maximizing this reward leads to consolidation of symbols and the formation of compositionality. This approach is similar to encouraging code population sparsity in autoencoders (Ng 2011), which was shown to give rise to compositional representations for images. | 1703.04908#18 | 1703.04908#20 | 1703.04908 | [
"1603.08887"
]
|
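Putting the two Dirichlet-process formulas together, the vocabulary penalty can be computed from symbol counts alone; a compact sketch under an assumed count bookkeeping:

```python
import numpy as np

def vocabulary_reward(counts: np.ndarray, alpha: float = 1.0) -> float:
    """r_c: log-likelihood of all utterances under the rich-get-richer
    model p(c_k) = n_k / (alpha + n - 1), summed over utterances."""
    n = counts.sum()
    p = counts / (alpha + n - 1)
    # Each of the n_k utterances of symbol k contributes log p(c_k).
    used = counts > 0
    return float(np.sum(counts[used] * np.log(p[used])))

# Example: a concentrated vocabulary scores higher than a diffuse one.
r_c = vocabulary_reward(np.array([120, 40, 3, 0, 0]), alpha=1.0)
```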
1703.04908#20 | Emergence of Grounded Compositional Language in Multi-Agent Populations | # Experiments We experimentally investigate how variation in goals, environment configuration, and agents' physical capabilities lead to different communication strategies. In this work, we consider three types of actions an agent needs to perform: go to location, look at location, and do nothing. The goal for agent $i$ consists of an action to perform, a location $\bar{r}$ to perform it on, and an agent $r$ that should perform that action. These goal properties are accumulated into a goal description vector $g$. These goals are private to each agent, but may involve other agents. For example, agent $i$ may want agent $r$ to go to location $\bar{r}$. This goal is not observed by agent $r$, and requires communication between agents $i$ and $r$. The goals are assigned to agents such that no agent receives conflicting goals. We do however show generalization in the presence of conflicting goals in Section . | 1703.04908#19 | 1703.04908#21 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#21 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Agents can only communicate in discrete symbols and have individual reference frames without a shared global positioning reference (see Appendix), so they cannot directly send a goal position vector. What makes the task possible is that we place goal locations $\bar{r}$ on landmark locations, which are observed by all agents (in their individual reference frames). The strategy then is for agent $i$ to unambiguously communicate the landmark reference to agent $r$. Importantly, we do not provide an explicit association between goal positions and landmark references. It is up to the agents to learn to associate a position vector with a set of landmark properties and communicate them with discrete symbols. In the results that follow, agents do not observe other agents. This disallows the capacity for non-verbal communication, necessitating the use of language. In Section we report what happens when agents are able to observe each other and the capacity for non-verbal communication is available. | 1703.04908#20 | 1703.04908#22 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#22 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Despite training with a continuous relaxation of the categorical distribution, we observe very similar reward performance at test time. No communication is provided as a baseline (again, non-verbal communication is not possible). The no-communication strategy is for all agents to go towards the centroid of all landmarks.

Condition | Train Reward | Test Reward
---|---|---
No Communication | -0.919 | -0.920
Communication | -0.332 | -0.392

Table 1: Training and test physical reward for settings with and without communication. | 1703.04908#21 | 1703.04908#23 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#23 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Figure 4: A collection of typical sequences of events in our environments shown over time (panels at t=0, t=1, t=2, t=3). Each row is an independent trial. Large circles represent agents and small circles represent landmarks. Communication symbols are shown next to the agent making the utterance. The labels for abstract communication symbols are chosen purely for visualization and ... represents the silence symbol. Syntactic Structure We observe a compositional syntactic structure emerging in the stream of symbols uttered by agents. When trained on environments with only two agents, but multiple landmarks and actions, we observe symbols forming for each of the landmark colors and each of the action types. A typical conversation and physical agent configuration is shown in the first row of Figure 4 and is as follows: | 1703.04908#22 | 1703.04908#24 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#24 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Green Agent: GOTO, GREEN, ... Blue Agent: GOTO, BLUE, The labels for abstract symbols are chosen by us purely for interpretability and visualization and carry no meaning for training. While there is recent work on interpreting continuous machine languages (Andreas, Dragan, and Klein 2017), the discrete nature and small size of our symbol vocabulary makes it possible to manually assign labels to the symbols. See results in the supplementary video for consistency of the vocabulary usage. Physical environment considerations play a part in the syntactic structure. The action type verb GOTO is uttered first because actions take time to accomplish in the grounded | 1703.04908#23 | 1703.04908#25 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#25 | Emergence of Grounded Compositional Language in Multi-Agent Populations | environment. When the agent receives the GOTO symbol it starts moving toward the centroid of all the landmarks (to be equidistant from all of them) and then moves towards the specific landmark when it receives its color identity. When the environment configuration can contain more than three agents, agents need to form symbols for referring to each other. Three new symbols form to refer to agent colors that are separate in meaning from landmark colors. The typical conversations are shown in the second and third rows of Figure 4. Red Agent: GOTO, RED, BLUE-AGENT, ... Green Agent: ..., ..., ..., ... Blue Agent: RED-AGENT, GREEN, LOOKAT, ... Agents may omit utterances when they are the subject of their private goal, in which case they have access to that information and have no need to announce it. In this language, there is no set ordering to word utterances. Each symbol contributes to sentence meaning independently, similar to case marking grammatical strategies used in many human languages (Beuls and Steels 2013). The agents largely settle on using a consistent set of symbols for each meaning, due to vocabulary size penalties that discourage synonyms. | 1703.04908#24 | 1703.04908#26 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#26 | Emergence of Grounded Compositional Language in Multi-Agent Populations | We show the aggregate streams of communication utterances in Figure 5. Figure 5: Communication symbol streams emitted by agents over time before and after training, accumulated over 10 thousand test trials. (Panels: Before Training / After Training; axis: vocabulary symbol.) In simplified environment configurations when there is only one landmark or one type of action to take, no symbols are formed to refer to those concepts because they are clear from context. Symbol Vocabulary Usage We find word activation counts to settle on the appropriate compositional word counts. Early during training, large vocabulary sizes are taken advantage of to explore the space of communication possibilities, before settling on the appropriate effective vocabulary sizes, as shown in Figure 6. | 1703.04908#25 | 1703.04908#27 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#27 | Emergence of Grounded Compositional Language in Multi-Agent Populations | In this figure, the 1x1x3 case refers to an environment with two agents and a single action, which requires only communicating one of three landmark identities. 1x2x3 contains two types of actions, and the 3x3x3 case contains three agents that require explicit referencing. Generalization to Unseen Configurations One of the advantages of decentralised execution policies is that trained agents can be placed into arbitrarily-sized groups and still function reasonably. When there are additional agents in the environment with the same color identity, all agents of the same color will perform the same task if they are being referred to. Additionally, when agents of a | 1703.04908#26 | 1703.04908#28 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#28 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Figure 6: Word activation counts for different environment configurations over training iterations. (Plot: active vocabulary size vs. iteration for the 1x1x3, 1x2x3, and 3x3x3 configurations.) particular color are asked to perform two conflicting tasks (such as being asked to go to two different landmarks by two different agents), they will perform the average of the conflicting goals assigned to them. | 1703.04908#27 | 1703.04908#29 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#29 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Such cases occur despite never having been seen during training. Due to the modularized observation architecture, the number of landmarks in the environment can also vary between training and execution. The agents perform sensible behaviors with different numbers of landmarks, despite not being trained in such environments. For example, when there are distractor landmarks of novel colors, the agents never go towards them. When there are multiple landmarks of the same color, the agent communicating the goal still utters the landmark color (because the goal is the position of one of the landmarks). However, the agents receiving the landmark color utterance go towards the centroid of all landmarks of the same color, showing a very sensible generalization strategy. | 1703.04908#28 | 1703.04908#30 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#30 | Emergence of Grounded Compositional Language in Multi-Agent Populations | An example of such a case is shown in the fourth row of Figure 4. Non-verbal Communication and Other Strategies The presence of a physical environment also allows for alternative strategies aside from language use to accomplish goals. In this set of experiments we enable agents to observe other agents' position and gaze location, and in turn disable the communication capability via symbol utterances. When agents can observe each other's gaze, a pointing strategy forms where the agent can communicate a landmark location by gazing in its direction, which the recipient correctly interprets and moves towards. When gazes of other agents cannot be observed, we see the behavior of the goal sender agent moving towards the location assigned to the goal recipient agent (despite receiving no explicit reward for doing so), in order to guide the goal recipient to that location. Lastly, when neither visual nor verbal observation is available on the part of the goal recipient, we observe the behavior of the goal sender directly pushing the recipient to the target location. Examples of such strategies are shown in Figure 7 and the supplementary video. It is important to us to build an environment with a diverse set of capabilities alongside which language use develops. | 1703.04908#29 | 1703.04908#31 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#31 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Figure 7: Examples of non-verbal communication strategies, such as pointing, guiding, and pushing. Conclusion We have presented a multi-agent environment and learning methods that bring about emergence of an abstract compositional language from grounded experience. This abstract language is formed without any exposure to human language use. We investigated how variation in environment configuration and the physical capabilities of agents affect the communication strategies that arise. In the future, we would like to experiment with a larger number of actions that necessitate more complex syntax and larger vocabularies. We would also like to integrate exposure to human language to form communication strategies that are compatible with human use. Acknowledgements We thank the OpenAI team for helpful comments and fruitful discussions. This work was funded in part by ONR PECASE N000141612723. References [Andreas and Klein 2016] Andreas, J., and Klein, D. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 1173-1182. [Andreas, Dragan, and Klein 2017] Andreas, J.; Dragan, A.; and Klein, D. 2017. Translating neuralese. [Austin 1962] Austin, J. 1962. How to Do Things with Words. Oxford. | 1703.04908#30 | 1703.04908#32 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#32 | Emergence of Grounded Compositional Language in Multi-Agent Populations | [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. [Beuls and Steels 2013] Beuls, K., and Steels, L. 2013. Agent-based models of strategies for the emergence and evolution of grammatical agreement. PloS one 8(3):e58960. [Bratman et al. 2010] Bratman, J.; Shvartsman, M.; Lewis, R. L.; and Singh, S. 2010. A new approach to exploring language emergence as boundedly optimal control in the face of environmental and cognitive constraints. In Proceedings of the 10th International Conference on Cognitive Modeling, 7-12. Citeseer. [Dhingra et al. 2016] Dhingra, B.; Li, L.; Li, X.; Gao, J.; Chen, Y.-N.; Ahmed, F.; and Deng, L. 2016. End-to-End Reinforcement Learning of Dialogue Agents for Information Access. arXiv:1609.00777 [cs]. arXiv: 1609.00777. [Dosovitskiy and Koltun 2016] Dosovitskiy, A., and Koltun, V. 2016. Learning to act by predicting the future. arXiv preprint arXiv:1611.01779. [Durrett, Berg-Kirkpatrick, and Klein 2016] Durrett, G.; Berg-Kirkpatrick, T.; and Klein, D. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887. [Foerster et al. 2016] Foerster, J. | 1703.04908#31 | 1703.04908#33 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#33 | Emergence of Grounded Compositional Language in Multi-Agent Populations | N.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. [Frank and Goodman 2012] Frank, M. C., and Goodman, N. D. 2012. Predicting Pragmatic Reasoning in Language Games. Science 336(6084):998. [Gauthier and Mordatch 2016] Gauthier, J., and Mordatch, I. 2016. A paradigm for situated and goal-driven language learning. CoRR abs/1610.03585. [Golland, Liang, and Klein 2010] Golland, D.; Liang, P.; and Klein, D. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, 410-419. | 1703.04908#32 | 1703.04908#34 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#34 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Stroudsburg, PA, USA: Association for Computational Linguistics. [Grice 1975] Grice, H. P. 1975. Logic and conversation. In Cole, P., and Morgan, J. L., eds., Syntax and Semantics: Vol. 3: Speech Acts, 41-58. San Diego, CA: Academic Press. [Jang, Gu, and Poole 2016] Jang, E.; Gu, S.; and Poole, B. 2016. | 1703.04908#33 | 1703.04908#35 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#35 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Categorical Reparameterization with Gumbel-Softmax. ArXiv e-prints. [Kirby, Griffiths, and Smith 2014] Kirby, S.; Griffiths, T.; and Smith, K. 2014. Iterated learning and the evolution of language. Current Opinion in Neurobiology 28:108-114. [Kirby 1999] Kirby, S. 1999. Syntax out of Learning: the cultural evolution of structured communication in a population of induction algorithms. [Kirby 2001] Kirby, S. 2001. Spontaneous evolution of linguistic structure - an iterated learning model of the emergence of regularity and irregularity. IEEE Transactions on Evolutionary Computation 5(2):102-110. [Lake et al. 2016] Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2016. | 1703.04908#34 | 1703.04908#36 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#36 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Building machines that learn and think like people. CoRR abs/1604.00289. [Lazaridou, Peysakhovich, and Baroni 2016] Lazaridou, A.; Peysakhovich, A.; and Baroni, M. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182. [Lazaridou, Pham, and Baroni 2016] Lazaridou, A.; Pham, N. T.; and Baroni, M. 2016. Towards Multi-Agent Communication-Based Language Learning. arXiv: 1605.07133. [Littman 1994] Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. | 1703.04908#35 | 1703.04908#37 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#37 | Emergence of Grounded Compositional Language in Multi-Agent Populations | In Proceedings of the eleventh international conference on machine learning, volume 157, 157-163. [Maddison, Mnih, and Teh 2016] Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR abs/1611.00712. [Ng 2011] Ng, A. 2011. Sparse autoencoder. CS294A Lecture notes 72(2011):1-19. | 1703.04908#36 | 1703.04908#38 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#38 | Emergence of Grounded Compositional Language in Multi-Agent Populations | [Nowak, Plotkin, and Jansen 2000] Nowak, M. A.; Plotkin, J. B.; and Jansen, V. A. A. 2000. The evolution of syntactic communication. Nature 404(6777):495-498. [Silver et al. 2016] Silver, D.; van Hasselt, H.; Hessel, M.; Schaul, T.; Guez, A.; Harley, T.; Dulac-Arnold, G.; Reichert, D.; Rabinowitz, N.; Barreto, A.; et al. 2016. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810. [Socher et al. 2013] Socher, R.; Perelygin, A.; Wu, J. Y.; Chuang, J.; Manning, C. D.; Ng, A. Y.; Potts, C.; et al. 2013. | 1703.04908#37 | 1703.04908#39 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#39 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, 1642. Citeseer. [Steels 1995] Steels, L. 1995. A self-organizing spatial vocabulary. Artif. Life 2(3):319-332. [Steels 2005] Steels, L. 2005. | 1703.04908#38 | 1703.04908#40 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#40 | Emergence of Grounded Compositional Language in Multi-Agent Populations | What triggers the emergence of grammar? In AISB'05: Proceedings of the Second International Symposium on the Emergence and Evolution of Linguistic Communication (EELC'05), 143-150. University of Hertfordshire. [Sukhbaatar, Szlam, and Fergus 2016] Sukhbaatar, S.; Szlam, A.; and Fergus, R. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2244-2252. [Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. | 1703.04908#39 | 1703.04908#41 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#41 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Sequence to sequence learning with neural networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27. Curran Associates, Inc. 3104-3112. [Teh 2011] Teh, Y. W. 2011. Dirichlet process. In Encyclopedia of Machine Learning. | 1703.04908#40 | 1703.04908#42 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#42 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Springer. 280-287. [Ullman, Xu, and Goodman 2016] Ullman, T.; Xu, Y.; and Goodman, N. 2016. The pragmatics of spatial language. In Proceedings of the Cognitive Science Society. [Vogel et al. 2014] Vogel, A.; Gómez Emilsson, A.; Frank, M. C.; Jurafsky, D.; and Potts, C. 2014. Learning to reason pragmatically with cognitive limitations. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 3055-3060. | 1703.04908#41 | 1703.04908#43 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#43 | Emergence of Grounded Compositional Language in Multi-Agent Populations | Wheat Ridge, CO: Cognitive Science Society. [Wang, Liang, and Manning 2016] Wang, S. I.; Liang, P.; and Manning, C. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL). [Winograd 1973] Winograd, T. 1973. A procedural model of language understanding. # Appendix: Physical State and Dynamics The physical state of the agent is specified by $x = [\, p\;\; \dot{p}\;\; v\;\; d \,]$, where $\dot{p}$ is the velocity of $p$ and $d \in$ | 1703.04908#42 | 1703.04908#44 | 1703.04908 | [
"1603.08887"
]
|
1703.04908#44 | Emergence of Grounded Compositional Language in Multi-Agent Populations | $\mathbb{R}^3$ is the color associated with the agent. Landmarks have a similar state, but without gaze and velocity components. The physical state transition dynamics for a single agent $i$ are given by: $$x_i^t = \begin{bmatrix} p \\ \dot{p} \\ v \end{bmatrix}_i^t = \begin{bmatrix} p + \dot{p}\,\Delta t \\ \gamma \dot{p} + (u_p + f(x_1, \ldots, x_N))\,\Delta t \\ u_v \end{bmatrix}_i^{t-1}$$ where $f(x_1, \ldots, x_N)$ are the physical interaction forces (such as collision) between all agents in the environment and any obstacles, $\Delta t$ is the simulation timestep (we use 0.1), and $(1 - \gamma)$ is a damping coefficient (we use 0.5). | 1703.04908#43 | 1703.04908#45 | 1703.04908 | [
"1603.08887"
]
|
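A direct transcription of the transition above into code (illustrative helper, using the stated $\Delta t = 0.1$ and damping $1 - \gamma = 0.5$):

```python
import numpy as np

def step_agent(p, p_dot, u_p, u_v, f, dt=0.1, gamma=0.5):
    """One physical state transition for a single agent.

    p: position; p_dot: velocity; u_p: movement action; u_v: gaze action;
    f: total interaction force on this agent (collisions etc.).
    """
    p_next = p + p_dot * dt
    p_dot_next = gamma * p_dot + (u_p + f) * dt
    v_next = u_v                    # gaze is set directly by the action
    return p_next, p_dot_next, v_next
```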
1703.04908#45 | Emergence of Grounded Compositional Language in Multi-Agent Populations | The action space of the agent is $a = [\, u_p\;\; u_v\;\; c \,]$. The observation of any location $p_j$ in the reference frame of agent $i$ is ${}_i p_j = R_i (p_j - p_i)$, where $R_i$ is the random rotation matrix of agent $i$. Giving each agent a private random orientation prevents identifying landmarks in a shared coordinate frame (using words such as top-most or left-most). | 1703.04908#44 | 1703.04908 | [
"1603.08887"
]
|
|
1703.04933#0 | Sharp Minima Can Generalize For Deep Nets | arXiv:1703.04933v2 [cs.LG] 15 May 2017 # Sharp Minima Can Generalize For Deep Nets # Laurent Dinh 1 Razvan Pascanu 2 Samy Bengio 3 Yoshua Bengio 1 4 Abstract Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties. approximate certain functions (e.g. Montufar et al., 2014; Raghu et al., 2016). Other works (e.g. Dauphin et al., 2014; Choromanska et al., 2015) have looked at the structure of the error surface to analyze how trainable these models are. Finally, another point of discussion is how well these models can generalize (Nesterov & Vial, 2008; Keskar et al., 2017; Zhang et al., 2017). These correspond, respectively, to low approximation, optimization and estimation error as described by Bottou (2010). Our work focuses on the analysis of the estimation error. | 1703.04933#1 | 1703.04933 | [
"1609.03193"
]
|
|
1703.04933#1 | Sharp Minima Can Generalize For Deep Nets | In particular, different approaches had been used to look at the question of why stochastic gradient descent results in solutions that generalize well (Bottou & LeCun, 2005; Bottou & Bousquet, 2008). For example, Duchi et al. (2011); Nesterov & Vial (2008); Hardt et al. (2016); Bottou et al. (2016); Gonen & Shalev-Shwartz (2017) rely on the concept of stochastic approximation or uniform stability (Bousquet & Elisseeff, 2002). Another conjecture that was recently (Keskar et al., 2017) explored, but that could be traced back to Hochreiter & Schmidhuber (1997), relies on the geometry of the loss function around a given solution. It argues that flat minima, for some definition of flatness, lead to better generalization. Our work focuses on this particular conjecture, arguing that there are critical issues when applying the concept of flat minima to deep neural networks, which require rethinking what flatness actually means. | 1703.04933#0 | 1703.04933#2 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#2 | Sharp Minima Can Generalize For Deep Nets | # Introduction Deep learning techniques have been very successful in several domains, like object recognition in images (e.g. Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016), machine translation (e.g. Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016; Gehring et al., 2016) and speech recognition (e.g. Graves et al., 2013; Hannun et al., 2014; Chorowski et al., 2015; Chan et al., 2016; Collobert et al., 2016). Several arguments have been brought forward to justify these empirical results. From a representational point of view, it has been argued that deep networks can effi | 1703.04933#1 | 1703.04933#3 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#3 | Sharp Minima Can Generalize For Deep Nets | ciently 1Université of Montréal, Montréal, Canada 2DeepMind, London, United Kingdom 3Google Brain, Mountain View, United States 4CIFAR Senior Fellow. Correspondence to: Laurent Dinh <[email protected]>. While the concept of flat minima is not well defined, having slightly different meanings in different works, the intuition is relatively simple. If one imagines the error as a one-dimensional curve, a minimum is flat if there is a wide region around it with roughly the same error, otherwise the minimum is sharp. When moving to higher dimensional spaces, defining flatness becomes more complicated. In Hochreiter & Schmidhuber (1997) it is defined as the size of the connected region around the minimum where the training loss is relatively similar. Chaudhari et al. (2017) relies, in contrast, on the curvature of the second order structure around the minimum, while Keskar et al. (2017) looks at the maximum loss in a bounded neighbourhood of the minimum. All these works rely on the fact that flatness results in robustness to low precision arithmetic or noise in the parameter space, which, using a minimum description length-based argument, suggests a low expected overfitting. | 1703.04933#2 | 1703.04933#4 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#4 | Sharp Minima Can Generalize For Deep Nets | All these works rely on the fact that flatness results in robustness to low precision arithmetic or noise in the parameter space, which, using a minimum description length-based argument, suggests a low expected overfitting. However, several common architectures and parametrizations in deep learning are already at odds with this conjecture, requiring at least some degree of | 1703.04933#3 | 1703.04933#5 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#5 | Sharp Minima Can Generalize For Deep Nets | refinement in the statements made. In particular, we show how the geometry of the associated parameter space can alter the ranking between prediction functions when considering several measures of flatness/sharpness. We believe the reason for this contradiction stems from the Bayesian arguments about KL-divergence made to justify the generalization ability of flat minima (Hinton & Van Camp, 1993). Indeed, Kullback-Leibler divergence is invariant to change of parameters whereas the notion of "flatness" is not. The demonstrations of Hochreiter & Schmidhuber (1997) are approximately based on a Gibbs formalism and rely on strong assumptions and approximations that can compromise the applicability of the argument, including the assumption of a discrete function space. Hochreiter & Schmidhuber (1997) defines a flat minimum as "a large connected region in weight space where the error remains approximately constant". We interpret this formulation as follows: Definition 1. Given ε > 0, a minimum θ, and a loss L, we define C(L, θ, ε) as the largest (using inclusion as the partial order over the subsets of Θ) connected set containing θ such that ∀θ′ ∈ C(L, θ, ε), L(θ′) < L(θ) + ε. The ε-flatness will be defined as the volume of C(L, θ, ε). We will call this measure the volume ε-flatness. In Figure 1, C(L, θ, ε) will be the purple line at the top of the red area if the height is ε and its volume will simply be the length of the purple line. | 1703.04933#4 | 1703.04933#6 | 1703.04933 | [
"1609.03193"
]
|
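To make Definition 1 concrete, the following one-dimensional sketch estimates the volume ε-flatness of a minimum numerically; the toy loss, grid resolution, and function names below are our own illustrative choices, not anything specified in the paper.

```python
import numpy as np

def volume_eps_flatness(loss, theta_min, eps, lo=-5.0, hi=5.0, n=100001):
    """Length of the largest connected set around theta_min where
    loss stays below loss(theta_min) + eps (1-D analogue of C(L, theta, eps))."""
    grid = np.linspace(lo, hi, n)
    ok = loss(grid) < loss(theta_min) + eps
    i = int(np.argmin(np.abs(grid - theta_min)))
    left, right = i, i
    while left > 0 and ok[left - 1]:
        left -= 1
    while right < n - 1 and ok[right + 1]:
        right += 1
    return grid[right] - grid[left]

loss = lambda t: (t ** 2 - 1.0) ** 2   # toy loss with minima at t = -1 and t = +1
print(volume_eps_flatness(loss, 1.0, eps=0.1))
```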
1703.04933#6 | Sharp Minima Can Generalize For Deep Nets | # 2 Definitions of flatness/sharpness Figure 1: An illustration of the notion of flatness. The loss L as a function of θ is plotted in black. If the height of the red area is ε, the width will represent the volume ε-flatness. If the width is 2ε, the height will then represent the ε-sharpness. Best seen with colors. For conciseness, we will restrict ourselves to supervised scalar output problems, but several conclusions in this paper can apply to other problems as well. We will consider a function f that takes as input an element x from an input space X and outputs a scalar y. We will denote by f_θ the prediction function. This prediction function will be parametrized by a parameter vector θ in a parameter space Θ. Often, this prediction function will be over-parametrized, and two parameters (θ, θ′) ∈ Θ² that yield the same prediction function everywhere, ∀x ∈ X, f_θ(x) = f_θ′(x), are called observationally equivalent. | 1703.04933#5 | 1703.04933#7 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#7 | Sharp Minima Can Generalize For Deep Nets | The model is trained to minimize a continuous loss function L which takes as argument the prediction function f_θ. We will often think of the loss L as a function of θ and adopt the notation L(θ). The notion of flatness/sharpness of a minimum is relative, therefore we will discuss metrics that can be used to compare the relative flatness between two minima. In this section we will formalize three used definitions of flatness in the literature. Flatness can also be defined using the local curvature of the loss function around the minimum if it is a critical point¹. Chaudhari et al. (2017); Keskar et al. (2017) suggest that this information is encoded in the eigenvalues of the Hessian. However, in order to compare how flat one minimum is versus another, the eigenvalues need to be reduced to a single number. Here we consider the spectral norm and trace of the Hessian, two typical measurements of the eigenvalues of a matrix. Additionally, Keskar et al. (2017) defines the notion of ε-sharpness. In order to make proofs more readable, we will slightly modify their definition. However, because of norm equivalence in finite dimensional space, our results will transfer to the original definition in full space as well. Our modified definition is the following: Definition 2. Let B₂(ε, θ) be the Euclidean ball centered on a minimum θ with radius ε. Then, for a non-negative valued loss function L, the ε-sharpness will be defined as proportional to max_{θ′ ∈ B₂(ε,θ)} (L(θ′) − L(θ)) / (1 + L(θ)). In Figure 1, if the width of the red area is 2ε then the height of the red area is max_{θ′ ∈ B₂(ε,θ)} (L(θ′) − L(θ)). ε-sharpness can be related to the spectral norm of the Hessian. Indeed, a second-order Taylor expansion of L around a critical point minimum is written L(θ′) = L(θ) + ½ (θ′ − θ)ᵀ (∇²L)(θ) (θ′ − θ) + o(‖θ′ − θ‖²). In this second order approximation, the ε-sharpness at θ | 1703.04933#6 | 1703.04933#8 | 1703.04933 | [
"1609.03193"
]
|
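Definition 2 can be approximated numerically by searching for the worst loss inside the Euclidean ball. Below is a rough Monte-Carlo sketch (the sampler, sample count, and quadratic test loss are our assumptions); for a quadratic loss the output should be close to the spectral-norm relation given next in the text, |||∇²L|||₂ ε² / (2 (1 + L(θ))).

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_sharpness(loss, theta, eps, n_samples=20000):
    d = theta.shape[0]
    u = rng.normal(size=(n_samples, d))
    # rescale each sample to lie uniformly inside the ball B2(eps, 0)
    u *= (eps * rng.random(n_samples) ** (1.0 / d) / np.linalg.norm(u, axis=1))[:, None]
    worst = max(loss(theta + du) for du in u)
    return (worst - loss(theta)) / (1.0 + loss(theta))

H = np.diag([100.0, 1.0])
loss = lambda th: 0.5 * th @ H @ th                # minimum at 0 with Hessian H
print(eps_sharpness(loss, np.zeros(2), eps=0.1))   # ~ 100 * 0.1**2 / 2 = 0.5
```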
1703.04933#8 | Sharp Minima Can Generalize For Deep Nets | ¹In this paper, we will often assume that this is the case when dealing with Hessian-based measures in order to have them well-defined. would be |||(∇²L)(θ)|||₂ ε² / (2 (1 + L(θ))). # 3 Properties of Deep Rectified Networks Before moving forward to our results, in this section we first introduce the notation used in the rest of the paper. Most of our results, for clarity, will be on the deep rectified feedforward networks with a linear output layer that we describe below, though they can easily be extended to other architectures (e.g. convolutional, etc.). Definition 3. Given K weight matrices (θ_k)_{k≤K} with n_k = dim(vec(θ_k)) and n = Σ_{k=1}^K n_k, the output y of a deep rectified feedforward network with a linear output layer is: y = σ_rect(σ_rect(··· σ_rect(x · θ1) ···) · θ_{K−1}) · θ_K, where | 1703.04933#7 | 1703.04933#9 | 1703.04933 | [
"1609.03193"
]
|
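Definition 3 translates directly into code. Here is a minimal numpy transcription (the function and variable names are ours), with σ_rect as the elementwise ReLU and no biases, matching the definition:

```python
import numpy as np

def rectified_net(x, thetas):
    """y = sigma_rect(sigma_rect(... sigma_rect(x . theta_1) ...) . theta_{K-1}) . theta_K"""
    h = x
    for theta in thetas[:-1]:
        h = np.maximum(h @ theta, 0.0)   # sigma_rect applied elementwise
    return h @ thetas[-1]                # linear output layer
```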
1703.04933#9 | Sharp Minima Can Generalize For Deep Nets | Figure 2: An illustration of the effects of non-negative homogeneity. The graph depicts level curves of the behavior of the loss L embedded into the two dimensional parameter space with the axes given by θ1 and θ2. | 1703.04933#8 | 1703.04933#10 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#10 | Sharp Minima Can Generalize For Deep Nets | Specifically, each line of a given color corresponds to the parameter assignments (θ1, θ2) that result observationally in the same prediction function f_θ. Best seen with colors. • x is the input to the model, a high-dimensional vector • σ_rect is the rectified elementwise activation function (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011), which takes the positive part of each coordinate, σ_rect(z) = max(z, 0) applied elementwise • vec reshapes a matrix into a vector. Note that in our definition we excluded the bias terms, usually found in any neural architecture. This is done mainly for convenience, to simplify the rendition of our arguments. However, the arguments can be extended to the case that includes biases (see Appendix B). Another choice is that of the linear output layer. Having an output activation function does not affect our argument either: since the loss is a function of the output activation, it can be rephrased as a function of the linear pre-activation. | 1703.04933#9 | 1703.04933#11 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#11 | Sharp Minima Can Generalize For Deep Nets | Deep rectifier models have certain properties that allow us in Section 4 to arbitrarily manipulate the flatness of a minimum. An important topic for optimization of neural networks is understanding the non-Euclidean geometry of the parameter space as imposed by the neural architecture (see, for example, Amari, 1998). In principle, when we take a step in parameter space what we expect to control is the change in the behavior of the model (i.e. the mapping of the input x to the output y). In principle we are not interested in the parameters per se, but rather only in the mapping they represent. If one defines a measure for the change in the behavior of the model, which can be done under some assumptions, then it can be used to define, at any point in the parameter space, a metric that says what is the equivalent change in the parameters for a unit of change in the behavior of the model. As it turns out, for neural networks, this metric is not constant over Θ. Intuitively, the metric is related to the curvature, and since neural networks can be highly non-linear, the curvature will not be constant. See Amari (1998); Pascanu & Bengio (2014) for more details. Coming back to the concept of flatness or sharpness of a minimum, this metric should define the flatness. | 1703.04933#10 | 1703.04933#12 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#12 | Sharp Minima Can Generalize For Deep Nets | However, the geometry of the parameter space is more complicated. Regardless of the measure chosen to compare two instantiations of a neural network, because of the structure of the model, it also exhibits a large number of symmetric configurations that result in exactly the same behavior. Because the rectifier activation has the non-negative homogeneity property, as we will see shortly, one can construct a continuum of points that lead to the same behavior, hence the metric is singular. This means that one can exploit these directions in which the model stays unchanged to shape the neighbourhood around a minimum in such a way that, by most definitions of flatness, this property can be controlled. See Figure 2 for a visual depiction, where the flatness (given here as the distance between the different level curves) can be changed by moving along the curve. | 1703.04933#11 | 1703.04933#13 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#13 | Sharp Minima Can Generalize For Deep Nets | Let us redefine, for convenience, the non-negative homogeneity property (Neyshabur et al., 2015; Lafond et al., 2016) below. Note that besides this property, the reason for studying the rectified linear activation is its widespread adoption (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016). | 1703.04933#12 | 1703.04933#14 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#14 | Sharp Minima Can Generalize For Deep Nets | Definition 4. A given function σ is non-negative homogeneous if ∀(z, α) ∈ R × R₊, σ(αz) = ασ(z). # 4 Deep Rectified networks and flat minima In this section we exploit the resulting strong non-identifiability to showcase a few shortcomings of some definitions of flatness. Although an α-scale transformation does not affect the function represented, it allows us to significantly decrease several measures of flatness. For another definition of flatness, α-scale transformations show that all minima are equally flat. # 4.1 Volume ε-flatness Theorem 1. The rectified function σ_rect(x) = max(x, 0) is non-negative homogeneous. Theorem 2. For a one-hidden layer | 1703.04933#13 | 1703.04933#15 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#15 | Sharp Minima Can Generalize For Deep Nets | rectified neural network of the form y = σ_rect(x · θ1) · θ2, and a minimum θ = (θ1, θ2) such that θ1 ≠ 0 and θ2 ≠ 0, ∀ε > 0 C(L, θ, ε) has an infinite volume. Proof (of Theorem 1). Follows trivially from the constraint that α > 0, given that x > 0 ⇒ αx > 0 iff α > 0. For a deep | 1703.04933#14 | 1703.04933#16 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#16 | Sharp Minima Can Generalize For Deep Nets | rectified neural network it means that: σ_rect(x · (αθ1)) · θ2 = σ_rect(x · θ1) · (αθ2), meaning that for this one (hidden) layer neural network, the parameters (αθ1, θ2) are observationally equivalent to (θ1, αθ2). This observational equivalence similarly holds for convolutional layers. We will not consider the solution θ where any of the weight matrices θ1, θ2 is zero, θ1 = 0 or θ2 = 0, as it results in a constant function which we will assume to give poor training performance. For α > 0, the α-scale transformation T_α : (θ1, θ2) ↦ (αθ1, α⁻¹θ2) has Jacobian determinant α^{n1−n2}, where once again n1 = dim(vec(θ1)) and n2 = dim(vec(θ2)). Note that the Jacobian determinant of this linear transformation is the change in the volume induced by T_α, and T_α ∘ T_β = T_αβ. We show below that there is a connected region containing θ with infinite volume and where the error remains approximately constant. Given this non-negative homogeneity, if (θ1, θ2) ≠ (0, 0) then {(αθ1, α⁻¹θ2), α > 0} is an infinite set of observationally equivalent parameters, inducing a strong non-identifiability in this learning scenario. Other models like deep linear networks (Saxe et al., 2013), leaky rectifiers (He et al., 2015) or maxout networks (Goodfellow et al., 2013) also have this non-negative homogeneity property. In what follows we will rely on such transformations; in particular we will rely on the following | 1703.04933#15 | 1703.04933#17 | 1703.04933 | [
"1609.03193"
]
|
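The observational equivalence induced by non-negative homogeneity is easy to verify numerically. The sketch below (our own check, with arbitrary random shapes) confirms that T_α leaves the one-hidden-layer prediction function unchanged over several orders of magnitude of α:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 5))
theta1, theta2 = rng.normal(size=(5, 16)), rng.normal(size=(16, 1))

f = lambda t1, t2: np.maximum(x @ t1, 0.0) @ t2   # one-hidden-layer rectified net
for alpha in (1e-2, 1.0, 1e2):
    # T_alpha(theta1, theta2) = (alpha * theta1, theta2 / alpha) is observationally equivalent
    assert np.allclose(f(theta1, theta2), f(alpha * theta1, theta2 / alpha))
```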
1703.04933#17 | Sharp Minima Can Generalize For Deep Nets | definition: Definition 5. For a single hidden layer rectifier feedforward network we define the family of transformations T_α : (θ1, θ2) ↦ (αθ1, α⁻¹θ2), which we refer to as an α-scale transformation. Note that an α-scale transformation will not affect the generalization, as the behavior of the function is identical. Also, while the transformation is only defined for a single layer rectified feedforward network, it can trivially be extended to any architecture having a single rectified network as a submodule, e.g. a deep rectified feedforward network. For simplicity and readability we will rely on this definition. Proof. We will first introduce a small region with approximately constant error around θ with non-zero volume. Given ε > 0 and if we consider the loss function continuous with respect to the parameter, C(L, θ, ε) is an open set containing θ. Since we also have θ1 ≠ 0 and θ2 ≠ 0, let r > 0 such that the ℓ∞ ball B∞(r, θ) is in C(L, θ, ε) and has empty intersection with the set where θ1 = 0 or θ2 = 0. Let v = (2r)^{n1+n2} > 0 be the volume of B∞(r, θ). Since the Jacobian determinant of T_α is the multiplicative change of volume induced by T_α, the volume of T_α(B∞(r, θ)) is | 1703.04933#16 | 1703.04933#18 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#18 | Sharp Minima Can Generalize For Deep Nets | vα^{n1−n2}. If n1 ≠ n2, we can arbitrarily grow the volume of T_α(B∞(r, θ)), with error within an ε-interval of L(θ), by having α tend to +∞ if n1 > n2 or to 0 otherwise. If n1 = n2, ∀α > 0, T_α(B∞(r, θ)) has volume v. Let C′ = ⋃_{α>0} T_α(B∞(r, θ)). C′ is a connected region where the error remains approximately constant, i.e. within an ε-interval of L(θ). Let α = (‖θ1‖∞ + r) / (‖θ1‖∞ − r). Since B∞(r, θ) = B∞(r, θ1) × B∞(r, θ2), | 1703.04933#17 | 1703.04933#19 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#19 | Sharp Minima Can Generalize For Deep Nets | curvature (e.g. Desjardins et al., 2015; Salimans & Kingma, 2016). In this section we look at two widely used measures of the Hessian, the spectral radius and trace, showing that either of these values can be manipulated without actually changing the behavior of the function. If the flatness of a minimum is defined by any of these quantities, then it could also be easily manipulated. Theorem 3. The gradient and Hessian of the loss L with respect to θ can be modified by T_α. Proof. Since L(θ1, θ2) = L(αθ1, α⁻¹θ2), we have then by differentiation | 1703.04933#18 | 1703.04933#20 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#20 | Sharp Minima Can Generalize For Deep Nets | Figure 3: An illustration of how we build different disjoint volumes using T_α. In this two-dimensional example, T_α(B∞(r′, θ)) and B∞(r′, θ) have the same volume. B∞(r′, θ), T_α(B∞(r′, θ)), T_α²(B∞(r′, θ)), ... will therefore be a sequence of disjoint constant volumes. C′ will therefore have an infinite volume. Best seen with colors. (∇L)(θ1, θ2) = (∇L)(αθ1, α⁻¹θ2) diag(αI_{n1}, α⁻¹I_{n2}) ⇒ (∇L)(αθ1, α⁻¹θ2) = (∇L)(θ1, θ2) diag(α⁻¹I_{n1}, αI_{n2}), and where × is the Cartesian set product, we have T_α(B∞(r, θ)) = B∞(αr, αθ1) × B∞(α⁻¹r, α⁻¹θ2). (∇²L)(αθ1, α⁻¹θ2) = diag(α⁻¹I_{n1}, αI_{n2}) (∇²L)(θ1, θ2) diag(α⁻¹I_{n1}, αI_{n2}). Therefore, T_α(B∞(r, θ)) ∩ B∞(r, θ) = ∅ (see Figure 3). Similarly, B∞(r, θ), T_α(B∞(r, θ)), T_α²(B∞(r, θ)), ... are disjoint and have volume v. We have also T_α^k(B∞(r′, θ)) = T_{α^k}(B∞(r′, θ)) ⊂ C′. The volume of C′ is then lower bounded by 0 < v + v + v + ··· and is therefore infinite. C(L, θ, ε) has then infinite volume too, making the volume ε-flatness of θ infinite. | 1703.04933#19 | 1703.04933#21 | 1703.04933 | [
"1609.03193"
]
|
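Theorems 3 and 4 can be watched in action on a toy problem. The sketch below (a single scalar unit and a finite-difference Hessian; everything here is our own construction) evaluates the Hessian along the observationally equivalent family T_α(θ) and shows its spectral norm growing without bound as α tends to 0:

```python
import numpy as np

x_in, y_out = 1.5, 2.0
loss = lambda p: (max(x_in * p[0], 0.0) * p[1] - y_out) ** 2   # scalar rectified "net"

def hessian(f, p, h=1e-4):
    p, n = np.asarray(p, float), 2
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

theta = np.array([1.0, y_out / x_in])       # a global minimum: loss(theta) == 0
for alpha in (1.0, 0.1, 0.01):
    H = hessian(loss, [alpha * theta[0], theta[1] / alpha])
    print(alpha, np.linalg.norm(H, 2))      # same function, ever sharper minimum
```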
1703.04933#21 | Sharp Minima Can Generalize For Deep Nets | This theorem can generalize to rectified neural networks in general with a similar proof. Given that every minimum has an infinitely large region (volume-wise) in which the error remains approximately constant, this means that every minimum would be infinitely flat according to the volume ε-flatness. Since all minima are equally flat, it is not possible to use volume ε-flatness to gauge the generalization property of a minimum. # 4.2 Hessian-based measures The non-Euclidean geometry of the parameter space, coupled with the manifolds of observationally equal behavior of the model, allows one to move from one region of the parameter space to another, changing the curvature of the model without actually changing the function. This approach has been used with success to improve optimization, by moving from a region of high curvature to a region of well behaved | 1703.04933#20 | 1703.04933#22 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#22 | Sharp Minima Can Generalize For Deep Nets | Sharpest direction Through these transformations we can easily find, for any critical point which is a minimum with non-zero Hessian, an observationally equivalent parameter whose Hessian has an arbitrarily large spectral norm. Theorem 4. For a one-hidden layer rectified neural network of the form y = σ_rect(x · θ1) · θ2, and a critical point θ = (θ1, θ2) being a minimum for L, such that (∇²L)(θ) ≠ 0, we have ∀M > 0, ∃α > 0 such that |||(∇²L)(T_α(θ))|||₂ ≥ M, where |||(∇²L)(T_α(θ))|||₂ is the spectral norm of (∇²L)(T_α(θ)). Proof. The trace of a symmetric matrix is the sum of its eigenvalues and a real symmetric matrix can be diagonalized in R, therefore if the Hessian is non-zero, there is one non-zero positive diagonal element. Without loss of generality, we will assume that this non-zero element of value γ > 0 corresponds to an element in θ1. Therefore the Frobenius norm |||(∇²L)(T_α(θ))|||_F of (∇²L)(αθ1, α⁻¹θ2) = diag(α⁻¹I_{n1}, αI_{n2}) (∇²L)(θ1, θ2) diag(α⁻¹I_{n1}, αI_{n2}) | 1703.04933#21 | 1703.04933#23 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#23 | Sharp Minima Can Generalize For Deep Nets | is lower bounded by α⁻²γ. Since all norms are equivalent in finite dimension, there exists a constant r > 0 such that r |||A|||_F ≤ |||A|||₂ for all symmetric matrices A. So by picking α ≤ √(rγ/M), we are guaranteed that |||(∇²L)(T_α(θ))|||₂ ≥ M. Any minimum with non-zero Hessian will be observationally equivalent to a minimum whose Hessian has an arbitrarily large spectral norm. Therefore for any minimum in the loss function, if there exists another minimum that generalizes better, then there exists another minimum that generalizes better and is also sharper according to the spectral norm of the Hessian. The spectral norm of critical points' Hessians becomes as a result less relevant as a measure of potential generalization error. Moreover, since the spectral norm lower bounds the trace for a positive semi-definite symmetric matrix, the same conclusion can be drawn for the trace. 0, ∃α > 0 such that (r − min_{k≤K}(n_k)) eigenvalues are greater than M. | 1703.04933#22 | 1703.04933#24 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#24 | Sharp Minima Can Generalize For Deep Nets | Proof. For simplicity, we will note √M the principal square root of a symmetric positive-semidefinite matrix M. The eigenvalues of √M are the square roots of the eigenvalues of M and are its singular values. By definition, the singular values of √((∇²L)(θ)) D_α are the square roots of the eigenvalues of D_α (∇²L)(θ) D_α. Without loss of generality, we consider min_{k≤K}(n_k) = n_K and choose ∀k < K, α_k = β⁻¹ and α_K = β^{K−1}. Since D_α and √((∇²L)(θ)) are positive symmetric semi-definite matrices, we can apply the multiplicative Horn inequalities (Klyachko, 2000) on the singular values of the product √((∇²L)(θ)) D_α: ∀i ≤ (n − n_K), λ_i(D_α (∇²L)(θ) D_α) ≥ λ_{i+n_K}((∇²L)(θ)) β². | 1703.04933#23 | 1703.04933#25 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#25 | Sharp Minima Can Generalize For Deep Nets | Many directions However, some notion of sharpness might take into account the entire eigenspectrum of the Hessian as opposed to its largest eigenvalue; for instance, Chaudhari et al. (2017) describe the notion of wide valleys, allowing the presence of very few large eigenvalues. We can generalize the transformations between observationally equivalent parameters to deeper neural networks with K − 1 hidden layers: for α_k > 0, T_α : (θ_k)_{k≤K} ↦ (α_k θ_k)_{k≤K} with Π_{k=1}^K α_k = 1. By choosing β > √(M / λ_r((∇²L)(θ))), since we have ∀i ≤ r, λ_i((∇²L)(θ)) ≥ λ_r((∇²L)(θ)) > 0, we can conclude that ∀i ≤ (r − n_K), λ_i(D_α (∇²L)(θ) D_α) ≥ λ_{i+n_K}((∇²L)(θ)) β² ≥ λ_r((∇²L)(θ)) β² > M. If we define | 1703.04933#24 | 1703.04933#26 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#26 | Sharp Minima Can Generalize For Deep Nets | D_α = diag(I_{n1} α₁⁻¹, I_{n2} α₂⁻¹, ..., I_{nK} α_K⁻¹), the block-diagonal matrix with blocks I_{nk} α_k⁻¹, then the first and second derivatives at T_α(θ) will be (∇L)(T_α(θ)) = (∇L)(θ) D_α and (∇²L)(T_α(θ)) = D_α (∇²L)(θ) D_α. It means that there exists an observationally equivalent parameter with at least (r − min_{k≤K}(n_k)) arbitrarily large eigenvalues. Since Sagun et al. (2016) seems to suggest that rank deficiency in the Hessian is due to over-parametrization of the model, one could conjecture that (r − min_{k≤K}(n_k)) can be high for thin and deep neural networks, resulting in a majority of large eigenvalues. Therefore, it would still be possible to obtain an equivalent parameter with large Hessian eigenvalues, i.e. sharp in multiple directions. We will show to which extent one can increase several eigenvalues of (∇²L)(T_α(θ)) by varying α. Definition 6. | 1703.04933#25 | 1703.04933#27 | 1703.04933 | [
"1609.03193"
]
|
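The effect of D_α on the eigenspectrum can be illustrated with a synthetic positive semi-definite matrix standing in for (∇²L)(θ). Following the choice α_k = β⁻¹ for k < K and α_K = β^{K−1} (so that Π_k α_k = 1), the sketch below, in which the block sizes, β, and the random matrix are our assumptions, shows n − n_K eigenvalues of D_α H D_α becoming large:

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [4, 4, 2]                    # n_1, ..., n_K with n_K = min_k(n_k)
K, n, beta = len(sizes), sum(sizes), 10.0
hess = (lambda A: A @ A.T)(rng.normal(size=(n, n)))   # synthetic PSD "Hessian"

alphas = [1.0 / beta] * (K - 1) + [beta ** (K - 1)]   # prod(alphas) == 1
d = np.concatenate([np.full(nk, 1.0 / a) for nk, a in zip(sizes, alphas)])
transformed = d[:, None] * hess * d[None, :]          # D_alpha H D_alpha
print(np.linalg.eigvalsh(transformed)[-(n - sizes[-1]):])   # n - n_K large eigenvalues
```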
1703.04933#27 | Sharp Minima Can Generalize For Deep Nets | For each n × n matrix A, we define the vector λ(A) of sorted singular values of A with their multiplicity, λ₁(A) ≥ λ₂(A) ≥ ··· ≥ λ_n(A). If A is symmetric positive semi-definite, λ(A) is also the vector of its sorted eigenvalues. Theorem 5. For a (K − 1)-hidden layer rectified neural network of the form y = σ_rect(σ_rect(··· σ_rect(x · θ1) ···) · θ_{K−1}) · θ_K, and critical point θ = (θ_k)_{k≤K} being a minimum for L, such that (∇²L)(θ) has rank r = rank((∇²L)(θ)), ∀M > # 4.3 ε-sharpness We have redefined for ε > 0 the ε-sharpness of Keskar et al. (2017) as follows: max_{θ′ ∈ B₂(ε,θ)} (L(θ′) − L(θ)) / (1 + L(θ)), where B₂(ε, θ) is the Euclidean ball of radius ε centered on θ. | 1703.04933#26 | 1703.04933#28 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#28 | Sharp Minima Can Generalize For Deep Nets | This modification will demonstrate more clearly the issues of that metric as a measure of probable generalization. If we use K = 2 and (θ1, θ2) corresponding to a non-constant function, i.e. θ1 ≠ 0 and θ2 ≠ 0, then we can Figure 4: An illustration of how we exploit non-identifiability and its particular geometry to obtain sharper minima: although θ is far from the θ2 = 0 line, the observationally equivalent parameter θ′ is closer. The green and red circles centered on each of these points have the same radius. Best seen with colors. | 1703.04933#27 | 1703.04933#29 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#29 | Sharp Minima Can Generalize For Deep Nets | # 5.1 Model reparametrization One thing that needs to be considered when relating flatness of minima to their probable generalization is that the choice of parametrization and its associated geometry are arbitrary. Since we are interested in finding a prediction function in a given family of functions, no reparametrization of this family should influence generalization of any of these functions. Given a bijection g onto Θ, we can define the new transformed parameter η = g⁻¹(θ). Since θ and η represent in different spaces the same prediction function, they should generalize as well. define α = ε/‖θ1‖₂. We will now consider the observationally equivalent parameter T_α(θ1, θ2) = (αθ1, α⁻¹θ2). Given that ‖αθ1‖₂ = ε, we have that (0, α⁻¹θ2) ∈ B₂(ε, T_α(θ)), making the maximum loss in this neighborhood at least as high as that of the best constant-valued function, incurring relatively high sharpness. Figure 4 provides a visualization of the proof. | 1703.04933#28 | 1703.04933#30 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#30 | Sharp Minima Can Generalize For Deep Nets | Let's call L_η = L ∘ g the loss function with respect to the new parameter η. We generalize the derivation of Subsection 4.2: L_η(η) = L(g(η)) ⇒ (∇L_η)(η) = (∇L)(g(η)) (∇g)(η) ⇒ (∇²L_η)(η) = (∇g)(η)ᵀ (∇²L)(g(η)) (∇g)(η) + (∇L)(g(η)) (∇²g)(η). For rectified neural networks every minimum is observationally equivalent to a minimum that generalizes as well but with high ε-sharpness. This also applies when using the full-space ε-sharpness used by Keskar et al. (2017). We can prove this similarly using the equivalence of norms in finite dimensional vector spaces and the fact that for c > 0, ε > 0, ε ≤ ε(c + 1) (see Keskar et al. (2017)). We have not been able to show a similar problem with the random subspace ε-sharpness used by Keskar et al. (2017), i.e. a restriction of the maximization to a random subspace, which could relate to the notion of wide valleys described by Chaudhari et al. (2017). By exploiting the non-Euclidean geometry and non-identifiability of rectified neural networks, we were able to demonstrate some of the limits of using typical definitions of a minimum's flatness as a core explanation for generalization. | 1703.04933#29 | 1703.04933#31 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#31 | Sharp Minima Can Generalize For Deep Nets | At a differentiable critical point, we have by definition (∇L)(g(η)) = 0, therefore the transformed Hessian at a critical point becomes (∇²L_η)(η) = (∇g)(η)ᵀ (∇²L)(g(η)) (∇g)(η). This means that by reparametrizing the problem we can modify to a large extent the geometry of the loss function, so as to have sharp minima of L in θ correspond to flat minima of L_η in η = g⁻¹(θ) and conversely. Figure 5 illustrates that point in one dimension. Several practical (Dinh et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016; Dinh et al., 2016) and theoretical works (Hyvärinen & Pajunen, 1999) show how powerful bijections can be. We can also note that the formula for the transformed Hessian at a critical point also applies if g is not invertible; g would just need to be surjective over Θ in order to cover exactly the same family of prediction functions {f_θ, θ ∈ Θ} = {f_{g(η)}, η ∈ g⁻¹(Θ)}. # 5 Allowing reparametrizations | 1703.04933#30 | 1703.04933#32 | 1703.04933 | [
"1609.03193"
]
|
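A one-dimensional finite-difference check makes the transformed-Hessian formula tangible: at a critical point, the curvature in the new parameter is (g′(η))² times the old one, so a bijection of our choosing (the family below is ours, loosely inspired by the one used for Figure 5) can make the very same minimum look arbitrarily sharp or flat.

```python
import numpy as np

L = lambda t: (t - 1.0) ** 2                          # minimum at theta = 1
g = lambda e, a: 1.0 + np.sign(e) * np.abs(e) ** a    # bijection with g(0) = 1

def curvature(f, z, h=1e-5):
    return (f(z + h) - 2.0 * f(z) + f(z - h)) / h ** 2

print(curvature(L, 1.0))              # ~2: curvature in the default parametrization
for a in (0.5, 1.0, 3.0):
    L_eta = lambda e: L(g(e, a))
    print(a, curvature(L_eta, 0.0))   # same minimum: sharp (a < 1) or flat (a > 1)
```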
1703.04933#32 | Sharp Minima Can Generalize For Deep Nets | In the previous Section 4 we explored the case of a fixed parametrization, that of deep rectifier models. In this section we demonstrate a simple observation: if we are allowed to change the parametrization of some function f, we can obtain arbitrarily different geometries without affecting how the function evaluates on unseen data. The same holds for reparametrization of the input space. The implication is that the correlation between the geometry of the parameter space (and hence the error surface) and the behavior of a given function is meaningless if not preconditioned on the specific parametrization of the model. | 1703.04933#31 | 1703.04933#33 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#33 | Sharp Minima Can Generalize For Deep Nets | We show in Appendix A bijections that allow us to perturb the relative flatness between a finite number of minima. Instances of commonly used reparametrizations are batch normalization (Ioffe & Szegedy, 2015), or the virtual batch normalization variant (Salimans et al., 2016), and weight normalization (Badrinarayanan et al., 2015; Salimans & Kingma, 2016; Arpit et al., 2016). Im et al. (2016) have plotted how the loss function landscape was affected by batch normalization. However, we will focus on the weight normalization reparametrization as the analysis will be simpler, | 1703.04933#32 | 1703.04933#34 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#34 | Sharp Minima Can Generalize For Deep Nets | • every minimum has infinite volume ε-flatness; (a) Loss function with default parametrization | 1703.04933#33 | 1703.04933#35 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#35 | Sharp Minima Can Generalize For Deep Nets | • every minimum is observationally equivalent to an infinitely sharp minimum and to an infinitely flat minimum when considering nonzero eigenvalues of the Hessian; • every minimum is observationally equivalent to a minimum with arbitrarily low full-space and random subspace ε-sharpness and a minimum with high full-space ε-sharpness. (b) Loss function with reparametrization This further weakens the link between the flatness of a minimum and the generalization property of the associated prediction function when a specific parameter space has not been specified and explained beforehand. | 1703.04933#34 | 1703.04933#36 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#36 | Sharp Minima Can Generalize For Deep Nets | # Input representation As we conclude that the notion of flatness for a minimum in the loss function by itself is not sufficient to determine its generalization ability in the general case, we can choose to focus instead on properties of the prediction function. Motivated by some work on adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015) for deep neural networks, one could decide on its generalization property by analyzing the gradient of the prediction function on examples. Intuitively, if the gradient is small on typical points from the distribution or has a small Lipschitz constant, then a small change in the input should not incur a large change in the prediction. (c) Loss function with another reparametrization | 1703.04933#35 | 1703.04933#37 | 1703.04933 | [
"1609.03193"
]
|
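The dependence of input gradients on the chosen representation is easy to see with a toy rescaling preprocessing; the function, scales, and names below are our own illustration, not the paper's.

```python
import numpy as np

f = lambda x: np.sin(x)                                         # toy prediction function
grad = lambda fn, z, h=1e-6: (fn(z + h) - fn(z - h)) / (2 * h)  # central difference

x0 = 0.3
for c in (1.0, 10.0, 1000.0):
    f_xi = lambda u: f(c * u)     # xi(u) = c * u, an invertible "preprocessing"
    # same function of the data, but the input gradient scales with c
    print(c, grad(f, x0), grad(f_xi, x0 / c))
```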
1703.04933#37 | Sharp Minima Can Generalize For Deep Nets | Figure 5: A one-dimensional example of how much the geometry of the loss function depends on the parameter space chosen. The x-axis is the parameter value and the y-axis is the loss. The points correspond to a regular grid in the default parametrization. In the default parametrization, all minima have roughly the same curvature, but with a careful choice of reparametrization it is possible to turn a minimum significantly flatter or sharper than the others. Reparametrizations in this figure are of the form η = (|θ − θ̂|² + b)^a (θ − θ̂) where b ≥ 0, a > −1/2, and θ̂ is shown with the red vertical line. | 1703.04933#36 | 1703.04933#38 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#38 | Sharp Minima Can Generalize For Deep Nets | but the intuition with batch normalization will be similar. Weight normalization reparametrizes a nonzero weight w as w = s v / ‖v‖₂, with the new parameter being the scale s and the unnormalized weight v ≠ 0. But this infinitesimal reasoning is once again very dependent on the local geometry of the input space. For an invertible preprocessing ξ⁻¹, e.g. feature standardization, whitening or gaussianization (Chen & Gopinath, 2001), we will call f_ξ = f ∘ ξ the prediction function on the preprocessed input u = ξ⁻¹(x). We can reproduce the derivation in Section 5 to obtain (∂f_ξ/∂uᵀ)(u) = (∂f/∂xᵀ)(ξ(u)) (∇ξ)(u). Since we can observe that w is invariant to scaling of v, reasoning similar to Section 3 can be applied with the simpler transformations T′_α : v ↦ αv for α ≠ 0. Moreover, since this transformation is a simpler isotropic scaling, the conclusions that we can draw can actually be more powerful with respect to v: | 1703.04933#37 | 1703.04933#39 | 1703.04933 | [
"1609.03193"
]
|
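The invariance that drives this argument can be checked in a few lines; the sketch below (our own, restricted to positive scalings so the sign of w is preserved) confirms that the weight-normalized parameter w = s v / ‖v‖₂ is unchanged when v is rescaled:

```python
import numpy as np

rng = np.random.default_rng(3)
s, v = 2.5, rng.normal(size=7)

w = lambda s, v: s * v / np.linalg.norm(v)   # weight normalization
for a in (0.01, 1.0, 42.0):                  # positive rescalings of v
    assert np.allclose(w(s, v), w(s, a * v))
```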
1703.04933#39 | Sharp Minima Can Generalize For Deep Nets | As we can alter significantly the relative magnitude of the gradient at each point, analyzing the amplitude of the gradient of the prediction function might prove problematic if the choice of the input space has not been explained beforehand. This remark applies in applications involving images, sound or other signals with invariances (Larsen et al., 2015). For example, Theis et al. (2016) show for images how a small drift of one to four pixels can incur a large difference in terms of L2 norm. # 6 Discussion It has been observed empirically that minima found by standard deep learning algorithms that generalize well tend to be flatter than found minima that did not generalize | 1703.04933#38 | 1703.04933#40 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#40 | Sharp Minima Can Generalize For Deep Nets | well (Chaudhari et al., 2017; Keskar et al., 2017). However, when following several definitions of flatness, we have shown that the conclusion that flat minima should generalize better than sharp ones cannot be applied as is without further context. Previously used definitions fail to account for the complex geometry of some commonly used deep architectures. In particular, the non-identifiability of the model induced by symmetries allows one to alter the flatness of a minimum without affecting the function it represents. Additionally, the whole geometry of the error surface with respect to the parameters can be changed arbitrarily under different parametrizations. In the spirit of Swirszcz et al. (2016), our work indicates that more care is needed to define flatness to avoid degeneracies of the geometry of the model under study. Also, such a concept can not be divorced from the particular parametrization of the model or input space. # Acknowledgements The authors would like to thank Grzegorz Świrszcz for an insightful discussion of the paper, Harm De Vries, Yann Dauphin, Jascha Sohl-Dickstein and César Laurent for useful discussions about optimization, Danilo Rezende for explaining universal approximation using normalizing flows and Kyle Kastner, Adriana Romero, Junyoung Chung, Nicolas Ballas, Aaron Courville, George Dahl, Yaroslav Ganin, Prajit Ramachandran, Çağlar Gülçehre, Ahmed Touati and the ICML reviewers for useful feedback. # References Roweis, S. (eds.), Advances in Neural Information Processing Systems, volume 20, pp. 161–168. NIPS Foundation (http://books.nips.cc), 2008. URL http://leon.bottou.org/papers/bottou-bousquet-2008. | 1703.04933#39 | 1703.04933#41 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#41 | Sharp Minima Can Generalize For Deep Nets | Bottou, Léon and LeCun, Yann. On-line learning for very large datasets. Applied Stochastic Models in Business and Industry, 21(2):137–151, 2005. URL http://leon.bottou.org/papers/bottou-lecun-2004a. Bottou, Léon, Curtis, Frank E, and Nocedal, Jorge. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016. Bousquet, Olivier and Elisseeff, André. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002. Chan, William, Jaitly, Navdeep, Le, Quoc V., and Vinyals, Oriol. | 1703.04933#40 | 1703.04933#42 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#42 | Sharp Minima Can Generalize For Deep Nets | Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pp. 4960–4964. IEEE, 2016. ISBN 978-1-4799-9988-0. doi: 10.1109/ICASSP.2016.7472621. URL http://dx.doi.org/10.1109/ICASSP.2016.7472621. | 1703.04933#41 | 1703.04933#43 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#43 | Sharp Minima Can Generalize For Deep Nets | Chaudhari, Pratik, Choromanska, Anna, Soatto, Stefano, LeCun, Yann, Baldassi, Carlo, Borgs, Christian, Chayes, Jennifer, Sagun, Levent, and Zecchina, Riccardo. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR'2017, arXiv:1611.01838, 2017. Chen, Scott Saobing and Gopinath, Ramesh A. Gaussianization. In Leen, T. K., Dietterich, T. G., and Tresp, V. (eds.), Advances in Neural Information Processing Systems 13, pp. 423–429. MIT Press, 2001. URL http://papers.nips.cc/paper/1856-gaussianization.pdf. | 1703.04933#42 | 1703.04933#44 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#44 | Sharp Minima Can Generalize For Deep Nets | Amari, Shun-Ichi. Natural gradient works efficiently in learning. Neural Comput., 10(2), 1998. Arpit, Devansh, Zhou, Yingbo, Kota, Bhargava U, and Govindaraju, Venu. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint arXiv:1603.01431, 2016. Bach, Francis R. and Blei, David M. (eds.). Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, 2015. JMLR.org. URL http://jmlr.org/proceedings/papers/v37/. Badrinarayanan, Vijay, Mishra, Bamdev, and Cipolla, Roberto. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015. Bottou, Léon. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177–186. Springer, 2010. Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and Cho, Kyunghyun, van Merrienboer, Bart, Gülçehre, Çağlar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. | 1703.04933#43 | 1703.04933#45 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#45 | Sharp Minima Can Generalize For Deep Nets | Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Moschitti, Alessandro, Pang, Bo, and Daelemans, Walter (eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1724–1734. ACL, 2014. ISBN 978-1-937284-96-1. URL http://aclweb.org/anthology/D/D14/D14-1179.pdf. Choromanska, Anna, Henaff, Mikael, Mathieu, Michaël, Arous, Gérard Ben, and LeCun, Yann. | 1703.04933#44 | 1703.04933#46 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#46 | Sharp Minima Can Generalize For Deep Nets | The loss surfaces of multilayer networks. In AISTATS, 2015. Chorowski, Jan K, Bahdanau, Dzmitry, Serdyuk, Dmitriy, Cho, Kyunghyun, and Bengio, Yoshua. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577–585, 2015. Collobert, Ronan, Puhrsch, Christian, and Synnaeve, Gabriel. Wav2letter: an end-to-end convnet-based speech recognition system. arXiv preprint arXiv:1609.03193, 2016. Dauphin, Yann N., Pascanu, Razvan, Gülçehre, Çağlar, Cho, KyungHyun, Ganguli, Surya, and Bengio, Yoshua. | 1703.04933#45 | 1703.04933#47 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#47 | Sharp Minima Can Generalize For Deep Nets | Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. NIPS, 2014. Desjardins, Guillaume, Simonyan, Karen, Pascanu, Razvan, and Kavukcuoglu, Koray. Natural neural networks. NIPS, 2015. Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5–13. ACM, 1993. Hochreiter, Sepp and Schmidhuber, Jürgen. Flat minima. Neural Computation, 9(1):1–42, 1997. Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using Real NVP. In ICLR'2017, arXiv:1605.08803, 2016. Hyvärinen, Aapo and Pajunen, Petteri. | 1703.04933#46 | 1703.04933#48 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#48 | Sharp Minima Can Generalize For Deep Nets | Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999. Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. Im, Daniel Jiwoong, Tao, Michael, and Branson, Kristin. An empirical analysis of deep network loss surfaces. arXiv preprint arXiv:1612.04010, 2016. Gehring, Jonas, Auli, Michael, Grangier, David, and Dauphin, Yann N. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344, 2016. Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In Aistats, volume 15, pp. 275, 2011. Gonen, Alon and Shalev-Shwartz, Shai. Fast rates for empirical risk minimization of strict saddle problems. arXiv preprint arXiv:1701.04271, 2017. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Bach & Blei (2015), pp. 448–456. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html. Jarrett, Kevin, Kavukcuoglu, Koray, LeCun, Yann, et al. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146–2153. IEEE, 2009. | 1703.04933#47 | 1703.04933#49 | 1703.04933 | [
"1609.03193"
]
|
1703.04933#49 | Sharp Minima Can Generalize For Deep Nets | Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. ICML (3), 28:1319–1327, 2013. Keskar, Nitish Shirish, Mudigere, Dheevatsa, Nocedal, Jorge, Smelyanskiy, Mikhail, and Tang, Ping Tak Peter. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR'2017, arXiv:1609.04836, 2017. Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. In ICLR'2015, arXiv:1412.6572, 2015. Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645–6649. IEEE, 2013. Hannun, Awni Y., Case, Carl, Casper, Jared, Catanzaro, Bryan, Diamos, Greg, Elsen, Erich, Prenger, Ryan, Satheesh, Sanjeev, Sengupta, Shubho, Coates, Adam, and Ng, Andrew Y. Deep speech: Scaling up end-to-end speech recognition. CoRR, abs/1412.5567, 2014. URL http://arxiv.org/abs/1412.5567. Kingma, Diederik P, Salimans, Tim, Jozefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improved variational inference with inverse autoregressive | 1703.04933#48 | 1703.04933#50 | 1703.04933 | [
"1609.03193"
]
|