doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, nullable ⌀) | journal_ref (stringlengths 8–194, nullable ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.01540 | 12 | [8] B. Tanner and A. White. RL-Glue: Language-independent software for reinforcement-learning experiments. J. Mach. Learn. Res., 10:2133–2136, 2009.
[9] T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F. Sehnke, T. Rückstieß, and J. Schmidhuber. PyBrain. J. Mach. Learn. Res., 11:743–746, 2010.
[10] S. Abeyruwan. RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013.
[11] Christos Dimitrakakis, Guangliang Li, and Nikolaos Tziortziotis. The reinforcement learning competition 2014. AI Magazine, 35(3):61–65, 2014.
[12] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[13] Petr Baudiš and Jean-loup Gailly. Pachi: State of the art open source Go program. In Advances in Computer Games, pages 24–38. Springer, 2011. | 1606.01540#12 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 12 | Also relevant is prior work on reinforcement learning for language understanding, including learning from delayed reward signals by playing text-based games (Narasimhan et al., 2015; He et al., 2016), executing instructions for Windows help (Branavan et al., 2011), or understanding dialogues that give navigation directions (Vogel and Jurafsky, 2010).
Our goal is to integrate the SEQ2SEQ and reinforcement learning paradigms, drawing on the advantages of both. We are thus particularly inspired by recent work that attempts to merge these paradigms, including Wen et al. (2016), who train an end-to-end
task-oriented dialogue system that links input representations to slot-value pairs in a database, or Su et al. (2016), who combine reinforcement learning with neural generation on tasks with real users, showing that reinforcement learning improves dialogue performance.
# 3 Reinforcement Learning for Open-Domain Dialogue
In this section, we describe in detail the components of the proposed RL model. | 1606.01541#12 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 13 | [14] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[15] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
| 1606.01540#13 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 13 | # 3 Reinforcement Learning for Open-Domain Dialogue
In this section, we describe in detail the components of the proposed RL model.
The learning system consists of two agents. We use p to denote sentences generated from the first agent and q to denote sentences from the second. The two agents take turns talking with each other. A dialogue can be represented as an alternating sequence of sentences generated by the two agents: p1, q1, p2, q2, ..., pi, qi. We view the generated sentences as actions that are taken according to a policy defined by an encoder-decoder recurrent neural network language model. | 1606.01541#13 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 14 | The parameters of the network are optimized to maximize the expected future reward using policy search, as described in Section 4.3. Policy gradient methods are more appropriate for our scenario than Q-learning (Mnih et al., 2013), because we can initialize the encoder-decoder RNN using MLE parameters that already produce plausible responses, before changing the objective and tuning towards a policy that maximizes long-term reward. Q-learning, on the other hand, directly estimates the future expected reward of each action, which can differ from the MLE objective by orders of magnitude, thus making MLE parameters inappropriate for initialization. The components (states, actions, reward, etc.) of our sequential decision problem are summarized in the following sub-sections.
# 3.1 Action
An action a is the dialogue utterance to generate. The action space is infinite since arbitrary-length sequences can be generated.
# 3.2 State
A state is denoted by the previous two dialogue turns [pi, qi]. The dialogue history is further transformed to a vector representation by feeding the concatenation of pi and qi into an LSTM encoder model as
described in Li et al. (2016a).
# 3.3 Policy | 1606.01541#14 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
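The following is a minimal PyTorch sketch of the state encoding described in Section 3.2 of the chunk above: the two previous turns are concatenated and passed through an LSTM encoder. The vocabulary size, dimensions, and module layout are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HIDDEN_DIM = 10000, 128, 256   # illustrative sizes

class StateEncoder(nn.Module):
    """Encode the state [p_i, q_i] by concatenating the two turns and running an LSTM."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.lstm = nn.LSTM(EMB_DIM, HIDDEN_DIM, batch_first=True)

    def forward(self, p_i, q_i):
        tokens = torch.cat([p_i, q_i], dim=1)      # (batch, len_p + len_q) token ids
        emb = self.embed(tokens)                   # (batch, len, EMB_DIM)
        _, (h_n, _) = self.lstm(emb)               # final hidden state summarizes the history
        return h_n[-1]                             # (batch, HIDDEN_DIM) state vector

encoder = StateEncoder()
p = torch.randint(0, VOCAB_SIZE, (2, 6))           # toy token ids for turn p_i
q = torch.randint(0, VOCAB_SIZE, (2, 5))           # toy token ids for turn q_i
state = encoder(p, q)
print(state.shape)                                 # torch.Size([2, 256])
```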
1606.01541 | 15 | described in Li et al. (2016a).
# 3.3 Policy
A policy takes the form of an LSTM encoder-decoder (i.e., pRL(pi+1|pi, qi)) and is defined by its parameters. Note that we use a stochastic representation of the policy (a probability distribution over actions given states). A deterministic policy would result in a discontinuous objective that is difficult to optimize using gradient-based methods.
# 3.4 Reward
r denotes the reward obtained for each action. In this subsection, we discuss major factors that contribute to the success of a dialogue and describe how approximations to these factors can be operationalized in computable reward functions. | 1606.01541#15 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 16 | Ease of answering A turn generated by a machine should be easy to respond to. This aspect of a turn is related to its forward-looking function: the constraints a turn places on the next turn (Schegloff and Sacks, 1973; Allwood et al., 1992). We propose to measure the ease of answering a generated turn by using the negative log likelihood of responding to that utterance with a dull response. We manually constructed a list of dull responses S consisting of 8 turns such as "I don't know what you are talking about", "I have no idea", etc., that we and others have found occur very frequently in SEQ2SEQ models of conversations. The reward function is given as follows:
r1 = -(1/N_S) Σ_{s∈S} (1/N_s) log p_seq2seq(s|a)    (1)
where N_S denotes the cardinality of S and N_s denotes the number of tokens in the dull response s. Although of course there are more ways to generate dull responses than the list can cover, many of these responses are likely to fall into similar regions in the vector space computed by the model. A system less likely to generate utterances in the list is thus also less likely to generate other dull responses. | 1606.01541#16 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
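Below is a small Python sketch of the ease-of-answering reward r1 in Eq. (1) above. The `seq2seq_log_prob` scorer is a dummy placeholder for the pretrained SEQ2SEQ likelihood, and the dull-response list is abbreviated.

```python
# Dummy stand-in for log p_seq2seq(s | a): replace with a pretrained model's score.
def seq2seq_log_prob(s_tokens, a_tokens):
    return -2.0 * len(s_tokens)

DULL_RESPONSES = ["i don't know what you are talking about", "i have no idea"]  # abbreviated list

def ease_of_answering_reward(action_tokens, dull_list=DULL_RESPONSES,
                             log_prob=seq2seq_log_prob):
    # r1 = -(1/N_S) * sum_{s in S} (1/N_s) * log p_seq2seq(s | a)
    total = 0.0
    for s in dull_list:
        s_tokens = s.split()
        total += log_prob(s_tokens, action_tokens) / len(s_tokens)
    return -total / len(dull_list)

print(ease_of_answering_reward("where did you grow up ?".split()))  # 2.0 with the dummy scorer
```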
1606.01541 | 17 | represents the likelihood output by SEQ2SEQ models. It is worth noting that pseq2seq is different from the stochastic policy function pRL(pi+1|pi, qi), since the former is learned based on the MLE objective of the SEQ2SEQ model while the latter is the policy optimized for long-term future
reward in the RL setting. r1 is further scaled by the length of target S.
Information Flow We want each agent to contribute new information at each turn to keep the dialogue moving and avoid repetitive sequences. We therefore propose penalizing semantic similarity between consecutive turns from the same agent. Let h_{p_i} and h_{p_{i+1}} denote representations obtained from the encoder for two consecutive turns p_i and p_{i+1}. The reward is given by the negative log of the cosine similarity between them:
r2 = -log cos(h_{p_i}, h_{p_{i+1}}) = -log ( (h_{p_i} · h_{p_{i+1}}) / (||h_{p_i}|| ||h_{p_{i+1}}||) )    (2)
Semantic Coherence We also need to measure the adequacy of responses to avoid situations in which the generated replies are highly rewarded but are ungrammatical or not coherent. We therefore consider the mutual information between the action a and previous turns in the history to ensure the generated responses are coherent and appropriate: | 1606.01541#17 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
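A minimal NumPy sketch of the information-flow reward r2 in Eq. (2) above, with random vectors standing in for the encoder representations h_{p_i} and h_{p_{i+1}}.

```python
import numpy as np

def information_flow_reward(h_prev, h_next, eps=1e-8):
    # r2 = -log cos(h_{p_i}, h_{p_{i+1}}); similar consecutive turns give a reward near 0.
    cos = np.dot(h_prev, h_next) / (np.linalg.norm(h_prev) * np.linalg.norm(h_next) + eps)
    cos = np.clip(cos, eps, 1.0)   # crude guard against log(0) or negative similarity
    return -np.log(cos)

h1 = np.random.randn(256)                      # stand-in encoder state for turn p_i
h2 = 0.9 * h1 + 0.1 * np.random.randn(256)     # a near-duplicate turn p_{i+1}
print(information_flow_reward(h1, h2))         # close to 0, i.e. little reward for repetition
```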
1606.01541 | 18 | r3 = (1/N_a) log p_seq2seq(a|qi, pi) + (1/N_{qi}) log p_backward_seq2seq(qi|a)    (3)
p_seq2seq(a|pi, qi) denotes the probability of generating response a given the previous dialogue utterances [pi, qi]. p_backward_seq2seq(qi|a) denotes the backward probability of generating the previous dialogue utterance qi based on response a. p_backward_seq2seq is trained in a similar way as standard SEQ2SEQ models with sources and targets swapped. Again, to control the influence of target length, both log p_seq2seq(a|qi, pi) and log p_backward_seq2seq(qi|a) are scaled by the length of targets. The final reward for action a is a weighted sum of
the rewards discussed above:
r(a, [pi, qi]) = λ1r1 + λ2r2 + λ3r3 (4)
where λ1 + λ2 + λ3 = 1. We set λ1 = 0.25, λ2 = 0.25 and λ3 = 0.5. A reward is observed after the agent reaches the end of each sentence.
# 4 Simulation | 1606.01541#18 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
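A short sketch of the semantic-coherence term r3 (Eq. 3) and the weighted combination in Eq. (4) above, with λ1 = λ2 = 0.25 and λ3 = 0.5 as reported. The forward and backward scorers are dummy stand-ins for the pretrained SEQ2SEQ models.

```python
def semantic_coherence_reward(a_tokens, p_tokens, q_tokens, fwd_log_prob, bwd_log_prob):
    # r3 = (1/N_a) log p_seq2seq(a | q_i, p_i) + (1/N_{q_i}) log p_backward(q_i | a)
    return (fwd_log_prob(a_tokens, p_tokens, q_tokens) / len(a_tokens)
            + bwd_log_prob(q_tokens, a_tokens) / len(q_tokens))

def total_reward(r1, r2, r3, lambdas=(0.25, 0.25, 0.5)):
    # r(a, [p_i, q_i]) = lambda1*r1 + lambda2*r2 + lambda3*r3, weights summing to 1.
    l1, l2, l3 = lambdas
    assert abs(l1 + l2 + l3 - 1.0) < 1e-9
    return l1 * r1 + l2 * r2 + l3 * r3

# Dummy scorers standing in for the forward and backward SEQ2SEQ models.
fwd = lambda a, p, q: -1.5 * len(a)
bwd = lambda q, a: -2.0 * len(q)
r3 = semantic_coherence_reward("i am sixteen".split(), "hello".split(),
                               "how old are you ?".split(), fwd, bwd)
print(total_reward(r1=2.0, r2=0.3, r3=r3))
```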
1606.01541 | 19 | # 4 Simulation
The central idea behind our approach is to simulate the process of two virtual agents taking turns talking with each other, through which we can explore the
state-action space and learn a policy pRL(pi+1|pi, qi) that leads to the optimal expected reward. We adopt an AlphaGo-style strategy (Silver et al., 2016) by initializing the RL system using a general response generation policy which is learned from a fully supervised setting.
# 4.1 Supervised Learning
For the first stage of training, we build on prior work of predicting a generated target sequence given dialogue history using the supervised SEQ2SEQ model (Vinyals and Le, 2015). Results from supervised models will be later used for initialization.
We trained a SEQ2SEQ model with attention (Bahdanau et al., 2015) on the OpenSubtitles dataset, which consists of roughly 80 million source-target pairs. We treated each turn in the dataset as a target and the concatenation of the two previous sentences as source inputs.
# 4.2 Mutual Information | 1606.01541#19 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
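A small illustration of the source/target construction described in Section 4.1 above: each turn becomes a target and the concatenation of the two previous turns becomes the source. The toy dialogue is invented.

```python
def make_pairs(turns):
    # target = current turn, source = concatenation of the two previous turns
    pairs = []
    for i in range(2, len(turns)):
        pairs.append((turns[i - 2] + " " + turns[i - 1], turns[i]))
    return pairs

dialogue = ["how old are you ?", "i'm 16 .", "16 ?", "why are you asking ?"]
for src, tgt in make_pairs(dialogue):
    print(src, "->", tgt)
```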
1606.01541 | 20 | # 4.2 Mutual Information
Samples from SEQ2SEQ models are often dull and generic, e.g., "i don't know" (Li et al., 2016a). We thus do not want to initialize the policy model using the pre-trained SEQ2SEQ models because this will lead to a lack of diversity in the RL models' experiences. Li et al. (2016a) showed that modeling mutual information between sources and targets will significantly decrease the chance of generating dull responses and improve general response quality. We now show how we can obtain an encoder-decoder model which generates maximum mutual information responses.
As illustrated in Li et al. (2016a), direct decoding from Eq 3 is infeasible since the second term requires the target sentence to be completely generated. Inspired by recent work on sequence level learning (Ranzato et al., 2015), we treat the problem of generating a maximum mutual information response as a reinforcement learning problem in which a reward of mutual information value is observed when the model arrives at the end of a sequence. | 1606.01541#20 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 21 | Similar to Ranzato et al. (2015), we use policy gradient methods (Sutton et al., 1999; Williams, 1992) for optimization. We initialize the policy model pRL using a pre-trained pSEQ2SEQ(a|pi, qi) model. Given an input source [pi, qi], we generate a candidate list A = {â | â ∼ pRL}. For each generated candidate â, we will obtain the mutual information score m(â, [pi, qi]) from the pre-trained pSEQ2SEQ(a|pi, qi) and p_backward_SEQ2SEQ(qi|a). This mutual information score will be used as a reward and back-propagated to the encoder-decoder model, tailoring it to generate sequences with higher rewards. We refer the readers to Zaremba and Sutskever (2015) and Williams (1992) for details. The expected reward for a sequence is given by:
J(θ) = E[m(â, [pi, qi])]    (5)
The gradient is estimated using the likelihood ratio trick:
∇J(θ) = m(â, [pi, qi]) ∇ log pRL(â|[pi, qi])    (6) | 1606.01541#21 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
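A toy PyTorch sketch of the likelihood-ratio (REINFORCE) update behind Eqs. (5) and (6) above. The single-step categorical policy and the dummy mutual-information reward are stand-ins for the full encoder-decoder and its scoring models.

```python
import torch
import torch.nn as nn

policy = nn.Linear(8, 5)                             # toy policy over 5 candidate "responses"
optimizer = torch.optim.SGD(policy.parameters(), lr=0.1)

def mutual_info_reward(action):
    return 1.0 if action.item() == 3 else -0.1       # dummy m(a_hat, [p_i, q_i])

state = torch.randn(1, 8)                            # stand-in for the encoded [p_i, q_i]
dist = torch.distributions.Categorical(logits=policy(state))
a_hat = dist.sample()                                # a_hat ~ p_RL
reward = mutual_info_reward(a_hat)

# Likelihood-ratio trick: grad J = m(a_hat) * grad log p_RL(a_hat | state),
# implemented by minimizing -m(a_hat) * log p_RL(a_hat | state).
loss = -(reward * dist.log_prob(a_hat)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```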
1606.01541 | 22 | ∇J(θ) = m(â, [pi, qi]) ∇ log pRL(â|[pi, qi])    (6)
We update the parameters in the encoder-decoder model using stochastic gradient descent. A curriculum learning strategy is adopted (Bengio et al., 2009) as in Ranzato et al. (2015) such that, for every sequence of length T, we use the MLE loss for the first L tokens and the reinforcement algorithm for the remaining T - L tokens. We gradually anneal the value of L to zero. A baseline strategy is employed to decrease the learning variance: an additional neural model takes as inputs the generated target and the initial source and outputs a baseline value, similar to the strategy adopted by Zaremba and Sutskever (2015). The final gradient is thus:
∇J(θ) = ∇ log pRL(â|[pi, qi]) [m(â, [pi, qi]) - b]    (7)
# 4.3 Dialogue Simulation between Two Agents | 1606.01541#22 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
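A hedged sketch of the curriculum described above: MLE loss on the first L tokens and a reward-weighted, baseline-subtracted term, as in Eq. (7), on the remaining T - L tokens. The per-token log-probabilities come from toy logits rather than a real decoder.

```python
import torch

def curriculum_loss(token_log_probs, L, reward, baseline):
    # First L tokens: standard MLE loss; remaining T - L tokens: reward-weighted
    # term with the baseline subtracted, mirroring Eq. (7).
    mle_part = -token_log_probs[:L].sum()
    rl_part = -(reward - baseline) * token_log_probs[L:].sum()
    return mle_part + rl_part

logits = torch.randn(10, 50, requires_grad=True)     # toy decoder outputs for T = 10 steps
token_log_probs = torch.log_softmax(logits, dim=-1).max(dim=-1).values
loss = curriculum_loss(token_log_probs, L=4, reward=0.8, baseline=0.5)
loss.backward()                                      # gradients flow back into the toy logits
print(loss.item())
```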
1606.01541 | 23 | # 4.3 Dialogue Simulation between Two Agents
We simulate conversations between the two virtual agents and have them take turns talking with each other. The simulation proceeds as follows: at the initial step, a message from the training set is fed to the first agent. The agent encodes the input message to a vector representation and starts decoding to generate a response output. Combining the immediate output from the first agent with the dialogue history, the second agent updates the state by encoding the dialogue history into a representation and uses the decoder RNN to generate responses, which are subsequently fed back to the first agent, and the process is repeated.
[Figure: starting from an input message such as "How old are you?" (answered with "I'm 16, why are you asking?"), the two agents alternately encode the dialogue history and decode the next turn for n simulated turns.]
Figure 1: Dialogue simulation between the two agents. | 1606.01541#23 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
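A schematic of the two-agent simulation loop in Section 4.3 above. The `generate` function is a placeholder for decoding from the trained policy; in the paper each agent conditions on the two most recent turns.

```python
def generate(history):
    # Placeholder for decoding a response from the trained encoder-decoder policy.
    return "i'm 16 . why are you asking ?"

def simulate_dialogue(initial_message, max_turns=5):
    turns = [initial_message]
    for _ in range(max_turns):
        state = turns[-2:]            # each agent conditions on the two most recent turns
        turns.append(generate(state))
    return turns

for turn in simulate_dialogue("how old are you ?"):
    print(turn)
```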
1606.01541 | 24 | Figure 1: Dialogue simulation between the two agents.
Optimization We initialize the policy model pRL with parameters from the mutual information model described in the previous subsection. We then use policy gradient methods to find parameters that lead to a larger expected reward. The objective to maximize is the expected future reward:
J_RL(θ) = E_{pRL(a_{1:T})} [ Σ_{i=1}^{T} R(a_i, [p_i, q_i]) ]    (8)
where R(ai, [pi, qi]) denotes the reward resulting from action ai. We use the likelihood ratio trick (Williams, 1992; Glynn, 1990; Aleksandrov et al., 1968) for gradient updates:
generation systems using both human judgments and two automatic metrics: conversation length (number of turns in the entire session) and diversity.
# 5.1 Dataset
The dialogue simulation requires high-quality initial inputs fed to the agent. For example, an initial input of "why ?" is undesirable since it is unclear how the dialogue could proceed. We take a subset of 10 million messages from the OpenSubtitles dataset and extract 0.8 million sequences with the lowest likelihood of generating the response "i don't know what you are talking about" to ensure initial inputs are easy to respond to. | 1606.01541#24 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
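A sketch of the dataset-filtering step described in Section 5.1 above: rank messages by how likely they are to be answered with the canonical dull response and keep the least likely ones (roughly 0.8M of 10M in the paper). The scoring function here is a placeholder, not the paper's pretrained model, and the cutoff fraction is illustrative.

```python
DULL = "i don't know what you are talking about"

def dull_likelihood(message):
    # Placeholder for log p_seq2seq(DULL | message) under a pretrained model.
    return -0.1 * len(message.split())

def filter_easy_to_answer(messages, keep_fraction=0.08):
    # Keep the messages least likely to provoke the dull response
    # (roughly 0.8M out of the 10M-message subset in the paper's setup).
    ranked = sorted(messages, key=dull_likelihood)    # lowest likelihood first
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

messages = ["why ?", "what are you doing here tonight ?", "tell me about your family ."]
print(filter_easy_to_answer(messages, keep_fraction=0.5))
```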
1606.01541 | 25 | ∇J_RL(θ) ≈ Σ_{i=1}^{T} ∇ log pRL(a_i|p_i, q_i) Σ_{i=1}^{T} R(a_i, [p_i, q_i])    (9)
We refer readers to Williams (1992) and Glynn (1990) for more details.
# 4.4 Curriculum Learning
A curriculum learning strategy is again employed in which we begin by simulating the dialogue for 2 turns, and gradually increase the number of simulated turns. We generate 5 turns at most, as the number of candidates to examine grows exponentially in the size of the candidate list. Five candidate responses are generated at each step of the simulation.
# 5.2 Automatic Evaluation
Evaluating dialogue systems is difficult. Metrics such as BLEU (Papineni et al., 2002) and perplexity have been widely used for dialogue quality evaluation (Li et al., 2016a; Vinyals and Le, 2015; Sordoni et al., 2015), but it is widely debated how well these automatic metrics are correlated with true response quality (Liu et al., 2016; Galley et al., 2015). Since the goal of the proposed system is not to predict the highest probability response, but rather the long-term success of the dialogue, we do not employ BLEU or perplexity for evaluation.2
# 5 Experimental Results
In this section, we describe experimental results along with qualitative analysis. We evaluate dialogue | 1606.01541#25 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 26 | # 5 Experimental Results
In this section, we describe experimental results along with qualitative analysis. We evaluate dialogue
2We found the RL model performs worse on BLEU score. On a random sample of 2,500 conversational pairs, single reference BLEU scores for RL models, mutual information models and vanilla SEQ2SEQ models are respectively 1.28, 1.44 and 1.17. BLEU is highly correlated with perplexity in generation tasks.
Model                 # of simulated turns
SEQ2SEQ               2.68
mutual information    3.40
RL                    4.48
Table 2: The average number of simulated turns from the standard SEQ2SEQ model, the mutual information model and the proposed RL model.
Length of the dialogue The first metric we propose is the length of the simulated dialogue. We say a dialogue ends when one of the agents starts generating dull responses such as "i don't know"3 or two consecutive utterances from the same user are highly overlapping.4 | 1606.01541#26 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
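A sketch of the stopping rule used for the dialogue-length metric above: a simulated dialogue ends at a dull response or when two consecutive turns from the same agent overlap by more than 80 percent of their words. The dull-phrase list and the overlap measure here are simplified assumptions; the 80 percent threshold follows footnote 4.

```python
DULL_PHRASES = {"i don't know", "i don't know what you are talking about", "i have no idea"}

def word_overlap(u1, u2):
    w1, w2 = set(u1.split()), set(u2.split())
    return len(w1 & w2) / max(1, min(len(w1), len(w2)))

def dialogue_length(turns):
    # Turns alternate between agents, so turn i and turn i - 2 come from the same agent.
    for i, turn in enumerate(turns):
        if turn.strip().lower() in DULL_PHRASES:
            return i
        if i >= 2 and word_overlap(turns[i - 2], turn) > 0.8:
            return i
    return len(turns)

sim = ["how old are you ?", "i'm 16 .", "16 ?", "i have no idea"]
print(dialogue_length(sim))   # 3: the dialogue ends at the dull response
```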
1606.01541 | 27 | The test set consists of 1,000 input messages. To reduce the risk of circular dialogues, we limit the number of simulated turns to be less than 8. Results are shown in Table 2. As can be seen, using mutual information leads to more sustained conversations between the two agents. The proposed RL model is first trained based on the mutual information objective and thus benefits from it in addition to the RL model. We observe that the RL model with dialogue simulation achieves the best evaluation score.
Diversity We report the degree of diversity by calculating the number of distinct unigrams and bigrams in generated responses. The value is scaled by the total number of generated tokens to avoid favoring long sentences, as described in Li et al. (2016a). The resulting metric is thus a type-token ratio for unigrams and bigrams. | 1606.01541#27 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
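A small sketch of the diversity metric described above: the number of distinct unigrams or bigrams divided by the total number of generated tokens.

```python
def distinct_n(responses, n):
    # Distinct n-grams divided by the total number of generated tokens.
    ngrams, total_tokens = set(), 0
    for response in responses:
        tokens = response.split()
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(1, total_tokens)

outputs = ["i don't know .", "i don't know .", "where are you going tonight ?"]
print(distinct_n(outputs, 1), distinct_n(outputs, 2))   # unigram and bigram diversity
```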
1606.01541 | 28 | For both the standard SEQ2SEQ model and the proposed RL model, we use beam search with a beam size of 10 to generate a response to a given input message. For the mutual information model, we first generate n-best lists using pSEQ2SEQ(t|s) and then linearly re-rank them using pSEQ2SEQ(s|t). Results are presented in Table 4. We find that the proposed RL model generates more diverse outputs when compared against both the vanilla SEQ2SEQ model and the mutual information model.
Since the RL model is trained based on future reward rather than MLE, it is not surprising that the RL-based models achieve lower BLEU scores.
3We use a simple rule-matching method, with a list of 8 phrases that count as dull responses. Although this can lead to both false positives and false negatives, it works pretty well in practice.
4Two utterances are considered to be repetitive if they share more than 80 percent of their words.
Model                 Unigram   Bigram
SEQ2SEQ               0.0062    0.015
mutual information    0.011     0.031
RL                    0.017     0.041
Table 4: Diversity scores (type-token ratios) for the standard SEQ2SEQ model, mutual information model and the proposed RL model. | 1606.01541#28 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 29 | Table 4: Diversity scores (type-token ratios) for the standard SEQ2SEQ model, mutual information model and the proposed RL model.
Human Evaluation We explore three settings for human evaluation: the first setting is similar to what was described in Li et al. (2016a), where we employ crowdsourced judges to evaluate a random sample of 500 items. We present both an input message and the generated outputs to 3 judges and ask them to decide which of the two outputs is better (denoted as single-turn general quality). Ties are permitted. Identical strings are assigned the same score. We measure the improvement achieved by the RL model over the mutual information model by the mean difference in scores between the models.
For the second setting, judges are again presented with input messages and system outputs, but are asked to decide which of the two outputs is easier to respond to (denoted as single-turn ease to answer). Again we evaluate a random sample of 500 items, each being assigned to 3 judges.
For the third setting, judges are presented with simulated conversations between the two agents (denoted as multi-turn general quality). Each conversation consists of 5 turns. We evaluate 200 simulated conversations, each being assigned to 3 judges, who are asked to decide which of the simulated conversations is of higher quality. | 1606.01541#29 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 30 | Setting               RL-win   RL-lose   Tie
single-turn general quality    0.40     0.36      0.24
single-turn ease to answer     0.52     0.23      0.25
multi-turn general quality     0.72     0.12      0.16
Table 5: RL gains over the mutual information system based on pairwise human judgments.
Results for human evaluation are shown in Table 5. The proposed RL system does not introduce a significant boost in single-turn response quality (winning 40 percent of the time and losing 36 percent of the time).
Input Message How old are you? What's your full name? I don't want to go home tonight. Do you have any feelings for me? How much time do you have here? Not long enough. Sorry, sir. Shall we get started? Do you play football? We'd have to talk to him. How come you never say it? Table 3: Sampled responses generated from the mutual information models and the proposed RL model. | 1606.01541#30 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 31 | This is in line with our expectations, as the RL model is not optimized to predict the next utterance, but rather to increase long-term reward. The RL system produces responses that are significantly easier to answer than does the mutual information system, as demonstrated by the single-turn ease to answer setting (winning 52 percent of the time and losing 23 percent of the time), and also significantly higher quality multi-turn dialogues, as demonstrated by the multi-turn general quality setting (winning 72 percent of the time).
A: What's your name?
B: Daniel.
A: How old are you?
B: Twelve. What's your name?
A: Daniel.
B: How old are you?
A: Twelve. What's your name?
B: Daniel.
A: How old are you?
B: ...
Table 6: A simulated dialogue with a cycle longer than one.
Qualitative Analysis and Discussion We show a random sample of generated responses in Table 3 and simulated conversations in Table 1 at the beginning of the paper. From Table 3, we can see that the RL-based agent indeed generates more interactive responses than the other baselines. We also find that the RL model has a tendency to end a sentence with another question and hand the conversation over to the user. From Table 1, we observe that the RL model manages to produce more interactive and sustained conversations than the mutual information model. | 1606.01541#31 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 32 | some aspects of what makes a good conversation, ideally the system would instead receive real rewards from humans. Another problem with the current model is that we can only afford to explore a very small number of candidates and simulated turns, since the number of cases to consider grows exponentially.
# 6 Conclusion
During error analysis, we found that although we penalize repetitive utterances in consecutive turns, the dialogue sometimes enters a cycle with length greater than one, as shown in Table 6. This can be ascribed to the limited amount of conversational history we consider. Another issue observed is that the model sometimes starts a less relevant topic during the conversation. There is a tradeoff between relevance and less repetitiveness, as manifested in the reward function we define in Eq 4.
The fundamental problem, of course, is that the manually defined reward function can't possibly cover the crucial aspects that define an ideal conversation. While the heuristic rewards that we defined are amenable to automatic calculation, and do capture | 1606.01541#32 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 33 | We introduce a reinforcement learning framework for neural response generation by simulating dialogues between two agents, integrating the strengths of neural SEQ2SEQ systems and reinforcement learning for dialogue. Like earlier neural SEQ2SEQ models, our framework captures the compositional models of the meaning of a dialogue turn and generates semantically appropriate responses. Like reinforcement learning dialogue systems, our framework is able to generate utterances that optimize future reward, successfully capturing global properties of a good conversation. Despite the fact that our model uses very simple, operational heuristics for capturing these global properties, the framework generates more diverse, interactive responses that foster a more sustained conversation.
# Acknowledgement | 1606.01541#33 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 34 | # Acknowledgement
We would like to thank Chris Brockett, Bill Dolan and other members of the NLP group at Microsoft Research for insightful comments and suggestions. We also want to thank Kelvin Guu, Percy Liang, Chris Manning, Sida Wang, Ziang Xie and other members of the Stanford NLP group for useful discussions. Jiwei Li is supported by the Facebook Fellowship, which we gratefully acknowledge. This work is partially supported by the NSF via Awards IIS-1514268 and IIS-1464128, and by the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DARPA, or Facebook.
# References
V. M. Aleksandrov, V. I. Sysoyev, and V. V. Shemeneva. 1968. Stochastic optimization. Engineering Cybernetics, 5:11–16.
Jens Allwood, Joakim Nivre, and Elisabeth Ahls´en. 1992. On the semantics and pragmatics of linguistic feedback. Journal of Semantics, 9:1â26. | 1606.01541#34 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 35 | Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.
Rafael E Banchs and Haizhou Li. 2012. IRIS: a chat-oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, pages 37–42.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM.
SRK Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a monte-carlo framework. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies-Volume 1, pages 268â277. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. of ACL- IJCNLP, pages 445â450, Beijing, China, July. | 1606.01541#35 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 36 | Milica GaËsic, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pir- ros Tsiakoulis, and Steve Young. 2013a. Pomdp-based
dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL.
Milica Gasic, Catherine Breslin, Mike Henderson, Dongkyu Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013b. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Proceedings of ICASSP 2013, pages 8367–8371. IEEE.
Milica GaËsic, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. 2014. Incremental on- line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings on InterSpeech. Peter W Glynn. 1990. Likelihood ratio gradient estima- tion for stochastic systems. Communications of the ACM, 33(10):75â84. | 1606.01541#36 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 37 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep rein- forcement learning with a natural language action space. In Proceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 1621â1630, Berlin, Germany, August. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the markov In Automatic Speech decision process framework. Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 72â79. IEEE.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL-HLT. | 1606.01541#37 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 38 | Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994â1003, Berlin, Germany, August.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023. Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. LSTM based conversation models. arXiv preprint arXiv:1603.09457.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941. | 1606.01541#38 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 39 | Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355â361. Springer. Alice H Oh and Alexander I Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 27â32.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009. Are we there yet? Research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3â13. Springer. MarcâAurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732. | 1606.01541#39 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 40 | Adwait Ratnaparkhi. 2002. Trainable approaches to sur- face natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435â455.
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of EMNLP 2011, pages 583–593.
Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126.
Emanuel A. Schegloff and Harvey Sacks. 1973. Opening up closings. Semiotica, 8(4):289–327.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierar- chical neural network models. In Proceedings of AAAI, February.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural In responding machine for short-text conversation. Proceedings of ACL-IJCNLP, pages 1577â1586. | 1606.01541#40 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 41 | David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrit- twieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489.
Satinder P Singh, Michael J Kearns, Diane J Litman, and Marilyn A Walker. 1999. Reinforcement learning for spoken dialogue systems. In NIPS, pages 956–962. Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. 2000. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pages 645–651.
Satinder Singh, Diane Litman, Michael Kearns, and Mari- lyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the nj- fun system. Journal of Artiï¬cial Intelligence Research, pages 105â133. | 1606.01541#41 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 42 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversa- tional responses. In Proceedings of NAACL-HLT. Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas- Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. arxiv.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. 1999. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. In Proceedings of ICML Deep Learning Workshop.
Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of ACL 2010, pages 806â814. | 1606.01541#42 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 43 | Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of ACL 2010, pages 806â814.
Marilyn A Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In Proceedings of INTERSPEECH 2003.
Marilyn A. Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, pages 387–416.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP, pages 1711–1721, Lisbon, Portugal.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. | 1606.01541#43 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01541 | 44 | Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256.
Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into LSTM with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.
Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. In NIPS workshop on Machine Learning for Spoken Language Understanding and Interaction. Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.
Steve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken di- alog systems: A review. Proceedings of the IEEE, 101(5):1160â1179. | 1606.01541#44 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01305 | 0 | arXiv:1606.01305v4 [cs.NE] 22 Sep 2017
Under review as a conference paper at ICLR 2017
# ZONEOUT: REGULARIZING RNNS BY RANDOMLY PRESERVING HIDDEN ACTIVATIONS
David Krueger*, Tegan Maharaj*, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio (CIFAR Senior Fellow), Aaron Courville (CIFAR Fellow), Christopher Pal. MILA, Université de Montréal ([email protected]) and École Polytechnique de Montréal ([email protected]). * Equal contributions.
# ABSTRACT | 1606.01305#0 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 1 | # ABSTRACT
We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization (Cooijmans et al., 2016) yields state-of-the-art results on permuted sequential MNIST.
# INTRODUCTION
Regularizing neural nets can significantly improve performance, as indicated by the widespread use of early stopping, and success of regularization methods such as dropout and its recurrent variants (Hinton et al., 2012; Srivastava et al., 2014; Zaremba et al., 2014; Gal, 2015). In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called zoneout. | 1606.01305#1 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 2 | RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs' robustness to perturbations in the hidden state in order to regularize transition dynamics.
Like dropout, zoneout injects noise during training. But instead of setting some units' activations to 0 as in dropout, zoneout randomly replaces some units' activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time.
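For illustration, a minimal NumPy sketch of this per-unit behaviour for a vanilla tanh RNN is given below; the cell type, shapes, and zoneout rate are assumptions for the example, not the configurations used in the experiments.

```python
import numpy as np

def zoneout_step(h_prev, x, W_xh, W_hh, b, z_prob=0.15, training=True, rng=None):
    """One step of a vanilla tanh RNN with zoneout on the hidden state.

    During training each unit independently keeps its previous value with
    probability z_prob; at test time the expectation of the random mask is
    used, giving a deterministic convex combination.
    """
    h_tilde = np.tanh(x @ W_xh + h_prev @ W_hh + b)        # candidate state
    if training:
        rng = rng or np.random.default_rng(0)
        zoned_out = rng.random(h_prev.shape) < z_prob       # True -> keep h_prev
        return np.where(zoned_out, h_prev, h_tilde)
    return z_prob * h_prev + (1.0 - z_prob) * h_tilde       # test-time expectation

# Tiny usage example with random parameters.
rng = np.random.default_rng(1)
W_xh, W_hh, b = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4)
h = np.zeros(4)
for x in rng.normal(size=(5, 3)):                           # length-5 input sequence
    h = zoneout_step(h, x, W_xh, W_hh, b, training=True, rng=rng)
print(h)
```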
Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), as we observe experimentally. | 1606.01305#2 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 3 | We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. Code for replicating all experiments can be found at: http://github.com/teganmaharaj/zoneout
2 RELATED WORK
2.1 RELATIONSHIP TO DROPOUT | 1606.01305#3 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 4 | 1
2 RELATED WORK
2.1 RELATIONSHIP TO DROPOUT
Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure 1. In zoneout, instead of dropping out (being set to 0), units zone out and are set to their previous value (ht = ht−1). Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble (Bachman et al., 2014), injecting noise using a stochastic "identity-mask" rather than a zero-mask. We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally.
Figure 1: Zoneout as a special case of dropout; h̃t is the unit h's hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta, h̃t − ht−1. When this update is dropped out (represented by the dashed line), ht becomes ht−1.
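The caption's "dropout on the hidden state delta" view can be written out directly; in the small sketch below, keep_delta is an illustrative per-unit Bernoulli mask on the delta (1 = keep the update, 0 = zone out).

```python
import numpy as np

def zoneout_via_delta(h_prev, h_tilde, keep_delta):
    """Zoneout expressed as dropout on the state delta h_tilde - h_prev:
    where the delta is dropped, the unit keeps its previous value."""
    return h_prev + keep_delta * (h_tilde - h_prev)

h_prev = np.array([1.0, 2.0, 3.0])
h_tilde = np.array([0.5, 2.5, -1.0])
keep_delta = np.array([1.0, 0.0, 1.0])                  # second unit zones out
print(zoneout_via_delta(h_prev, h_tilde, keep_delta))   # [ 0.5  2.  -1. ]
```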
2.2 DROPOUT IN RNNS | 1606.01305#4 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 5 | 2.2 DROPOUT IN RNNS
Initially successful applications of dropout in RNNs (Pham et al., 2013; Zaremba et al., 2014) only applied dropout to feed-forward connections ("up the stack"), and not recurrent connections ("forward through time"), but several recent works (Semeniuta et al., 2016; Moon et al., 2015; Gal, 2015) propose methods that are not limited in this way. Bayer et al. (2013) successfully apply fast dropout (Wang & Manning, 2013), a deterministic approximation of dropout, to RNNs.
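As an illustration of this placement, the sketch below applies dropout only to the connection "up the stack" between two toy tanh RNN layers, leaving the recurrent hidden-to-hidden path untouched; names and shapes are illustrative and this is not the exact setup of any of the cited works.

```python
import numpy as np

def step_two_layer(x, h1, h2, params, p_drop=0.5, rng=None):
    """One timestep of a two-layer tanh RNN with dropout only on the
    feed-forward (layer-to-layer) connection; the recurrent connections
    are left intact. Sketch only."""
    rng = rng or np.random.default_rng(0)
    (Wx1, Wh1, b1), (Wx2, Wh2, b2) = params
    h1 = np.tanh(x @ Wx1 + h1 @ Wh1 + b1)
    ff = h1 * (rng.random(h1.shape) >= p_drop) / (1.0 - p_drop)  # dropped input to layer 2
    h2 = np.tanh(ff @ Wx2 + h2 @ Wh2 + b2)
    return h1, h2

rng = np.random.default_rng(0)
params = ((rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4)),
          (rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), np.zeros(4)))
h1, h2 = np.zeros(4), np.zeros(4)
h1, h2 = step_two_layer(rng.normal(size=3), h1, h2, params, rng=rng)
print(h2)
```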
# eas | 1606.01305#5 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 6 | # eas
Semeniuta et al. (2016) apply recurrent dropout to the updates to LSTM memory cells (or GRU states), i.e. they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMs, but zoneout does this by preserving units' activations exactly. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients (Bengio et al., 1994), zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs. | 1606.01305#6 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 7 | Also motivated by preventing memory loss, Moon et al. (2015) propose rnnDrop. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. Semeniuta et al. (2016) show, however, that past states' influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies.
Gal (2015) propose another technique which uses the same mask at each timestep. Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping units' activations. The proposed variational RNN technique achieves single-model state-of-the-art test perplexity of 73.4 on word-level language modelling of Penn Treebank.
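The sketch below contrasts the two mask schedules discussed here on a toy tanh RNN: one recurrent dropout mask sampled once and reused across the whole sequence versus a fresh mask at every timestep. It drops activations rather than weight-matrix rows, so it is a simplified illustration of the idea, not the exact variational formulation.

```python
import numpy as np

def run_rnn(xs, W_xh, W_hh, b, p_drop=0.25, per_sequence_mask=True, seed=0):
    """Toy tanh RNN with dropout on the recurrent state, using either a
    single mask reused at every timestep or a fresh mask per timestep."""
    rng = np.random.default_rng(seed)
    h = np.zeros(W_hh.shape[0])
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)   # sampled once
    for x in xs:
        if not per_sequence_mask:                              # resample each step
            mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
        h = np.tanh(x @ W_xh + (h * mask) @ W_hh + b)
    return h

rng = np.random.default_rng(1)
xs = rng.normal(size=(6, 3))
W_xh, W_hh, b = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4)
print(run_rnn(xs, W_xh, W_hh, b, per_sequence_mask=True))
print(run_rnn(xs, W_xh, W_hh, b, per_sequence_mask=False))
```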
2.3 RELATIONSHIP TO STOCHASTIC DEPTH
Zoneout can also be viewed as a per-unit version of stochastic depth (Huang et al., 2016), which randomly drops entire layers of feed-forward residual networks (ResNets (He et al., 2015)). This is
# Under review as a conference paper at ICLR 2017 | 1606.01305#7 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 8 | 2
equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, Singh et al. (2016) propose zoneout for ResNets, calling it SkipForward. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed Swapout technique, which randomly drops either or both of the identity or residual connections. Unlike Singh et al. (2016), we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout.
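The difference in granularity can be seen in a few lines (illustrative values only): a per-layer decision either applies or ignores the whole update computed from the current input, while per-unit zoneout lets the remaining units still fold that input in.

```python
import numpy as np

rng = np.random.default_rng(0)
h_prev = rng.normal(size=4)         # previous hidden state
h_tilde = rng.normal(size=4)        # candidate update computed from x_t

# Per-layer, stochastic-depth-style: one draw for the whole layer, so a
# skipped step ignores the input x_t entirely.
layer_keeps_update = rng.random() < 0.5
h_layerwise = h_tilde if layer_keeps_update else h_prev

# Per-unit zoneout: independent draws, so part of the state still
# incorporates x_t even when other units are preserved.
zoned_out = rng.random(4) < 0.5     # True -> keep previous value
h_unitwise = np.where(zoned_out, h_prev, h_tilde)

print(h_layerwise, h_unitwise)
```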
2.4 SELECTIVELY UPDATING HIDDEN UNITS | 1606.01305#8 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 9 | 2.4 SELECTIVELY UPDATING HIDDEN UNITS
Like zoneout, clockwork RNNs (Koutnik et al., 2014) and hierarchical RNNs (Hihi & Bengio, 1996) update only some units' activations at every timestep, but their updates are periodic, whereas zoneout's are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit. Hierarchical multiscale LSTMs (Chung et al., 2016) learn update probabilities for different units using the straight-through estimator (Bengio et al., 2013; Courbariaux et al., 2015), and combined with recently-proposed Layer Normalization (Ba et al., 2016), achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout. | 1606.01305#9 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 10 | In recent work, Ha et al. (2016) use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization (Ba et al., 2016) in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM (Gal, 2015), and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on surprisal (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 (Rocki et al., 2016).
# 3 ZONEOUT AND PRELIMINARIES
We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs).
3.1 RECURRENT NEURAL NETWORKS | 1606.01305#10 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 11 | 3.1 RECURRENT NEURAL NETWORKS
Recurrent neural networks process data x1, x2, . . . , xT sequentially, constructing a corresponding sequence of representations, h1, h2, . . . , hT . Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, T , which converts the present hidden state and input into a new hidden state: $h_t = \mathcal{T}(h_{t-1}, x_t)$. Zoneout modifies these dynamics by mixing the original transition operator $\tilde{\mathcal{T}}$ with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, $d_t$:
Zoneout: $\mathcal{T}_t = d_t \odot \tilde{\mathcal{T}} + (1 - d_t) \odot \mathbf{1}$
Dropout: $\mathcal{T}_t = d_t \odot \tilde{\mathcal{T}} + (1 - d_t) \odot \mathbf{0}$
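To make the operator mixing concrete, here is a minimal NumPy sketch of a single tanh-RNN step with zoneout and, for contrast, with recurrent dropout. This is illustrative code rather than the authors' implementation; the function names are ours, and the test-time use of the expected update is an assumption analogous to standard dropout scaling.

```python
import numpy as np

def rnn_step(h_prev, x, Wx, Wh, b):
    # ordinary transition operator: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
    return np.tanh(x @ Wx + h_prev @ Wh + b)

def zoneout_rnn_step(h_prev, x, Wx, Wh, b, z=0.1, training=True):
    h_new = rnn_step(h_prev, x, Wx, Wh, b)
    if training:
        # d = 1 means "zone out": the unit keeps its previous value
        d = (np.random.rand(*h_prev.shape) < z).astype(h_prev.dtype)
        return d * h_prev + (1.0 - d) * h_new   # identity, not zero, for masked units
    # assumed test-time behaviour: expected update, as with dropout scaling
    return z * h_prev + (1.0 - z) * h_new

def dropout_rnn_step(h_prev, x, Wx, Wh, b, p=0.1, training=True):
    h_new = rnn_step(h_prev, x, Wx, Wh, b)
    if training:
        d = (np.random.rand(*h_new.shape) < p).astype(h_new.dtype)
        return (1.0 - d) * h_new                # masked units are zeroed instead
    return (1.0 - p) * h_new
```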
3.2 LONG SHORT-TERM MEMORY | 1606.01305#11 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 12 | 3.2 LONG SHORT-TERM MEMORY
In long short-term memory RNNs (LSTMs) (Hochreiter & Schmidhuber, 1997), the hidden state is divided into memory cell ct, intended for internal long-term storage, and hidden state ht, used as a transient representation of state at timestep t. In the most widely used formulation of an LSTM (Gers et al., 2000), ct and ht are computed via a set of four "gates", including the forget gate, ft, which directly connects ct to the memories of the previous timestep ct−1, via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in (it, gt) and out (ot) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has Wxf, Whf, and bf. For brevity, we will write these as Wx, Wh, b.
An LSTM is defined as follows: | 1606.01305#12 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 13 | An LSTM is defined as follows:
$i_t, f_t, o_t = \sigma(W_x x_t + W_h h_{t-1} + b)$
$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(c_t)$
A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates (i, f, o, g). Dropping memory cells, for example, changes the computation of ct as follows:
$c_t = d_t \odot (f_t \odot c_{t-1} + i_t \odot g_t)$
Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. Semeniuta et al. (2016), for instance, zero-mask the input gate:
$c_t = f_t \odot c_{t-1} + d_t \odot i_t \odot g_t$
When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate.
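A small illustrative sketch of the two masking choices just described, for a single LSTM timestep with the gates already computed (our own example, not code from the paper; `where` selects which variant is applied):

```python
import numpy as np

def dropped_cell(c_prev, i, f, g, p, where="cell"):
    """Apply a zero-mask of rate p either to the whole cell update or to the input-gate term."""
    d = (np.random.rand(*c_prev.shape) >= p).astype(c_prev.dtype)  # keep-mask (1 = keep)
    if where == "cell":
        # naive dropout on the memory cell: the entire update can be zeroed
        return d * (f * c_prev + i * g)
    elif where == "input":
        # Semeniuta et al. (2016): only the additive input contribution is masked,
        # so a masked cell simply decays according to the forget gate
        return f * c_prev + d * i * g
    raise ValueError(where)
```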
(a) (b) | 1606.01305#13 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 14 | (a) (b)
Figure 2: (a) Zoneout, vs (b) the recurrent dropout strategy of (Semeniuta et al., 2016) in an LSTM. Dashed lines are zero-masked; in zoneout, the corresponding dotted lines are masked with the corresponding opposite zero-mask. Rectangular nodes are embedding layers.
In zoneout, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps:
$c_t = d_t^c \odot c_{t-1} + (1 - d_t^c) \odot (f_t \odot c_{t-1} + i_t \odot g_t)$
$h_t = d_t^h \odot h_{t-1} + (1 - d_t^h) \odot (o_t \odot \tanh(f_t \odot c_{t-1} + i_t \odot g_t))$
We usually use different zoneout masks for cells and hiddens. We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zoneout the corresponding output gates:
$c_t = f_t \odot c_{t-1} + d_t \odot i_t \odot g_t$
$h_t = ((1 - d_t) \odot o_t + d_t \odot o_{t-1}) \odot \tanh(c_t)$
The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information.
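The following sketch spells out one full zoneout LSTM step with separate Bernoulli masks for cells and hidden states, using the zc = 0.5, zh = 0.05 setting reported later for character-level Penn Treebank. It is our own illustrative code, not the authors' implementation; the weight layout and the expected-update behaviour at test time are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zoneout_lstm_step(x, h_prev, c_prev, Wx, Wh, b, zc=0.5, zh=0.05, training=True):
    # assumed layout: Wx is (input_dim, 4*hidden), Wh is (hidden, 4*hidden), b is (4*hidden,)
    n = h_prev.shape[-1]
    pre = x @ Wx + h_prev @ Wh + b
    i, f, o = (sigmoid(pre[..., k * n:(k + 1) * n]) for k in range(3))
    g = np.tanh(pre[..., 3 * n:4 * n])

    c_new = f * c_prev + i * g          # ordinary LSTM cell update
    h_new = o * np.tanh(c_new)          # ordinary LSTM hidden update

    if training:
        dc = (np.random.rand(n) < zc).astype(c_prev.dtype)  # 1 = zone out this cell unit
        dh = (np.random.rand(n) < zh).astype(h_prev.dtype)  # 1 = zone out this hidden unit
        c = dc * c_prev + (1.0 - dc) * c_new
        h = dh * h_prev + (1.0 - dh) * h_new
    else:
        # assumed test-time behaviour: expected update, analogous to dropout scaling
        c = zc * c_prev + (1.0 - zc) * c_new
        h = zh * h_prev + (1.0 - zh) * h_new
    return h, c
```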
# 4 EXPERIMENTS AND DISCUSSION | 1606.01305#14 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 15 | # 4 EXPERIMENTS AND DISCUSSION
We evaluate zoneout's performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (2) Word-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (3) Character-level language modelling on the Text8 corpus (Mahoney, 2011); (4) Classification of hand-written digits on permuted sequential MNIST (pMNIST) (Le et al., 2015). We also investigate the gradient flow to past hidden states, using pMNIST.
4.1 PENN TREEBANK LANGUAGE MODELLING DATASET
The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence. 1
4.1.1 CHARACTER-LEVEL | 1606.01305#15 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 16 | 4.1.1 CHARACTER-LEVEL
For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in Cooijmans et al. (2016). We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non- overlapping and we use learning rates of 0.001, and 0.0003 for GRUs and tanh-RNNs respectively. Small values (0.1, 0.05) of zoneout signiï¬cantly improve generalization performance for all three models. Intriguingly, we ï¬nd zoneout increases training time for GRU and tanh-RNN, but decreases training time for LSTMs. | 1606.01305#16 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 17 | We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneoutâs behaviour. Figure 3 shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout (p = 0.25). Combining zc = 0.5 and zh = 0.05 leads to our best-performing model, which achieves 1.27 BPC, competitive with recent state-of-the-art set by (Ha et al., 2016). We compare zoneout to recurrent dropout (for p â {0.05, 0.2, 0.25, 0.5, 0.7}), weight noise (Ï = 0.075), norm stabilizer (β = 50) (Krueger & Memisevic, 2015), and explore stochastic depth (Huang et al., 2016) in a recurrent setting (analagous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in pMNIST experiments, where the same mask is used for both cells and | 1606.01305#17 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 18 | out an entire timestep). We also tried a shared-mask variant of zoneout as used in pMNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth or shared-mask zoneout performed as well as separate masks, sampled per unit. Figure 3 shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline. Results are reported in Table 1, and learning curves shown in Figure 4. | 1606.01305#18 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 19 | Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for GRU and 1.67 to 1.52 for tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs. For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react ânormallyâ to new information.
# 4.1.2 WORD-LEVEL
For the word-level task, we replicate settings from Zaremba et al. (2014)âs best single-model perfor- mance. This network has 2 layers of 1500 units, with weights initialized uniformly [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10. | 1606.01305#19 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 20 | With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. Adding zoneout (zc = 0.25 and zh = 0.025) on the recurrent connections to the model optimized for dropout on the non-recurrent connections however, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table 1.
1 These metrics are deterministic functions of negative log-likelihood (NLL). Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2.
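In code, the conversions in this footnote are one-liners; `nll` is assumed to be the mean negative log-likelihood in nats per predicted word or character:

```python
import math

def perplexity(nll_nats_per_word):
    # perplexity is exponentiated NLL
    return math.exp(nll_nats_per_word)

def bits_per_character(nll_nats_per_char):
    # BPC is NLL divided by the natural logarithm of 2
    return nll_nats_per_char / math.log(2)
```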
| 1606.01305#20 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 21 |
[Figure 3: two-panel plot of validation BPC vs. training epoch on character-level Penn Treebank; legend settings are listed in the caption below.]
Figure 3: Validation BPC (bits per character) on Character-level Penn Treebank, for different probabilities of zoneout on cells zc and hidden states zh (left), and comparison of an unregularized LSTM, zoneout zc = 0.5, zh = 0.05, stochastic depth zoneout z = 0.05, recurrent dropout p = 0.25, norm stabilizer β = 50, and weight noise σ = 0.075 (right). | 1606.01305#21 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 22 | [Figure 4: training and validation BPC learning curves on character-level Penn Treebank and Text8 for an unregularized LSTM, recurrent dropout, and zoneout; see caption below.]
Figure 4: Training and validation bits-per-character (BPC) comparing LSTM regularization methods on character-level Penn Treebank (left) and Text8 (right).
4.2 TEXT8
Enwik8 is a corpus made from the first 10^9 bytes of Wikipedia dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus: html tags are removed, numbers are spelled out, symbols are converted to spaces, and all text is lower-cased. Both datasets were created and are hosted by Mahoney (2011). | 1606.01305#22 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 23 | We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam (Kingma & Ba, 2014), clip gradients to a maximum norm of 1 (Pascanu et al., 2012), and use early stopping, again matching the settings of Cooijmans et al. (2016). Results are reported in Table 1, and Figure 4 shows training and validation learning curves for zoneout (zc = 0.5, zh = 0.05) compared to an unregularized LSTM and to recurrent dropout.
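The optimization recipe above (Adam with learning rate 0.001, gradient norms clipped to 1, early stopping) translates into a short training loop. The sketch below uses PyTorch rather than the authors' Theano/Blocks setup, and `model`, `train_batches`, and `validate` are hypothetical placeholders:

```python
import torch

def train(model, train_batches, validate, max_epochs=100, patience=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    best_bpc, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        for x, y in train_batches():
            loss = model(x, y)                  # assumed to return the mean NLL
            opt.zero_grad()
            loss.backward()
            # clip gradients to a maximum norm of 1, as in the experiments
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            opt.step()
        val_bpc = validate(model)
        if val_bpc < best_bpc:
            best_bpc, bad_epochs = val_bpc, 0   # improvement: reset patience
        else:
            bad_epochs += 1
            if bad_epochs >= patience:          # simple early stopping
                break
    return best_bpc
```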
4.3 PERMUTED SEQUENTIAL MNIST
In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In pMNIST, the pixels are presented in a (fixed) random order. | 1606.01305#23 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 24 | We compare recurrent dropout and zoneout to an unregularized LSTM baseline. All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp (Tieleman & Hinton, 2012) with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 (Pascanu et al., 2012).
As shown in Figure 5 and Table 2, zoneout gives a signiï¬cant performance boost compared to the LSTM baseline and outperforms recurrent dropout (Semeniuta et al., 2016), although recurrent batch normalization (Cooijmans et al., 2016) outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state of the art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15. | 1606.01305#24 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 25 | Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits- per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-Dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm). | 1606.01305#25 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 26 | Model                               Char-PTB (Valid / Test)   Word-PTB (Valid / Test)   Text8 (Valid / Test)
Unregularized LSTM                  1.466 / 1.356             120.7 / 114.5             1.396 / 1.408
Weight noise                        1.507 / 1.344             –     / –                 1.356 / 1.367
Norm stabilizer                     1.459 / 1.352             –     / –                 1.382 / 1.398
Stochastic depth                    1.432 / 1.343             –     / –                 1.337 / 1.343
Recurrent dropout                   1.396 / 1.286             91.6  / 87.0              1.386 / 1.401
Zoneout                             1.362 / 1.252             81.4  / 77.4              1.331 / 1.336
NR-dropout (Zaremba et al., 2014)   –     / –                 82.2  / 78.4              –     / –
V-dropout (Gal, 2015)               –     / –                 –     / 73.4              –     / –
RBN (Cooijmans et al., 2016)        –     / 1.32              –     / –                 –     / 1.36
H-LSTM + LN (Ha et al., 2016)       1.281 / 1.250             –     / –                 –     / –
3-HM-LSTM + LN (Chung et al., 2016) –     / 1.24              –     / –                 –     / 1.29
Table 2: Error rates on the pMNIST digit classification task. Zoneout outperforms recurrent dropout, and sets state of the art when combined with recurrent batch normalization. | 1606.01305#26 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 27 | Model                                           Valid   Test
Unregularized LSTM                              0.092   0.102
Recurrent dropout p = 0.5                       0.083   0.075
Zoneout zc = zh = 0.15                          0.063   0.069
Recurrent batchnorm                             –       0.046
Recurrent batchnorm & Zoneout zc = zh = 0.15    0.045   0.041
[Figure 5: training and validation error rate vs. epoch on pMNIST for an unregularized LSTM, recurrent dropout, and zoneout; see caption below.]
Figure 5: Training and validation error rates for an unregularized LSTM, recurrent dropout, and zoneout on the task of permuted sequential MNIST digit classiï¬cation.
4.4 GRADIENT FLOW
We investigate the hypothesis that identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture of Hochreiter & Schmidhuber (1997)), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture. | 1606.01305#27 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 28 | We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on pMNIST. We compute the average gradient norms $\| \partial L / \partial c_t \|$ of the loss L with respect to cell activations $c_t$ at each timestep t, and for each method, normalize the average gradient norms by the sum of average gradient norms for all timesteps.
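One way to measure such per-timestep gradient norms is sketched below (our own code, not the authors' script). It assumes the model exposes the list of per-timestep cell-state tensors `cells` that were used to compute `loss`; it takes the gradient of the loss with respect to each of them, averages the norms over the batch, and normalizes across timesteps.

```python
import torch

def normalized_cell_grad_norms(loss, cells):
    # cells: list of c_t tensors of shape (batch, hidden), one per timestep,
    # all part of the computation graph that produced `loss`
    grads = torch.autograd.grad(loss, cells, retain_graph=True)
    norms = torch.stack([g.norm(dim=-1).mean() for g in grads])  # batch-averaged norm per timestep
    return norms / norms.sum()                                   # normalize over all timesteps
```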
Figure 6 shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states ht.
[Figure 6: normalized gradient norm at each timestep for dropout, zoneout, and an unregularized LSTM.]
Figure 6: Normalized sum of gradient norms $\sum_t \| \partial L / \partial c_t \|$ of the loss L with respect to cell activations $c_t$ at each timestep t, for zoneout ($z_c = 0.5$), dropout ($z_c = 0.5$), and an unregularized LSTM, on one epoch of pMNIST.
# 5 CONCLUSION | 1606.01305#28 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 29 | # 5 CONCLUSION
We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with state of the art on the Penn Treebank and Text8 datasets, and state of the art results on pMNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05 - 0.2) on states reliably improve performance of existing models.
We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on pMNIST and word-level Penn Treebank suggest that Zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers. We conjecture that the benefits of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the flow of information forward and backward through the network.
ACKNOWLEDGMENTS | 1606.01305#29 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 30 | ACKNOWLEDGMENTS
We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially Çağlar Gülçehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano (Theano Development Team, 2016), Fuel, and Blocks (van Merriënboer et al., 2015). We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air
Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
# REFERENCES
Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450. | 1606.01305#30 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 31 | Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365â3373, 2014.
J. Bayer, C. Osendorfer, D. Korhammer, N. Chen, S. Urban, and P. van der Smagt. On Fast Dropout and its Applicability to Recurrent Networks. ArXiv e-prints, November 2013.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013. URL http://arxiv.org/abs/1308.3432.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704. | 1606.01305#31 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 32 | Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gulcehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3123–3131, 2015.
Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints, December 2015.
Felix A. Gers, Jürgen Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000.
David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. CoRR, abs/1609.09106, 2016. URL http://arxiv.org/abs/1609.09106.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. | 1606.01305#32 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 33 | Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems. 1996.
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Institut für Informatik, Technische Universität, München, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. | 1606.01305#33 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 34 | Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.
David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941, 2015.
# Matt Mahoney. About the test data, 2011. URL http://mattmahoney.net/dc/textdata.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â330, 1993. | 1606.01305#34 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 35 | Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. Rnndrop: A novel dropout for rnns in asr. Automatic Speech Recognition and Understanding (ASRU), 2015.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012. URL http://arxiv.org/abs/1211.5063.
V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. Dropout improves Recurrent Neural Networks for Handwriting Recognition. ArXiv e-prints, November 2013.
Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. Surprisal-driven zoneout. CoRR, abs/1610.07675, 2016. URL http://arxiv.org/abs/1610.07675.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. ArXiv e-prints, May 2016. | 1606.01305#35 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 36 | S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. ArXiv e-prints, May 2016.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015.
Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118â126, 2013. | 1606.01305#36 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 37 | Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118–126, 2013.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
6 APPENDIX
6.1 STATIC IDENTITY CONNECTIONS EXPERIMENT
This experiment was suggested by AnonReviewer2 during the ICLR review process with the goal of disentangling the effects zoneout has (1) through noise injection in the training process and (2) through identity connections. Based on these results, we observe that noise injection is essential for obtaining the regularization benefits of zoneout.
In this experiment, one zoneout mask is sampled at the beginning of training, and used for all examples. This means the identity connections introduced are static across training examples (but still different for each timestep). Using static identity connections resulted in slightly lower training (but not validation) error than zoneout, but worse performance than an unregularized LSTM on both train and validation sets, as shown in Figure 7. | 1606.01305#37 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1606.01305 | 38 | [Figure 7 plot: training and validation bits-per-character (BPC) versus training epoch for a vanilla LSTM, zoneout, and static identity connections.]
Figure 7: Training and validation curves for an LSTM with static identity connections compared to zoneout (both Zc = 0.5 and Zh = 0.05) and compared to a vanilla LSTM, showing that static identity connections fail to capture the benefits of zoneout.
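To make the comparison concrete, here is a minimal numpy sketch of the hidden-state update under ordinary zoneout versus the static identity-connection variant examined above; the helper names (zoneout_step, static_masks, z_prob), the shapes, and the random candidate states are our illustrative assumptions, not the experimental code.

```python
import numpy as np

rng = np.random.RandomState(0)

def zoneout_step(h_prev, h_candidate, z_prob, mask=None):
    """One hidden-state update: keep the previous value where mask == 1.

    With mask=None a fresh Bernoulli(z_prob) mask is sampled each call
    (ordinary zoneout); passing a fixed mask gives the static
    identity-connection variant examined in this appendix.
    """
    if mask is None:
        mask = (rng.uniform(size=h_prev.shape) < z_prob).astype(h_prev.dtype)
    return mask * h_prev + (1.0 - mask) * h_candidate

T, batch, hidden, z_prob = 5, 2, 4, 0.5

# Static variant: one mask per timestep, sampled once before training and
# then reused for every example and every update.
static_masks = [(rng.uniform(size=(1, hidden)) < z_prob).astype(np.float32)
                for _ in range(T)]

h_zo = np.zeros((batch, hidden), dtype=np.float32)
h_st = np.zeros((batch, hidden), dtype=np.float32)
for t in range(T):
    # The candidate state would come from the LSTM transition function;
    # random values stand in for it here.
    h_tilde = rng.randn(batch, hidden).astype(np.float32)
    h_zo = zoneout_step(h_zo, h_tilde, z_prob)                    # fresh noise
    h_st = zoneout_step(h_st, h_tilde, z_prob, static_masks[t])   # fixed mask
```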
11 | 1606.01305#38 | Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST. | http://arxiv.org/pdf/1606.01305 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal | cs.NE, cs.CL, cs.LG | David Krueger and Tegan Maharaj contributed equally to this work | null | cs.NE | 20160603 | 20170922 | [
{
"id": "1603.05118"
},
{
"id": "1504.00941"
},
{
"id": "1603.09025"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1511.08400"
}
] |
1605.09782 | 0 | arXiv:1605.09782v7 [cs.LG] 3 Apr 2017
Published as a conference paper at ICLR 2017
# ADVERSARIAL FEATURE LEARNING
# Jeff Donahue [email protected] Computer Science Division University of California, Berkeley
# Philipp Krähenbühl [email protected] Department of Computer Science University of Texas, Austin
# Trevor Darrell [email protected] Computer Science Division University of California, Berkeley
# ABSTRACT
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping â projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning. | 1605.09782#0 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 1 | # INTRODUCTION
Deep convolutional networks (convnets) have become a staple of the modern computer vision pipeline. After training these models on a massive database of image-label pairs like ImageNet (Russakovsky et al., 2015), the network easily adapts to a variety of similar visual tasks, achieving impressive results on image classiï¬cation (Donahue et al., 2014; Zeiler & Fergus, 2014; Razavian et al., 2014) or localization (Girshick et al., 2014; Long et al., 2015) tasks. In other perceptual domains such as natural language processing or speech recognition, deep networks have proven highly effective as well (Bahdanau et al., 2015; Sutskever et al., 2014; Vinyals et al., 2015; Graves et al., 2013). However, all of these recent results rely on a supervisory signal from large-scale databases of hand-labeled data, ignoring much of the useful information present in the structure of the data itself. | 1605.09782#1 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 2 | Meanwhile, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. The GAN framework learns a generator mapping samples from an arbitrary latent distribution to data, as well as an adversarial discriminator which tries to distinguish between real and generated samples as accurately as possible. The generator's goal is to "fool" the discriminator by producing samples which are as close to real data as possible. When trained on databases of natural images, GANs produce impressive results (Radford et al., 2016; Denton et al., 2015).
Interpolations in the latent space of the generator produce smooth and plausible semantic variations, and certain directions in this space correspond to particular semantic attributes along which the data distribution varies. For example, Radford et al. (2016) showed that a GAN trained on a database of human faces learns to associate particular latent directions with gender and the presence of eyeglasses.
A natural question arises from this ostensible "semantic juice" flowing through the weights of generators learned using the GAN framework: can GANs be used for unsupervised learning of rich feature representations for arbitrary data distributions? An obvious issue with doing so is that the
Published as a conference paper at ICLR 2017 | 1605.09782#2 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 3 | [Figure 1 diagram: the generator G maps latent samples z to data G(z), the encoder E maps data x to features E(x), and the discriminator D receives the joint tuples (G(z), z) and (x, E(x)).]
Figure 1: The structure of Bidirectional Generative Adversarial Networks (BiGAN).
generator maps latent samples to generated data, but the framework does not include an inverse mapping from data to latent representation.
Hence, we propose a novel unsupervised feature learning framework, Bidirectional Generative Adversarial Networks (BiGAN). The overall model is depicted in Figure 1. In short, in addition to the generator G from the standard GAN framework (Goodfellow et al., 2014), BiGAN includes an encoder E which maps data x to latent representations z. The BiGAN discriminator D discriminates not only in data space (x versus G(z)), but jointly in data and latent space (tuples (x, E(x)) versus (G(z), z)), where the latent component is either an encoder output E(x) or a generator input z. | 1605.09782#3 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 4 | It may not be obvious from this description that the BiGAN encoder E should learn to invert the generator G. The two modules cannot directly "communicate" with one another: the encoder never "sees" generator outputs (E(G(z)) is not computed), and vice versa. Yet, in Section 3, we will both argue intuitively and formally prove that the encoder and generator must learn to invert one another in order to fool the BiGAN discriminator.
Because the BiGAN encoder learns to predict features z given data x, and prior work on GANs has demonstrated that these features capture semantic attributes of the data, we hypothesize that a trained BiGAN encoder may serve as a useful feature representation for related semantic tasks, in the same way that fully supervised visual models trained to predict semantic âlabelsâ given images serve as powerful feature representations for related visual tasks. In this context, a latent representation z may be thought of as a âlabelâ for x, but one which came for âfree,â without the need for supervision. | 1605.09782#4 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 5 | An alternative approach to learning the inverse mapping from data to latent representation is to directly model p(z|G(z)), predicting generator input z given generated data G(z). We'll refer to this alternative as a latent regressor, later arguing (Section 4.1) that the BiGAN encoder may be preferable in a feature learning context, as well as comparing the approaches empirically.
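For contrast with the BiGAN encoder, a latent regressor of this kind can be sketched in a few lines of PyTorch; the toy modules, dimensions, and optimizer settings below are illustrative assumptions, not a configuration from the paper.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 2, 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
for p in G.parameters():          # treat the generator as fixed / pretrained
    p.requires_grad_(False)

latent_regressor = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
opt = torch.optim.Adam(latent_regressor.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(128, latent_dim)                     # z ~ p_Z
    x_gen = G(z)                                         # generated data G(z)
    loss = ((latent_regressor(x_gen) - z) ** 2).mean()   # regress z from G(z)
    opt.zero_grad(); loss.backward(); opt.step()

# Note: the regressor only ever sees generated samples G(z), never real data x,
# which is the limitation discussed in the surrounding text.
```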
BiGANs are a robust and highly generic approach to unsupervised feature learning, making no assumptions about the structure or type of data to which they are applied, as our theoretical results will demonstrate. Our empirical studies will show that despite their generality, BiGANs are competitive with contemporary approaches to self-supervised and weakly supervised feature learning designed specifically for a notoriously complex data distribution -- natural images.
Dumoulin et al. (2016) independently proposed an identical model in their concurrent work, exploring the case of a stochastic encoder E and the ability of such models to learn in a semi-supervised setting.
# 2 PRELIMINARIES | 1605.09782#5 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 6 | # 2 PRELIMINARIES
Let pX(x) be the distribution of our data for x ∈ Ω_X (e.g. natural images). The goal of generative modeling is to capture this data distribution using a probabilistic model. Unfortunately, exact modeling of this probability density function is computationally intractable (Hinton et al., 2006; Salakhutdinov & Hinton, 2009) for all but the most trivial models. Generative Adversarial Networks (GANs) (Good-
fellow et al., 2014) instead model the data distribution as a transformation of a ï¬xed latent distribution pZ(z) for z â â¦Z. This transformation, called a generator, is expressed as a deterministic feed forward network G : â¦Z â â¦X with pG(x|z) = δ (x â G(z)) and pG(x) = Ezâ¼pZ [pG(x|z)]. The goal is to train a generator such that pG(x) â pX(x). | 1605.09782#6 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 7 | The GAN framework trains a generator such that no discriminative model D : Ω_X → [0, 1] can distinguish samples of the data distribution from samples of the generative distribution. Both generator and discriminator are learned using the adversarial (minimax) objective min_G max_D V(D, G), where

V(D, G) := E_{x∼pX}[log D(x)] + E_{x∼pG}[log(1 − D(x))],    (1)

and the second expectation can equivalently be written as E_{z∼pZ}[log(1 − D(G(z)))].
:= Goodfellow et al. (2014) showed that for an ideal discriminator the objective C(G) maxD V (D, G) is equivalent to the Jensen-Shannon divergence between the two distributions pG and pX. | 1605.09782#7 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
The adversarial objective (1) does not directly lend itself to an efficient optimization, as each step in the generator G requires a full discriminator D to be learned. Furthermore, a perfect discriminator no longer provides any gradient information to the generator, as the gradient of any global or local maximum of V(D, G) is 0. To provide a strong gradient signal nonetheless, Goodfellow et al. (2014) slightly alter the objective between generator and discriminator updates, while keeping the same fixed point characteristics. They also propose to optimize (1) using an alternating optimization switching between updates to the generator and discriminator. While this optimization is not guaranteed to converge, empirically it works well if the discriminator and generator are well balanced.
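A minimal PyTorch sketch of this alternating scheme, including the altered (label-swapped, "non-saturating") generator objective, is shown below; the toy architectures, random stand-in data, and hyperparameters are arbitrary choices for illustration rather than settings from the paper.

```python
import torch
import torch.nn as nn

data_dim, latent_dim = 8, 2
D = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

for step in range(100):
    x = torch.randn(64, data_dim)        # stand-in for samples from p_X
    z = torch.randn(64, latent_dim)      # z ~ p_Z
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    # Discriminator update: ascend log D(x) + log(1 - D(G(z))).
    fake = G(z).detach()
    loss_d = bce(D(x), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update with the altered objective:
    # maximize log D(G(z)) instead of minimizing log(1 - D(G(z))).
    loss_g = bce(D(G(z)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```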
Despite the empirical strength of GANs as generative models of arbitrary data distributions, it is not clear how they can be applied as an unsupervised feature representation. One possibility for learning such representations is to learn an inverse mapping regressing from generated data G(z) back to the latent input z. However, unless the generator perfectly models the data distribution pX, a nearly impossible objective for a complex data distribution such as that of high-resolution natural images, this idea may prove insufficient.
# 3 BIDIRECTIONAL GENERATIVE ADVERSARIAL NETWORKS | 1605.09782#8 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 9 | # 3 BIDIRECTIONAL GENERATIVE ADVERSARIAL NETWORKS
In Bidirectional Generative Adversarial Networks (BiGANs) we not only train a generator, but additionally train an encoder E : Ω_X → Ω_Z. The encoder induces a distribution pE(z|x) = δ(z − E(x)) mapping data points x into the latent feature space of the generative model. The discriminator is also modified to take input from the latent space, predicting PD(Y|x, z), where Y = 1 if x is real (sampled from the real data distribution pX), and Y = 0 if x is generated (the output of G(z), z ∼ pZ).
The BiGAN training objective is defined as a minimax objective
min_{G,E} max_D V(D, E, G)    (2)
where
V(D, E, G) := E_{x∼pX}[ E_{z∼pE(·|x)}[log D(x, z)] ] + E_{z∼pZ}[ E_{x∼pG(·|z)}[log(1 − D(x, z))] ]; for deterministic E and G the two inner terms reduce to log D(x, E(x)) and log(1 − D(G(z), z)).    (3) | 1605.09782#9 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 10 | We optimize this minimax objective using the same alternating gradient based optimization as Goodfellow et al. (2014). See Section 3.4 for details.
BiGANs share many of the theoretical properties of GANs (Goodfellow et al., 2014), while additionally guaranteeing that at the global optimum, G and E are each other's inverse. BiGANs are also closely related to autoencoders with an ℓ0 loss function. In the following sections we highlight some of the appealing theoretical properties of BiGANs.
Definitions Let pGZ(x, z) := pG(x|z)pZ(z) and pEX(x, z) := pE(z|x)pX(x) be the joint distributions modeled by the generator and encoder respectively. Ω := Ω_X × Ω_Z is the joint latent and
data space. For a region R â â¦, | 1605.09782#10 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 12 | PX(R_X) := ∫_{Ω_X} pX(x) 1_[x∈R_X] dx and PZ(R_Z) := ∫_{Ω_Z} pZ(z) 1_[z∈R_Z] dz
as measures over regions R_X ⊆ Ω_X and R_Z ⊆ Ω_Z. We refer to the set of features and data samples in the support of PX and PZ as Ω̂_X := supp(PX) and Ω̂_Z := supp(PZ) respectively. D_KL(P || Q) and D_JS(P || Q) respectively denote the Kullback-Leibler (KL) and Jensen-Shannon divergences between probability measures P and Q. By definition, D_KL(P || Q) := E_{x∼P}[log f_PQ(x)] and D_JS(P || Q) := (1/2) D_KL(P || (P+Q)/2) + (1/2) D_KL(Q || (P+Q)/2),
where fpg := ra is the Radon-Nikodym (RN) derivative of measure P with respect to measure Q, with the defining property that P(R) = [;, fpq dQ. The RN derivative fpg : 2+ Ryo is defined for any measures P and Q on space 2 such that P is absolutely continuous with respect to Q: i.e., for any R CQ, P(R) > 0 = > Q(R) > 0. | 1605.09782#12 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 13 | 3.1 OPTIMAL DISCRIMINATOR, GENERATOR, & ENCODER
We start by characterizing the optimal discriminator for any generator and encoder, following Goodfellow et al. (2014). This optimal discriminator then allows us to reformulate objective (3), and show that it reduces to the Jensen-Shannon divergence between the joint distributions PEX and PGZ.
Proposition 1 For any E and G, the optimal discriminator D*_EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative f_EG := dPEX / d(PEX + PGZ) : Ω → [0, 1] of measure PEX with respect to measure PEX + PGZ.
Proof. Given in Appendix A.1.
This optimal discriminator now allows us to characterize the optimal generator and encoder.
Proposition 2 The encoder and generator's objective for an optimal discriminator C(E, G) := max_D V(D, E, G) = V(D*_EG, E, G) can be rewritten in terms of the Jensen-Shannon divergence between measures PEX and PGZ as C(E, G) = 2 D_JS(PEX || PGZ) − log 4.
Proof. Given in Appendix A.2. | 1605.09782#13 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 14 | Proof. Given in Appendix A.2.
Theorem 1 The global minimum of C(E, G) is achieved if and only if PEX = PGZ. At that point, C(E, G) = − log 4 and D*_EG = 1/2.
Proof. From Proposition 2, we have that C(E, G) = 2 D_JS(PEX || PGZ) − log 4. The Jensen-Shannon divergence D_JS(P || Q) ≥ 0 for any P and Q, and D_JS(P || Q) = 0 if and only if P = Q. Therefore, the global minimum of C(E, G) occurs if and only if PEX = PGZ, and at this point the value is C(E, G) = − log 4. Finally, PEX = PGZ implies that the optimal discriminator is chance: D*_EG = 1/2.
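As a quick sanity check of the divergence properties used in this proof, the following numpy snippet evaluates D_KL and D_JS for discrete distributions (where the RN derivative reduces to a ratio of probability masses); it is an illustrative aside, not part of the paper.

```python
import numpy as np

def kl(p, q):
    """D_KL(P || Q) for discrete distributions; terms with p(x) = 0 contribute 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """D_JS(P || Q) = 0.5 KL(P || M) + 0.5 KL(Q || M) with M = (P + Q)/2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.1, 0.4, 0.5])
print(js(p, p))   # 0.0: the divergence vanishes iff the two distributions match
print(js(p, q))   # strictly positive, and bounded above by log 2
```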
The optimal discriminator, encoder, and generator of BiGAN are similar to the optimal discriminator and generator of the GAN framework (Goodfellow et al., 2014). However, an important difference is that BiGAN optimizes a Jensen-Shannon divergence between a joint distribution over both data X and latent features Z. This joint divergence allows us to further characterize properties of G and E, as shown below.
3.2 OPTIMAL GENERATOR & ENCODER ARE INVERSES | 1605.09782#14 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 15 | 3.2 OPTIMAL GENERATOR & ENCODER ARE INVERSES
We first present an intuitive argument that, in order to "fool" a perfect discriminator, a deterministic BiGAN encoder and generator must invert each other. (Later we will formally state and prove this
property.) Consider a BiGAN discriminator input pair (x, z). Due to the sampling procedure, (x, z) must satisfy at least one of the following two properties: (a) x ∈ Ω̂_X ∧ E(x) = z
(b) z â Ëâ¦Z â§ G(z) = x | 1605.09782#15 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 16 | (b) z ∈ Ω̂_Z ∧ G(z) = x
If only one of these properties is satisfied, a perfect discriminator can infer the source of (x, z) with certainty: if only (a) is satisfied, (x, z) must be an encoder pair (x, E(x)) and D*_EG(x, z) = 1; if only (b) is satisfied, (x, z) must be a generator pair (G(z), z) and D*_EG(x, z) = 0. Therefore, in order to fool a perfect discriminator at (x, z) (so that 0 < D*_EG(x, z) < 1), E and G must satisfy both (a) and (b). In this case, we can substitute the equality E(x) = z required by (a) into the equality G(z) = x required by (b), and vice versa, giving the inversion properties x = G(E(x)) and z = E(G(z)).
Formally, we show in Theorem 2 that the optimal generator and encoder invert one another almost everywhere on the support Ëâ¦X and Ëâ¦Z of PX and PZ. | 1605.09782#16 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 17 | Theorem 2 If E and G are an optimal encoder and generator, then E = G^{-1} almost everywhere; that is, G(E(x)) = x for PX-almost every x ∈ Ω_X, and E(G(z)) = z for PZ-almost every z ∈ Ω_Z.
Proof. Given in Appendix A.4.
While Theorem 2 characterizes the encoder and decoder at their optimum, due to the non-convex nature of the optimization, this optimum might never be reached. Experimentally, Section 4 shows that on standard datasets, the two are approximate inverses; however, they are rarely exact inverses. It is thus also interesting to show what objective BiGAN optimizes in terms of E and G. Next we show that BiGANs are closely related to autoencoders with an ℓ0 loss function.
3.3 RELATIONSHIP TO AUTOENCODERS
As argued in Section 1, a model trained to predict features z given data x should learn useful semantic representations. Here we show that the BiGAN objective forces the encoder E to do exactly this: in order to fool the discriminator at a particular z, the encoder must invert the generator at that z, such that E(G(z)) = z. | 1605.09782#17 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 18 | Theorem 3 The encoder and generator objective given an optimal discriminator C(E, G) := max_D V(D, E, G) can be rewritten as an ℓ0 autoencoder loss function

C(E, G) = E_{x∼pX}[ 1_[E(x)∈Ω̂_Z ∧ G(E(x))=x] log f_EG(x, E(x)) ] + E_{z∼pZ}[ 1_[G(z)∈Ω̂_X ∧ E(G(z))=z] log(1 − f_EG(G(z), z)) ]

with log f_EG ∈ (−∞, 0) and log(1 − f_EG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere.
Proof. Given in Appendix A.5.
Here the indicator function 1{q((x))=x| in the first term is equivalent to an autoencoder with Co loss, while the indicator 1(j(c@(z))=z) in the second term shows that the BiGAN encoder must invert the generator, the desired property for feature learning. The objective further encourages the functions E(x) and G(z) to produce valid outputs in the support of Pz, and Px respectively. Unlike regular autoencoders, the fy loss function does not make any assumptions about the structure or distribution of the data itself; in fact, all the structural properties of BiGAN are learned as part of the discriminator. | 1605.09782#18 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 19 | 3.4 LEARNING
In practice, as in the GAN framework (Goodfellow et al., 2014), each BiGAN module D, G, and E is a parametric function (with parameters θ_D, θ_G, and θ_E, respectively). As a whole, BiGAN can be optimized using alternating stochastic gradient steps. In one iteration, the discriminator parameters θ_D are updated by taking one or more steps in the positive gradient direction ∇_{θ_D} V(D, E, G), then the encoder parameters θ_E and generator parameters θ_G are together updated by taking a step in the negative gradient direction −∇_{θ_E,θ_G} V(D, E, G). In both cases, the expectation terms of
V (D, E, G) are estimated using mini-batches of n samples {x(i) â¼ pX}n drawn independently for each update step. i=1 and {z(i) â¼ pZ}n i=1 | 1605.09782#19 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 20 | Goodfellow et al. (2014) found that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective provides stronger gradient signal to G and E. For efficiency, we also update all modules D, G, and E simultaneously at each iteration, rather than alternating between D updates and G, E updates. See Appendix B for details.
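The update scheme described in this section can be condensed into the following PyTorch sketch of one BiGAN training iteration, with the discriminator step and the generator/encoder step (using the swapped-label "inverse" objective) written one after the other within the same iteration; the modules, dimensions, and learning rates are toy assumptions, not the Appendix B configuration.

```python
import torch
import torch.nn as nn

data_dim, latent_dim, n = 8, 2, 64
E = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, latent_dim))
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim + latent_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)

def d_logits(x, z):
    # The discriminator scores joint (data, latent) tuples.
    return D(torch.cat([x, z], dim=1))

for step in range(100):
    x = torch.randn(n, data_dim)        # mini-batch x ~ p_X (toy stand-in)
    z = torch.randn(n, latent_dim)      # mini-batch z ~ p_Z
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: (x, E(x)) labeled real, (G(z), z) labeled generated.
    loss_d = bce(d_logits(x, E(x).detach()), ones) + \
             bce(d_logits(G(z).detach(), z), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator/encoder step with the "inverse" (label-swapped) objective.
    loss_ge = bce(d_logits(x, E(x)), zeros) + bce(d_logits(G(z), z), ones)
    opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
```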
3.5 GENERALIZED BIGAN | 1605.09782#20 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
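A sketch of the label-swapped ("inverse") objective mentioned in the chunk above, under the same illustrative assumptions as the previous snippet: rather than descending V, the generator and encoder are trained to make D label generated pairs as real and encoder pairs as generated, which typically gives stronger gradients early in training. The simultaneous-update schedule the authors actually use is described in their Appendix B, not reproduced here.

```python
import torch

def ge_inverse_loss(D, G, E, x_real, z):
    # Swapped labels: push D(G(z), z) toward 1 and D(x, E(x)) toward 0.
    d_gen = D(G(z), z)
    d_enc = D(x_real, E(x_real))
    return -(torch.log(d_gen + 1e-8).mean() + torch.log(1 - d_enc + 1e-8).mean())
```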
1605.09782 | 21 | 3.5 GENERALIZED BIGAN
It is often useful to parametrize the output of the generator G and encoder E in a different, usually smaller, space Ω′_X and Ω′_Z rather than the original Ω_X and Ω_Z. For example, for visual feature learning, the images input to the encoder should be of similar resolution to images used in the evaluation. On the other hand, generating high resolution images remains difficult for current generative models. In this situation, the encoder may take higher resolution input while the generator output and discriminator input remain low resolution. We generalize the BiGAN objective V (D, G, E) (3) with functions g_X : Ω_X → Ω′_X and g_Z : Ω_Z → Ω′_Z, and encoder E : Ω_X → Ω′_Z, generator G : Ω_Z → Ω′_X, and discriminator D : Ω′_X × Ω′_Z → [0, 1]:

E_{x∼p_X}[ E_{z′∼p_E(·|x)}[ log D(g_X(x), z′) ] ] + E_{z∼p_Z}[ E_{x′∼p_G(·|z)}[ log(1 − D(x′, g_Z(z))) ] ],

where, for deterministic E and G, the two terms reduce to log D(g_X(x), E(x)) and log(1 − D(G(z), g_Z(z))). | 1605.09782#21 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 22 | and g_Z : Ω_Z → Ω′_Z, and encoder E : Ω_X → Ω′_Z, generator G : Ω_Z → Ω′_X, and discriminator D : Ω′_X × Ω′_Z → [0, 1], as in the generalized objective above.
An identity g_X(x) = x and g_Z(z) = z (and Ω′_X = Ω_X, Ω′_Z = Ω_Z) yields the original objective. For visual feature learning with higher resolution encoder inputs, g_X is an image resizing function that downsamples a high resolution image x ∈ Ω_X to a lower resolution image x′ ∈ Ω′_X, as output by the generator. (g_Z is identity.) | 1605.09782#22 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
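The two chunks above describe the generalized BiGAN, in which the encoder sees a higher-resolution input than the generator produces and the discriminator compares pairs in the smaller space Ω′_X × Ω′_Z. A small sketch, assuming PyTorch, with g_X implemented as bilinear downsampling and g_Z as the identity; the target resolution is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

def generalized_pairs(D, G, E, x_hires, z, low_res=(64, 64)):
    # g_X: downsample the high-resolution input to the generator's output size.
    gx_x = F.interpolate(x_hires, size=low_res, mode="bilinear", align_corners=False)
    real_pair_score = D(gx_x, E(x_hires))   # (g_X(x), E(x)) scored as "real"
    fake_pair_score = D(G(z), z)            # (G(z), g_Z(z)) = (G(z), z) scored as "generated"
    return real_pair_score, fake_pair_score
```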
1605.09782 | 23 | In this case, the encoder and generator respectively induce probability measures P_EX′ and P_GZ′ over regions R ⊆ Ω′ of the joint space Ω′ := Ω′_X × Ω′_Z, with P_EX′(R) := ∫_{Ω_X} ∫_{Ω′_Z} ∫_{Ω′_X} p_X(x) p_E(z′|x) 1_{[(x′,z′)∈R]} 1_{[x′=g_X(x)]} dx′ dz′ dx = ∫_{Ω_X} p_X(x) 1_{[(g_X(x),E(x))∈R]} dx, and P_GZ′ defined analogously. For optimal E and G, we can show P_EX′ = P_GZ′: a generalization of Theorem 1. When E and G are deterministic and optimal, Theorem 2 (that E and G invert one another) can also be generalized: there exists z ∈ Ω_Z with E(x) = g_Z(z) ∧ G(z) = g_X(x) for P_X-almost every x ∈ Ω_X, and there exists x ∈ Ω_X with E(x) = g_Z(z) ∧ G(z) = g_X(x) for P_Z-almost every z ∈ Ω_Z.
# 4 EVALUATION | 1605.09782#23 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 24 | # 4 EVALUATION
We evaluate the feature learning capabilities of BiGANs by first training them unsupervised as described in Section 3.4, then transferring the encoder's learned feature representations for use in auxiliary supervised learning tasks. To demonstrate that BiGANs are able to learn meaningful feature representations both on arbitrary data vectors, where the model is agnostic to any underlying structure, as well as very high-dimensional and complex distributions, we evaluate on both permutation-invariant MNIST (LeCun et al., 1998) and on the high-resolution natural images of ImageNet (Russakovsky et al., 2015).
In all experiments, each module D, G, and E is a parametric deep (multi-layer) network. The BiGAN discriminator D(x, z) takes data x as its initial input, and at each linear layer thereafter, the latent representation z is transformed using a learned linear transformation to the hidden layer dimension and added to the non-linearity input.
4.1 BASELINE METHODS
Besides the BiGAN framework presented above, we considered alternative approaches to learning feature representations using different GAN variants.
Discriminator The discriminator D in a standard GAN takes data samples x ∼ p_X as input, making its learned intermediate representations natural candidates as feature representations for related tasks.
Published as a conference paper at ICLR 2017 | 1605.09782#24 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
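The chunk above describes how the joint discriminator conditions on z: the data x drives the main stack, and at each linear layer a learned projection of z is added before the non-linearity. A minimal sketch for the permutation-invariant case, assuming PyTorch; the layer widths, LeakyReLU slope, and sigmoid output are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    """D(x, z): x flows through the linear stack; z is projected to each
    hidden width and added to the pre-activation input of the non-linearity."""
    def __init__(self, x_dim=784, z_dim=50, hidden=(1024, 1024)):
        super().__init__()
        dims = (x_dim,) + tuple(hidden)
        self.x_layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(hidden))])
        self.z_proj = nn.ModuleList([nn.Linear(z_dim, h) for h in hidden])
        self.out = nn.Linear(hidden[-1], 1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, z):
        h = x
        for lin, proj in zip(self.x_layers, self.z_proj):
            h = self.act(lin(h) + proj(z))   # add transformed z before the non-linearity
        return torch.sigmoid(self.out(h))    # probability that (x, z) is an encoder pair
```

For example, `JointDiscriminator()(torch.rand(8, 784), torch.randn(8, 50))` returns an (8, 1) tensor of scores.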
1605.09782 | 25 | 6
BiGAN: 97.39   D: 97.30   LR: 97.44   JLR: 97.13   AE (ℓ1): 97.58   AE (ℓ2): 97.63
Table 1: One Nearest Neighbors (1NN) classification accuracy (%) on the permutation-invariant MNIST (LeCun et al., 1998) test set in the feature space learned by BiGAN, Latent Regressor (LR), Joint Latent Regressor (JLR), and an autoencoder (AE) using an ℓ1 or ℓ2 distance.
[Figure 2 panels: generator samples G(z), real data x, reconstructions G(E(x))] | 1605.09782#25 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
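Table 1 above reports one-nearest-neighbour (1NN) accuracy computed in each learned feature space. A sketch of that evaluation protocol, assuming scikit-learn and a frozen feature extractor `encode` that maps flattened MNIST digits to feature vectors; all names here are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def one_nn_accuracy(encode, x_train, y_train, x_test, y_test):
    # Embed both splits with the frozen encoder, then label each test point
    # with the class of its single nearest training neighbour.
    f_train, f_test = encode(x_train), encode(x_test)
    clf = KNeighborsClassifier(n_neighbors=1).fit(f_train, y_train)
    return float(np.mean(clf.predict(f_test) == y_test))
```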
1605.09782 | 27 | Figure 2: Qualitative results for permutation-invariant MNIST BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).
This alternative is appealing as it requires no additional machinery, and is the approach used for unsupervised feature learning in Radford et al. (2016). On the other hand, it is not clear that the task of distinguishing between real and generated data requires or benefits from intermediate representations that are useful as semantic feature representations. In fact, if G successfully generates the true data distribution p_X(x), D may ignore the input data entirely and predict P (Y = 1) = P (Y = 1|x) = 1/2 unconditionally, not learning any meaningful intermediate representations. | 1605.09782#27 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |