In our experience, almost all dialogue datasets contain some amount of spelling errors. By correcting these, we expect to reduce data sparsity. This can be done with automatic spelling correctors; however, it is important to inspect their effectiveness. For example, for movie scripts, Serban et al. (2016) found that automatic spelling correctors introduced more spelling errors than they corrected, and that a better strategy was to use Wikipedia's list of commonly misspelled English words19 to look up and replace potential spelling errors. Transcribed spoken language corpora often include many non-words in their transcriptions (e.g. uh, oh). Depending on whether or not these provide additional information to the dialogue system, researchers may also want to remove such words using automatic spelling correctors.
18. http://www.ircbeginner.com/ircinfo/abbreviations.html
19. https://en.wikipedia.org/wiki/Commonly_misspelled_English_words
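To make the lookup-and-replace strategy concrete, here is a minimal Python sketch. The misspelling table is a tiny illustrative sample, not the full Wikipedia list, and `correct_utterance` is a hypothetical helper name.

```python
# Lookup-and-replace spelling correction: only tokens found in a curated
# misspelling table are changed; everything else passes through untouched.

MISSPELLINGS = {
    "teh": "the",
    "recieve": "receive",
    "definately": "definitely",
    "accomodate": "accommodate",
}

def correct_utterance(utterance: str) -> str:
    """Replace known misspellings token by token."""
    return " ".join(MISSPELLINGS.get(tok.lower(), tok) for tok in utterance.split())

print(correct_utterance("did you recieve teh package"))
# -> "did you receive the package"
```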
# A.2 Segmenting Speakers and Conversations
Some dialogue corpora, such as those based on movie subtitles, come without explicit speaker segmentation. However, it is often possible to estimate the speaker segmentation, which is useful for building a model of a given speaker, as opposed to a model of the conversation as a whole. For text-based corpora, Serban and Pineau (2015) have recently proposed the use of recurrent neural networks to estimate turn-taking and speaker labels in movie scripts, with promising results.
In the speech recognition literature, this is the subtask of speaker diarisation (Miro et al., 2012; Tranter et al., 2006). When the audio stream of the speech is available, the segmentation is quite accurate, with classification error rates as low as 5%.
A strategy sometimes used for segmentation of spoken dialogues is based on labelling a small subset of the corpus, known as the gold corpus, and training a specific segmentation model on it. The remaining corpus is then segmented iteratively according to the segmentation model, after which the gold corpus is expanded with the most confident segmentations and the segmentation model is retrained. This process is sometimes known as embedded training, and is widely used in other speech recognition tasks (Jurafsky and Martin, 2008). It appears to work well in practice, but has the disadvantage that the interpretation of the label can drift. Naturally, this approach can be applied to text dialogues in a straightforward manner as well; a sketch of the loop follows.
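A minimal sketch of the embedded training loop, assuming placeholder callables `train_segmenter`, `segment`, and `confidence` for whatever segmentation model is used; only the control flow is taken from the description above.

```python
# Embedded training: retrain on the gold corpus, segment the rest, absorb the
# most confident segmentations into the gold corpus, and repeat.

def embedded_training(gold_corpus, unlabelled, train_segmenter, segment,
                      confidence, threshold=0.9, max_rounds=10):
    model = None
    for _ in range(max_rounds):
        model = train_segmenter(gold_corpus)           # retrain on current gold set
        scored = [(d, segment(model, d)) for d in unlabelled]
        confident = [(d, s) for d, s in scored
                     if confidence(model, d, s) >= threshold]
        if not confident:                              # nothing confident enough: stop
            break
        gold_corpus.extend(s for _, s in confident)    # grow the gold corpus
        accepted = {id(d) for d, _ in confident}
        unlabelled = [d for d in unlabelled if id(d) not in accepted]
    return model
```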
In certain corpora, such as those based on chat channels or extracted from movie subtitles, many conversations occur in sequence. In some cases, there are no labels marking the beginning and end of separate conversations. Similarly, certain corpora with multiple speakers, such as corpora based on chat channels, contain several conversations occurring in parallel (i.e. simultaneously) but do not contain any segmentation separating these conversations. This makes it hard to learn a meaningful model from such conversations, because they do not represent consistent speakers or coherent semantic topics.
To leverage such data for learning individual conversations, researchers have proposed methods to automatically estimate segmentations of conversations (Lowe et al., 2015a; Nio et al., 2014a). Earlier solutions were mostly based on hand-crafted rules and seemed to work well upon manual inspection. For chat forums, one solution involves thresholding the beginning and end of conversations based on time (e.g. a delay of more than x minutes between utterances), and eliminating speakers from the conversation unless they are referred to explicitly by other speakers (Lowe et al., 2015a). More advanced techniques involve maximum-entropy classifiers, which leverage the content of the utterances in addition to the discourse structure and timing information (Elsner and Charniak, 2008). For movie scripts, researchers have proposed the use of simple information-retrieval similarity measures, such as cosine similarity, to identify conversations (Nio et al., 2014a). Based on their performance in estimating turn-taking and speaker labels, recurrent neural networks also hold promise for segmenting conversations (Serban and Pineau, 2015).
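As an illustration, the time-threshold heuristic can be sketched as follows. The `(timestamp_in_minutes, speaker, text)` message representation and the ten-minute default gap are assumptions of this sketch, not values from the cited work.

```python
# Split a chronologically ordered chat log into conversations: a gap of more
# than gap_minutes between consecutive utterances starts a new conversation.

def split_by_time(messages, gap_minutes=10.0):
    conversations, current, last_time = [], [], None
    for t, speaker, text in messages:
        if last_time is not None and t - last_time > gap_minutes:
            conversations.append(current)   # close the previous conversation
            current = []
        current.append((t, speaker, text))
        last_time = t
    if current:
        conversations.append(current)
    return conversations

log = [(0.0, "A", "hi"), (1.0, "B", "hello"), (25.0, "C", "anyone around?")]
print(len(split_by_time(log)))  # -> 2
```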
# A.3 Discriminative Model Architectures
As discussed in Subsection 2.3, discriminative models aim to predict certain labels or annotations manually associated with a portion of a dialogue. For example, a discriminative model might be trained to predict the intent of a person in a dialogue, or the topic, or a specific piece of information.
In the following subsections, we discuss research directions where discriminative models have been developed to solve dialogue-related tasks.20 This is primarily meant to review and contrast the work from a data-driven learning perspective.

20. It is important to note that although discriminative models have been favored for modeling supervised problems in the dialogue-system literature, in principle generative models ($P(X, Y)$) could be used instead of discriminative models ($P(Y \mid X)$).
# A.3.1 DIALOGUE ACT CLASSIFICATION AND DIALOGUE TOPIC SPOTTING
Here we consider the simple task known as dialogue act classification (or dialogue move recognition). In this task, the goal is to classify a user utterance, independent of the rest of the conversation, as one of K dialogue acts: $P(A \mid U)$, where $A$ is the discrete variable representing the dialogue act and $U$ is the user's utterance. This falls under the general umbrella of text classification tasks, though its application is specific to dialogue. Like the dialogue state tracker model, a dialogue act classification model could be plugged into a dialogue system as an additional natural language understanding component.
Early approaches to this task focused on using n-gram models for classification (Reithinger and Klesen, 1997; Bird et al., 1995). For example, Reithinger et al. assumed that each dialogue act is generated by its own language model. They trained an n-gram language model on the utterances of each dialogue act, $P_\theta(U \mid A)$, and afterwards used Bayes' rule to assign each new utterance a dialogue act probability $P_\theta(A \mid U)$ proportional to the probability of generating the utterance under that act's language model $P_\theta(U \mid A)$. A minimal sketch of this scheme follows.
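The sketch below uses unigram models with add-one smoothing in place of the n-gram models of the original work; the helper names are hypothetical.

```python
# One language model per dialogue act: fit P(U|A) for each act, then score a
# new utterance with Bayes' rule, P(A|U) proportional to P(U|A) P(A).

import math
from collections import Counter

def train(labelled_utterances):
    """labelled_utterances: list of (tokens, act) pairs."""
    counts, act_counts, vocab = {}, Counter(), set()
    for tokens, act in labelled_utterances:
        counts.setdefault(act, Counter()).update(tokens)
        act_counts[act] += 1
        vocab.update(tokens)
    return counts, act_counts, len(vocab)

def classify(tokens, counts, act_counts, vocab_size):
    total = sum(act_counts.values())
    best_act, best_score = None, -math.inf
    for act, word_counts in counts.items():
        n = sum(word_counts.values())
        score = math.log(act_counts[act] / total)            # log P(A)
        for w in tokens:                                     # log P(U|A), smoothed
            score += math.log((word_counts[w] + 1) / (n + vocab_size))
        if score > best_score:
            best_act, best_score = act, score
    return best_act

data = [(["hello", "there"], "greeting"), (["what", "time", "is", "it"], "question")]
print(classify(["hello"], *train(data)))  # -> "greeting"
```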
However, a major problem with this approach is the lack of datasets with annotated dialogue acts. More recent work by Forgues et al. (2014) acknowledged this problem and tried to overcome the data scarcity issue by leveraging word embeddings learned from other, larger text corpora. They created an utterance-level representation by combining the word embeddings of each word, for example by summing the word embeddings or taking the maximum w.r.t. each dimension. These utterance-level representations, together with word counts, were then given as inputs to a linear classifier to classify the dialogue acts. Thus, Forgues et al. showed that by leveraging another, substantially larger, corpus they were able to improve performance on their original task.
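A rough sketch of such a pooled-embedding classifier; the toy embedding table and the scikit-learn classifier are illustrative assumptions, not the setup of Forgues et al. (2014).

```python
# Utterance representation: sum and dimension-wise max of word embeddings,
# concatenated and fed to a linear classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression

EMB = {"hello": np.array([0.1, 0.3]), "there": np.array([0.2, -0.1]),
       "what": np.array([-0.4, 0.2]), "time": np.array([0.0, 0.5])}
DIM = 2

def featurize(tokens):
    vecs = [EMB[t] for t in tokens if t in EMB] or [np.zeros(DIM)]
    return np.concatenate([np.sum(vecs, axis=0), np.max(vecs, axis=0)])

X = np.stack([featurize(["hello", "there"]), featurize(["what", "time"])])
y = ["greeting", "question"]
clf = LogisticRegression().fit(X, y)
print(clf.predict([featurize(["hello"])]))  # -> ["greeting"]
```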
This makes the work on dialogue act classification very appealing from a data-driven perspective. First, it seems that accuracy can be improved by leveraging alternative data sources. Second, unlike dialogue state tracking models, dialogue act classification models typically involve relatively little feature hand-crafting, suggesting that data-driven approaches may be more powerful for these tasks.
# A.3.2 DIALOGUE STATE TRACKING
The core task of the DSTC (Williams et al., 2013) adds more complexity by focusing on tracking the state of a conversation. This is framed as a classification problem: for every time step $t$ of the dialogue, the model is given the current input to the dialogue state tracker (including ASR and SLU outputs) together with external knowledge sources (e.g. bus timetables). The required output is a probability distribution over a set of $N_t$ predefined hypotheses, in addition to the REST hypothesis (which represents the probability that none of the previous $N_t$ hypotheses are correct). The goal is to match the distribution over hypotheses as closely as possible to the real annotated data.
By providing an open dataset with accurate labels, it has been possible for researchers to perform rigorous comparative evaluations of different classification models for dialogue systems. Models for the DSTC include both statistical approaches and hand-crafted systems. An example of the latter is the system proposed by Wang and Lemon (2013), which relies on having access to a marginal confidence score $P_t(u, s, v)$ for a user dialogue act $u(s = v)$ with slot $s$ and value $v$, given by a subsystem at time $t$. The marginal confidence score gives a heuristic estimate of the probability of a slot taking a particular value. The model must then aggregate all these estimates and confidence scores to compute probabilities for each hypothesis.
In this model, the SLU component may, for example, give the marginal confidence score (inform(data.day=today)=0.9) in the bus scheduling DSTC, meaning that it believes with high confidence (0.9) that the user has requested information for the current day. This marginal confidence score is used to update the belief state of the system $b_t(s, v)$ at time $t$ using a set of hand-crafted updates to the probability distribution over hypotheses. From a data-driven learning perspective, this approach does not make efficient use of the dataset, but instead relies heavily on the accuracy of the hand-crafted tracker outputs.
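To illustrate the flavor of such hand-crafted updates, here is a simplified belief update for a single slot; the exact rule in Wang and Lemon (2013) is more involved, and this discount-and-renormalize form is an assumption of the sketch.

```python
# Each turn, discount the existing belief mass by the SLU confidence and move
# that mass onto the newly informed value, then renormalize.

def update_belief(belief, slot_value, confidence):
    """belief: dict mapping value -> probability for one slot."""
    updated = {v: p * (1.0 - confidence) for v, p in belief.items()}
    updated[slot_value] = updated.get(slot_value, 0.0) + confidence
    z = sum(updated.values())
    return {v: p / z for v, p in updated.items()}

belief = {"today": 0.5, "tomorrow": 0.5}
belief = update_belief(belief, "today", 0.9)   # SLU: inform(data.day=today)=0.9
print(belief)  # -> {"today": 0.95, "tomorrow": 0.05}
```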
More sophisticated models for the DSTC take a dynamic Bayesian approach, modeling the latent dialogue state and observed tracker outputs in a directed graphical model (Thomson and Young, 2010). These models are sometimes called generative state tracking models, though they are still discriminative in nature, since they only attempt to model the state of the dialogue and not the words and speech acts in each dialogue. For simplicity, we drop the index $i$ in the following equations. As before, let $x_t$ be the observed tracker outputs at time $t$. Let $s_t$ be the dialogue state at time $t$, which represents the state of the world including, for example, the user actions (e.g. defined by slot-value pairs) and system actions (e.g. the number of times a piece of information has been requested). For the DSTC, the state $s_t$ must represent the true current slot-value pair at time $t$. Let $r_t$ be the reward observed at time $t$, and let $a_t$ be the action taken by the dialogue system at time $t$. This general framework, also known as a partially observable Markov decision process (POMDP), then defines the graphical model:
$$P_\theta(x_t, s_t, r_t \mid a_t, s_{t-1}) = P_\theta(x_t \mid s_t, a_t) \, P_\theta(s_t \mid s_{t-1}, a_t) \, P_\theta(r_t \mid s_t, a_t), \qquad (3)$$
where at is assumed to be a deterministic variable of the dialogue history. This variable is given in the DSTC, because it comes from the policy used to interact with the humans when gathering the datasets. This approach is attractive from a data-driven learning perspective, because it models the uncertainty (e.g. noise and ambiguity) inherent in all variables of interest. Thus, we might expect such a model to be more robust in real applications.
Now, since all variables are observed in this task, and since the goal is to determine st given the other variables, we are only interested in:
Pθ(st|xt, rt, at) â Pθ(xt|st, at)Pθ(st|stâ1, at)Pθ(rt|st, at), (4) | 1512.05742#215 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | During the past decade, several areas of speech and language understanding
which can then be normalized appropriately, since $s_t$ is a discrete stochastic variable. However, due to the temporal dependency between $s_t$ and $s_{t-1}$, the complexity of the model is similar to that of a hidden Markov model, and thus both learning and inference become intractable when the state, observation, and action spaces are too large. Indeed, as noted by Young et al. (2013), the number of states, actions, and observations can easily reach $10^{10}$ configurations in some dialogue systems. Thus, it is necessary to make simplifying assumptions about the distribution $P_\theta(s_t \mid x_t, r_t, a_t)$ and to approximate the learning and inference procedures (Young et al., 2013). With appropriate structural assumptions and approximations, these models perform well compared to baseline systems on the DSTC (Black et al., 2011).
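To make the state update of Equation (4) concrete, the following toy sketch multiplies the three factors for each candidate state and renormalizes; the conditional tables are illustrative numbers, not learned parameters.

```python
# Discrete posterior over states: unnormalized score per state is the product
# of observation, transition, and reward factors, as in Equation (4).

STATES = ["s0", "s1"]
P_obs   = {("x", "s0", "a"): 0.2, ("x", "s1", "a"): 0.7}        # P(x_t | s_t, a_t)
P_trans = {("s0", "prev", "a"): 0.6, ("s1", "prev", "a"): 0.4}  # P(s_t | s_{t-1}, a_t)
P_rew   = {("r", "s0", "a"): 0.5, ("r", "s1", "a"): 0.5}        # P(r_t | s_t, a_t)

def posterior(x, r, a, s_prev):
    unnorm = {s: P_obs[(x, s, a)] * P_trans[(s, s_prev, a)] * P_rew[(r, s, a)]
              for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(posterior("x", "r", "a", "prev"))  # -> {"s0": 0.3, "s1": 0.7}
```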
Non-Bayesian data-driven models have also been proposed. These models are sometimes called discriminative state tracking models, because they do not assume a generation process for the tracker outputs $x_t$ or for any other variables, but instead only condition on them. For example, Henderson et al. (2013) proposed the use of a feed-forward neural network. At each time step $t$, they extracted a set of features and concatenated a window of $W$ feature vectors together. These are given as input to the neural network, which outputs the probability of each hypothesis from the set of hypotheses. By learning a discriminative model and using a window over the last time steps, they avoid the intractability issues of dynamic Bayesian networks. Instead, their system can be trained with gradient descent methods. This approach could eventually scale to large datasets, and is therefore very attractive for data-driven learning. However, unlike the dynamic Bayesian approaches, these models do not represent probability distributions over variables other than the state of the dialogue. Without probability distributions, it is not clear how to define a confidence interval over the predictions. Thus the models might not provide adequate information to determine when to seek confirmation or clarification following unclear statements.
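A sketch of such a windowed feed-forward tracker; the layer sizes and the PyTorch framing are assumptions of this sketch rather than details from Henderson et al. (2013).

```python
# Concatenate the feature vectors of the last W turns and map them to a
# softmax distribution over the hypotheses plus REST.

import torch
import torch.nn as nn

W, FEAT_DIM, HIDDEN, N_HYP = 3, 16, 32, 5   # 5 = 4 hypotheses + REST

class WindowedTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(W * FEAT_DIM, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, N_HYP),
        )

    def forward(self, window):                 # window: (batch, W, FEAT_DIM)
        flat = window.flatten(start_dim=1)     # concatenate the W feature vectors
        return torch.softmax(self.net(flat), dim=-1)

tracker = WindowedTracker()
probs = tracker(torch.randn(1, W, FEAT_DIM))
print(probs.sum().item())  # ~1.0: a distribution over hypotheses + REST
```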
Researchers have also investigated the use of conditional random fields (CRFs) for state tracking (Ren et al., 2013). This class of models also falls under the umbrella of discriminative state tracking models; however, they are able to take into account temporal dependencies within dialogues by modeling a complete joint distribution over states:
$$P_\theta(S \mid X) \propto \prod_{c \in C} \prod_i f_i(s_c, x_c), \qquad (5)$$
where $C$ is the set of factors, i.e. sets of state and tracker variables across time, $s_c$ is the set of states associated with factor $c$, $x_c$ is the set of observations associated with factor $c$, and $\{f_i\}_i$ is a set of functions parametrized by $\theta$. There exist certain functions $f_i$ for which exact inference is tractable and learning the parameters $\theta$ is efficient (Koller and Friedman, 2009; Serban, 2012). For example, Ren et al. (2013) propose a set of factors which create a linear dependency structure between the dialogue states while conditioning on all the observed tracker outputs:
$$P_\theta(S \mid X) \propto \prod_t \prod_i f_i(s_{t-1}, s_t, s_{t+1}, X). \qquad (6)$$
This creates a dependency between all dialogue states, forcing them to be coherent with each other. This should be contrasted with the feed-forward neural network approach, which does not enforce any sort of consistency between different predicted dialogue states. The CRF models can be trained with gradient descent to optimize the exact log-likelihood, but exact inference is typically intractable. Therefore, an approximate inference procedure, such as loopy belief propagation, is necessary to approximate the posterior distribution over states $s_t$.
In summary, there exist different approaches to building discriminative learning architectures for dialogue. While they are fairly straightforward to evaluate and often form a crucial component of real-world dialogue systems, by themselves they offer only a limited view of what we ultimately want to accomplish with dialogue models. They often require labeled data, which is difficult to acquire on a large scale (except in the case of answer re-ranking), and they require manual feature selection, which reduces their potential effectiveness.
Since each model is trained independently of the other models and components with which it interacts in the complete dialogue system, one cannot give guarantees on the performance of the final dialogue system by evaluating the individual models alone. Thus, we desire models that are capable of producing probability distributions over all possible responses instead of over all annotated labels; in other words, models that can actually generate new responses by selecting the highest-probability next utterance. This is the subject of the next section.
# A.4 Response Generation Models
Both the response re-ranking approach and the generative response model approach have allowed for the use of large-scale unannotated dialogue corpora for training dialogue systems. We therefore close this section by discussing these two classes of approaches.
In general, approaches which aim to generate responses have the potential to learn semantically more powerful representations of dialogues than models trained for dialogue state tracking or dialogue act classification tasks: the concepts they are able to represent are limited only by the content of the dataset, whereas dialogue state tracking and dialogue act classification models are limited by the annotation scheme used (e.g. the set of possible slot-value pairs pre-specified for the DSTC).
# A.4.1 RE-RANKING RESPONSE MODELS
Researchers have recently turned their attention to the problem of building models that produce answers by re-ranking a set of candidate answers and outputting the one with the highest rank or probability. While the task may seem artificial, its main advantage is that it allows the use of completely unannotated datasets. Unlike dialogue state tracking, this task does not require datasets where experts have labeled every utterance and system response. It only requires knowing the sequence of utterances, which can be extracted automatically from transcribed conversations.
Banchs and Li (2012) construct an information retrieval system based on movie scripts using the vector space model. Their system searches through a database of movie scripts to find a dialogue similar to the current dialogue with the user, and then emits the response from the closest dialogue in the database. Similarly, Ameixa et al. (2014) also use an information retrieval system, but based on movie subtitles instead of movie scripts. They show that their system gives sensible responses to questions, and that bootstrapping an existing dialogue system from movie subtitles improves answering of out-of-domain questions. Both approaches assume that the responses given in the movie script and movie subtitle corpora are appropriate. Such information retrieval systems consist of a relatively small set of manually tuned parameters. For this reason, they do not require (annotated) labels and can therefore take advantage of raw data (in this case, movie scripts and movie subtitles). However, these systems are effectively nearest-neighbor methods. They do not learn rich representations from dialogues which could be used, for example, to generalize to previously unseen situations. Furthermore, it is unclear how to turn such models into full dialogue agents. They are not robust, and it is not clear how to maintain the dialogue state. Contrary to search engines, which present an entire page of results, the dialogue system is only allowed to give a single response to the user.
Lowe et al. (2015a) also propose a re-ranking approach using the Ubuntu Dialogue Corpus. The authors propose an affinity model between a context $c$ (e.g. five consecutive utterances in a conversation) and a potential reply $r$. Given a context-reply pair, the model compares the output of a context-specific LSTM against that of a response-specific LSTM and outputs whether or not the response is correct for the given context. The model maximizes the likelihood of a correct context-response pair:
$$\max_\theta \sum_i P_\theta(\text{true response} \mid c_i, r_i)^{I_{c_i}(r_i)} \left(1 - P_\theta(\text{true response} \mid c_i, r_i)\right)^{1 - I_{c_i}(r_i)}, \qquad (7)$$
where $\theta$ denotes the set of all model parameters and $I_{c_i}(\cdot)$ is an indicator function that returns 1 when $r_i$ is the correct response to $c_i$ and 0 otherwise. Learning in the model uses stochastic gradient descent. As is typical with neural network architectures, this learning procedure scales to large datasets. Given a context, the trained model can be used to pick an appropriate answer from a set of potential answers. This model assumes that the responses given in the corpus are appropriate (i.e., it does not generate novel responses). However, unlike the above information retrieval systems, this model is not provided with a similarity metric as in the vector space model, but must instead learn the semantic relevance of a response to a context. This approach is more attractive from a data-driven learning perspective because it uses the dataset more efficiently and avoids costly hand-tuning of parameters.
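A sketch of such a dual-encoder affinity model; the bilinear scoring form, the dimensions, and the PyTorch framing are assumptions of this sketch rather than the exact architecture of Lowe et al. (2015a).

```python
# Encode context and response with separate LSTMs and score the pair with a
# bilinear form passed through a sigmoid: P(true response | c, r).

import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.context_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.response_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.M = nn.Parameter(torch.randn(hidden, hidden) * 0.01)

    def forward(self, context_ids, response_ids):
        _, (c, _) = self.context_lstm(self.emb(context_ids))   # final hidden states
        _, (r, _) = self.response_lstm(self.emb(response_ids))
        score = (c[-1] @ self.M * r[-1]).sum(dim=-1)           # bilinear score
        return torch.sigmoid(score)

model = DualEncoder()
p = model(torch.randint(0, 1000, (1, 20)), torch.randint(0, 1000, (1, 10)))
print(p.item())  # probability that the response matches the context
```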
# A.4.2 FULL GENERATIVE RESPONSE MODELS
Generative dialogue response strategies are designed to automatically produce utterances by composing text (see Section 2.4). A straightforward way to define the set of dialogue system actions is to consider them as sequences of words which form utterances. Sordoni et al. (2015b) and Serban et al. (2016) both use this approach. They assume that both the user and the system utterances can be represented by the same generative distribution:
$$P_\theta(u_1, \dots, u_T) = \prod_{t=1}^{T} P_\theta(u_t \mid u_{<t}) \qquad (8)$$

$$= \prod_{t=1}^{T} \prod_{n=1}^{N} P_\theta(w_{t,n} \mid w_{t,<n}, u_{<t}), \qquad (9)$$
where the dialogue consists of T utterances $u_1, \dots, u_T$ and $w_{t,n}$ is the nth token in utterance t. The variable $u_{<t}$ indicates the sequence of utterances which precede $u_t$, and similarly for $w_{t,<n}$. Further, the probability of the first utterance is defined as $P(u_1 \mid u_{<1}) = P(u_1)$, and the first word of each utterance only conditions on the previous utterance, i.e. $w_{t,<1}$ is 'null'. Tokens can be words, as well as speech and dialogue acts. The set of tokens depends on the particular application domain, but in general the set must be able to represent all desirable system actions. In particular, the set must contain an end-of-utterance token to allow the model to express turn-taking. This approach is similar to language modeling. For differentiable models, training is based on maximum log-likelihood using stochastic gradient descent methods. As discussed in Subsection 2.4, these models project words and dialogue histories onto a Euclidean space. Furthermore, when trained on text only, they can be thought of as unsupervised machine learning models.
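As a minimal illustration of the factorization in Eqs. (8)-(9), the sketch below scores a toy dialogue under a count-based token-level conditional. The bigram conditional and the `<eou>` marker are assumptions standing in for the neural conditional $P_\theta(w_{t,n} \mid w_{t,<n}, u_{<t})$; the point is only the double product over utterances and tokens.

```python
import math
from collections import defaultdict

EOU = "<eou>"  # end-of-utterance token, so the model can express turn-taking

# Toy corpus: each dialogue is a list of utterances, each a list of tokens.
corpus = [[["hello", EOU], ["hi", "there", EOU]],
          [["hello", EOU], ["hello", EOU]]]

# Count-based stand-in for P_theta(w_{t,n} | w_{t,<n}, u_{<t}): condition
# only on the single previous token, across utterance boundaries.
counts = defaultdict(lambda: defaultdict(int))
for dialogue in corpus:
    prev = "<s>"
    for utterance in dialogue:
        for w in utterance:
            counts[prev][w] += 1
            prev = w

def cond_prob(w, prev, eps=1e-9):
    total = sum(counts[prev].values())
    return (counts[prev][w] + eps) / (total + eps)

def dialogue_log_prob(dialogue):
    """log P_theta(u_1, ..., u_T) = sum_t sum_n log P_theta(w_{t,n} | history)."""
    logp, prev = 0.0, "<s>"
    for utterance in dialogue:
        for w in utterance:
            logp += math.log(cond_prob(w, prev))
            prev = w
    return logp

print(dialogue_log_prob(corpus[0]))
```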
Sordoni et al. (2015b) use the above approach to generate responses for posts on Twitter. Specifically, $P_\theta(u_m \mid u_{<m})$ is given by a recurrent neural network which generates a response word-by-word based on Eq. (9). The model learns its parameters using stochastic gradient descent on a corpus of Twitter messages. The authors then combine their generative model with a machine translation
system and demonstrate that the hybrid system outperforms a state-of-the-art machine translation system (Ritter et al., 2011).
Serban et al. (2016) extend the above model to generate responses for movie subtitles and movie scripts. Specifically, Serban et al. (2016) adapt a hierarchical recurrent neural network (Sordoni et al., 2015a), which they argue is able to represent the common ground between the dialogue interlocutors. They also propose to add speech and dialogue acts to the vocabulary of the model to make the interaction with the system more natural. However, since the model is used in a standalone manner, i.e., without combining it with a machine translation system, the majority of the generated responses are highly generic (e.g. I'm sorry or I don't know). The authors conclude that this is a limitation of all neural network-based generative models for dialogue (e.g., (Serban et al., 2016; Sordoni et al., 2015b; Vinyals and Le, 2015)). The problem appears to lie in the distribution of words in the dialogue utterances, which primarily consist of pronouns, punctuation tokens and a
few common verbs but rarely nouns, verbs and adjectives. When trained on such a skewed distribution, the models do not learn to represent the semantic content of dialogues very well. This issue is exacerbated by the fact that dialogue is inherently ambiguous and multi-modal, which makes it more likely for the model to fall back on a generic response. As a workaround, Li et al. (2015) increase response diversity by changing the objective function at generation time to also maximize the mutual information between the context, i.e. the previous utterances, and the response utterance. However, it is not clear what impact this artificial diversity has on the effectiveness or naturalness of the dialogue system. It is possible that the issue may require larger corpora to learn semantic representations of dialogue, more context (e.g. longer conversations, user profiles and task-specific corpora) and multi-modal interfaces to reduce uncertainty. Further research is needed to resolve this question.
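As a sketch of the reranking idea, the following ranks candidate responses by a mutual-information-style score $\log P(r \mid c) - \lambda \log P(r)$. The scoring callables, the toy log-probabilities and the value of $\lambda$ are placeholder assumptions, not the trained models of Li et al. (2015).

```python
def rerank_mmi(candidates, log_p_r_given_c, log_p_r, lam=0.5):
    """Rank responses by log P(r|c) - lam * log P(r). Penalizing the
    response prior demotes generic answers that score well under any
    context (e.g. "i don't know")."""
    scored = [(log_p_r_given_c(r) - lam * log_p_r(r), r) for r in candidates]
    return [r for _, r in sorted(scored, reverse=True)]

# Toy numbers: the generic response is far more likely a priori.
log_prior = {"i don't know": -1.0, "the meeting is at noon": -6.0}
log_likelihood = {"i don't know": -4.0, "the meeting is at noon": -4.5}

ranked = rerank_mmi(list(log_prior),
                    log_p_r_given_c=log_likelihood.get,
                    log_p_r=log_prior.get)
print(ranked)  # the specific response now outranks the generic one
```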
Wen et al. (2015) train a neural network to generate natural language responses for a closed dialogue domain. They use Amazon Mechanical Turk21 to collect a dataset of dialogue acts and utterance pairs. They then train recurrent neural networks to generate a single utterance as in Eq. (9), but condition on the specified dialogue act:
$$P_\theta(U \mid A) = \prod_{n} P_\theta(w_n \mid w_{<n}, A), \qquad (10)$$
where A is the dialogue act represented by a discrete variable, U is the generated utterance given A, and $w_n$ is the nth word in the utterance. Based on a hybrid approach combining different recurrent neural networks for answer generation and convolutional neural networks for re-ranking answers, they are able to generate diverse utterances representing the dialogue acts in their datasets.
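A minimal sketch of sampling from Eq. (10): generate an utterance word by word from a conditional keyed on the dialogue act A and the previous word. The table-based conditional and the toy act are assumptions standing in for the recurrent generator of Wen et al. (2015).

```python
import random

random.seed(0)

# Assumed toy conditional P(w_n | w_{<n}, A), keyed on (act, previous word).
table = {
    ("inform(food=thai)", "<s>"):        {"the": 1.0},
    ("inform(food=thai)", "the"):        {"restaurant": 1.0},
    ("inform(food=thai)", "restaurant"): {"serves": 1.0},
    ("inform(food=thai)", "serves"):     {"thai": 1.0},
    ("inform(food=thai)", "thai"):       {"food": 0.7, "cuisine": 0.3},
    ("inform(food=thai)", "food"):       {"</s>": 1.0},
    ("inform(food=thai)", "cuisine"):    {"</s>": 1.0},
}

def generate(act, max_len=10):
    """Sample U ~ prod_n P(w_n | w_{<n}, A), stopping at </s>."""
    words, prev = [], "<s>"
    for _ in range(max_len):
        dist = table[(act, prev)]
        w = random.choices(list(dist), weights=list(dist.values()))[0]
        if w == "</s>":
            break
        words.append(w)
        prev = w
    return " ".join(words)

print(generate("inform(food=thai)"))
```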
Similar to the models which re-rank answers, generative models may be used as complete dialogue systems or as response generation components of other dialogue systems. However, unlike the models which re-rank answers, the word-by-word generative models can generate entirely new utterances never seen before in the training set. Further, in certain models such as those cited above, response generation scales irrespective of dataset size.
21. http://www.mturk.com

# A.5 User Simulation Models
In the absence of large datasets, some researchers have turned to building user simulation models (sometimes referred to as "user models") to train dialogue strategies. User simulation models aim to produce natural, varied and consistent interactions from a fixed corpus, as stated by Pietquin and Hastie (2013, p. 2): "An efficient user simulation should not only reproduce the statistical distribution of dialogue acts measured in the data but should also reproduce complete dialogue structures." As such, they model the conditional probability of the user utterances given previous user and system utterances:
$$P_\theta(u^{\text{user}}_t \mid u^{\text{user}}_{<t}, u^{\text{system}}_{<t}), \qquad (11)$$

where θ are the model parameters, and $u^{\text{user}}_t$ and $u^{\text{system}}_t$ are the user utterance (or action) and the system utterance (or action), respectively, at time t. Similarly, $u^{\text{user}}_{<t}$ and $u^{\text{system}}_{<t}$ indicate the sequences of user and system utterances that precede $u^{\text{user}}_t$ and $u^{\text{system}}_t$, respectively.
There are two main differences between user simulation models and the generative response models discussed in Subsection A.4.2. First, user simulation models never model the distribution over system utterances, but instead only model the conditional distribution over user utterances given previous user and system utterances. Second, user simulation models usually model dialogue acts as opposed to word tokens. Since a single dialogue act may represent many different utterances, the models generalize well across paraphrases. However, training such user simulation models requires access to a dialogue corpus with annotated dialogue acts, and limits their application to training dialogue systems which work on the same set of dialogue acts. For spoken dialogue systems, user simulation models are usually combined with a model over speech recognition errors based on the automatic speech recognition system but, for simplicity, we omit this aspect in our analysis.
Researchers initially experimented with n-gram-based user simulation models (Eckert et al., 1997; Georgila et al., 2006), which are defined as:
$$P_\theta(u^{\text{user}}_t \mid u^{\text{system}}_{t-1}, u^{\text{user}}_{t-2}, \dots, u^{\text{system}}_{t-n-1}) = \theta_{u^{\text{user}}_t,\, u^{\text{system}}_{t-1},\, u^{\text{user}}_{t-2},\, \dots,\, u^{\text{system}}_{t-n-1}}, \qquad (12)$$
where n is an even integer, and θ is an n-dimensional tensor (table) which satisfies:
$$\sum_{u^{\text{user}}_t} \theta_{u^{\text{user}}_t,\, u^{\text{system}}_{t-1},\, u^{\text{user}}_{t-2},\, \dots,\, u^{\text{system}}_{t-n-1}} = 1. \qquad (13)$$
These models are trained either to maximize the log-likelihood of the observations, by setting $\theta_{u^{\text{user}}_t,\, u^{\text{system}}_{t-1},\, \dots,\, u^{\text{system}}_{t-n-1}}$ equal to (a constant times) the number of occurrences of each corresponding n-gram, or on a related objective function which encourages smoothness and therefore reduces data sparsity for larger n's (Goodman, 2001). Even with smoothing, n has to be kept small, and these models are therefore unable to maintain the history and goals of the user over several utterances (Schatzmann et al., 2005). Consequently, the goal of the user changes over time, which has a detrimental effect on the performance of the dialogue system trained using the user simulator.
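For concreteness, here is a minimal sketch of the n = 2 case: estimate θ by normalized co-occurrence counts of (system act, user act) pairs and sample user acts from the result. The toy dialogue acts are assumptions.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Toy annotated corpus of (system act, user act) turn pairs.
turns = [("request(area)", "inform(area=north)"),
         ("request(area)", "inform(area=south)"),
         ("request(area)", "inform(area=north)"),
         ("confirm(food=thai)", "affirm()"),
         ("confirm(food=thai)", "negate()")]

# Maximum-likelihood theta: normalized co-occurrence counts (n = 2 case).
counts = defaultdict(Counter)
for sys_act, user_act in turns:
    counts[sys_act][user_act] += 1

def simulate_user(sys_act):
    """Sample u_t^user ~ P_theta(. | u_{t-1}^system)."""
    dist = counts[sys_act]
    total = sum(dist.values())
    return random.choices(list(dist), weights=[c / total for c in dist.values()])[0]

print(simulate_user("request(area)"))  # e.g. inform(area=north) w.p. 2/3
```

With larger n the same table is simply keyed on longer act histories, which is where the data sparsity discussed above bites.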
Several solutions have been proposed to solve the problem of maintaining the history of the dialogue. Pietquin (2004) proposes to condition the n-gram model on the user's goal:
$$P_\theta(u^{\text{user}}_t \mid u^{\text{system}}_{t-1}, u^{\text{user}}_{t-2}, \dots, u^{\text{system}}_{t-n-1}, g), \qquad (14)$$
where g is the goal of the user, defined as a set of slot-value pairs. Unfortunately, not only must the goal lie within a set of hand-crafted slot-value pairs, but its distribution when simulating must also be defined by experts. Using a more data-driven approach, Georgila et al. (2006) propose to condition the n-gram model on additional features:
$$P_\theta(u^{\text{user}}_t \mid u^{\text{system}}_{t-1}, u^{\text{user}}_{t-2}, \dots, u^{\text{system}}_{t-n-1}, f(u^{\text{user}}_{<t}, u^{\text{system}}_{<t})), \qquad (15)$$
where $f(u^{\text{user}}_{<t}, u^{\text{system}}_{<t})$ is a function mapping all previous user and system utterances to a low-dimensional vector that summarizes the previous interactions between the user and the system (e.g. the slot-value pairs that the user has provided to the system up to time t). Now, θ can be learned using maximum log-likelihood with stochastic gradient descent.
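A minimal sketch of one possible choice of $f(\cdot)$, summarizing the history by which slots the user has already filled; the act string format and the slot inventory are assumptions.

```python
SLOTS = ("area", "food", "price")  # assumed slot inventory

def history_features(user_utterances):
    """Map all previous user acts to a small summary vector: for each slot,
    1 if the user has already provided a value for it, else 0."""
    provided = set()
    for act in user_utterances:
        if act.startswith("inform(") and "=" in act:
            provided.add(act[len("inform("):].split("=")[0])
    return [int(slot in provided) for slot in SLOTS]

print(history_features(["inform(area=north)", "inform(food=thai)"]))  # [1, 1, 0]
```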
More sophisticated probabilistic models have been proposed based on directed graphical models, such as hidden Markov models and input-output hidden Markov models (Cuayáhuitl et al., 2005), and undirected graphical models, such as conditional random fields based on linear chains (Jung et al., 2009). Inspired by Pietquin (2005), Pietquin (2007) and Rossignol et al. (2011) propose the following directed graphical model:
$$P_\theta(u^{\text{user}}_t \mid u^{\text{user}}_{<t}, u^{\text{system}}_{<t}) = \sum_{g_t, k_t} P_\theta(u^{\text{user}}_t \mid g_t, k_t, u^{\text{user}}_{<t}, u^{\text{system}}_{<t})\, P_\theta(g_t \mid k_t)\, P_\theta(k_t \mid k_{<t}, u^{\text{user}}_{<t}, u^{\text{system}}_{<t}), \qquad (16)$$
where $g_t$ is a discrete random variable representing the user's goal at time t (e.g. a set of slot-value pairs), and $k_t$ is another discrete random variable representing the user's knowledge at time t (e.g. a set of slot-value pairs). This model allows the user to change goals during the dialogue, which would be the case, for example, if the user is notified by the dialogue system that the original goal cannot be accomplished. The dependency on previous user and system utterances for $u^{\text{user}}_t$ and $k_t$ may be limited to a small number of previous turns, as well as a set of hand-crafted features computed on these utterances. For example, the conditional probability:
$$P_\theta(u^{\text{user}}_t \mid g_t, k_t, u^{\text{user}}_{<t}, u^{\text{system}}_{<t}), \qquad (17)$$
may be approximated by an n-gram model with additional features as in Georgila et al. (2006). Generating user utterances can be done in a straightforward manner by using ancestral sampling: first, sample $k_t$ given $k_{<t}$ and the previous user and system utterances; then, sample $g_t$ given $k_t$; and finally, sample $u^{\text{user}}_t$ given $g_t$, $k_t$ and the previous user and system utterances. The model can be trained using maximum log-likelihood. If all variables are observed, i.e. $g_t$ and $k_t$ have been given by human annotators, then the maximum-likelihood parameters can be found similarly to n-gram models, by counting the co-occurrences of variables. If some variables are missing, they can be estimated using the expectation-maximization (EM) algorithm, since the dependencies form a linear chain. Rossignol et al. (2011) also propose to regularize the model by assuming a Dirichlet distribution prior over the parameters, which is straightforward to combine with the EM algorithm.
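The three-step ancestral sampler just described, sketched with toy conditional tables; the simplified conditioning (each table keyed only on its parent variables) and the example acts are assumptions.

```python
import random

random.seed(0)

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

# Assumed toy conditionals for Eq. (16).
p_k = {"none": {"knows_hours": 0.3, "knows_nothing": 0.7}}       # P(k_t | k_{t-1}, ...)
p_g = {"knows_hours":  {"book_table": 0.9, "ask_hours": 0.1},    # P(g_t | k_t)
       "knows_nothing": {"ask_hours": 0.8, "book_table": 0.2}}
p_u = {("book_table", "knows_hours"):   {"inform(booking)": 1.0},  # P(u | g_t, k_t, ...)
       ("ask_hours",  "knows_hours"):   {"request(hours)": 1.0},
       ("ask_hours",  "knows_nothing"): {"request(hours)": 1.0},
       ("book_table", "knows_nothing"): {"inform(booking)": 1.0}}

# Ancestral sampling: k_t, then g_t | k_t, then u_t | g_t, k_t.
k = sample(p_k["none"])
g = sample(p_g[k])
u = sample(p_u[(g, k)])
print(k, g, u)
```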
User simulation models are particularly useful in the development of dialogue systems based on reinforcement learning methods (Singh et al., 2002; Schatzmann et al., 2006; Pietquin and Dutoit, 2006; Frampton and Lemon, 2009; Jurčíček et al., 2012; Png and Pineau, 2011; Young et al., 2013). Furthermore, many user simulation models, such as those trainable with stochastic gradient descent or co-occurrence statistics, are able to scale to large corpora. In the light of the increasing availability of large dialogue corpora, there are ample opportunities for building novel user simulation models, which aim to better represent real user behavior, and in turn for training dialogue systems, which aim to solve more general and more difficult tasks. Despite their similarities, research on user simulation
# Memory-based control with recurrent neural networks

Nicolas Heess*, Jonathan J Hunt*, Timothy P Lillicrap, David Silver
Google Deepmind
* These authors contributed equally. heess, jjhunt, countzero, davidsilver @ google.com
# Abstract
Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control -- deterministic policy gradient and stochastic value gradient -- to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory, is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.
# Introduction
The use of neural networks for solving continuous control problems has a long tradition. Several recent papers successfully apply model-free, direct policy search methods to the problem of learning neural network control policies for challenging continuous domains with many degrees of freedom [2, 6, 14, 21, 22, 12]. However, all of this work assumes fully observed state.
Many real world control problems are partially observed. Partial observability can arise from different sources, including the need to remember information that is only temporarily available, such as a way sign in a navigation task; sensor limitations or noise; unobserved variations of the plant under control (system identification); or state-aliasing due to function approximation. Partial observability also arises naturally in many tasks that involve control from vision: a static image of a dynamic scene provides no information about velocities, occlusions occur as a consequence of the three-dimensional nature of the world, and most vision sensors are bandwidth-limited and only have a restricted field-of-view.
Resolution of partial observability is non-trivial. Existing methods can roughly be divided into two broad classes:
On the one hand, there are approaches that explicitly maintain a belief state that corresponds to the distribution over the world state given the observations so far. This approach has two major disadvantages: the first is the need for a model, and the second is the computational cost that is typically associated with the update of the belief state [8, 23].
On the other hand, there are model-free approaches that learn to form memories based on interactions with the world. This is challenging since it is a priori unknown which features of the observations will be relevant later, and associations may have to be formed over many steps. For this reason, most model-free approaches tend to assume the fully-observed case. In practice, partial observability is often solved by hand-crafting a solution, such as providing multiple frames at each timestep to allow velocity estimation [16, 14].
In this work we investigate a natural extension of two recent, closely related policy gradient algorithms for learning continuous-action policies to handle partially observed problems. We primarily consider the Deterministic Policy Gradient algorithm (DPG) [24], which is an off-policy policy gradient algorithm that has recently produced promising results on a broad range of difficult, high-dimensional continuous control problems, including direct control from pixels [14]. DPG is an actor-critic algorithm that uses a learned approximation of the action-value (Q) function to obtain approximate action-value gradients. These are then used to update a deterministic policy via the chain rule. We also consider DPG's stochastic counterpart, SVG(0) ([6]; SVG stands for "Stochastic Value Gradients"), which similarly updates the policy via backpropagation of action-value gradients from an action-value critic but learns a stochastic policy.
We modify both algorithms to use recurrent networks trained with backpropagation through time. We demonstrate that the resulting algorithms, Recurrent DPG (RDPG) and Recurrent SVG(0) (RSVG(0)), can be applied to a number of partially observed physical control problems with diverse memory requirements. These problems include: short-term integration of sensor information to estimate the system state (pendulum and cartpole swing-up tasks without velocity information); system identification (cart pole swing-up with variable and unknown pole-length); long-term memory (a robot arm that needs to reach out and grab a payload to move it to the position the arm started from); as well as a simplified version of the water maze task, which requires the agent to learn an exploration strategy to find a hidden platform and then remember the platform's position in order to return to it subsequently. We also demonstrate successful control directly from pixels.
Our results suggest that actor-critic algorithms that rely on bootstrapping for estimating the value function can be a viable option for learning control policies in partially observed domains. We further find that, at least in the setup considered here, there is little performance difference between stochastic and deterministic policies, despite the former being typically presumed to be preferable in partially observed domains.
# 2 Background
We model our environment as a discrete-time, partially observed Markov Decision Process (POMDP). A POMDP is described by a set of environment states S and a set of actions A, an initial state distribution $p_0(s_0)$, a transition function $p(s_{t+1} \mid s_t, a_t)$ and a reward function $r(s_t, a_t)$. This underlying MDP is partially observed when the agent is unable to observe the state $s_t$ directly and instead receives observations from the set O which are conditioned on the underlying state, $p(o_t \mid s_t)$.
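A minimal sketch of this interface: a toy one-dimensional POMDP in which the agent only receives a noisy observation of the underlying position. The dynamics, reward and noise model are illustrative assumptions.

```python
import numpy as np

class NoisyWalkPOMDP:
    """Tiny POMDP: the state is a scalar position; the agent only sees a
    noisy observation of it (p(o_t | s_t) is Gaussian around s_t)."""

    def __init__(self, noise=0.5, seed=0):
        self.noise = noise
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.s = self.rng.normal()          # s_0 ~ p_0(s_0)
        return self._observe()

    def step(self, a):
        self.s = self.s + a                 # p(s_{t+1} | s_t, a_t): deterministic here
        r = -abs(self.s)                    # r(s_t, a_t): stay near the origin
        return self._observe(), r

    def _observe(self):
        return self.s + self.noise * self.rng.normal()   # o_t ~ p(o_t | s_t)

env = NoisyWalkPOMDP()
o = env.reset()
o, r = env.step(-0.1)
```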
The agent only indirectly observes the underlying state of the MDP through the observations. An optimal agent may, in principle, require access to the entire history $h_t = (o_1, a_1, o_2, a_2, \dots, a_{t-1}, o_t)$.
The goal of the agent is thus to learn a policy $\pi(h_t)$ which maps from the history to a distribution over actions $P(A)$ which maximizes the expected discounted reward (below we consider both stochastic and deterministic policies). For stochastic policies we want to maximise
$$J = \mathbb{E}_\tau \left[ \sum_{t=1}^{\infty} \gamma^{t-1}\, r(s_t, a_t) \right], \qquad (1)$$
where the trajectories $\tau = (s_1, o_1, a_1, s_2, \dots)$ are drawn from the trajectory distribution induced by the policy $\pi$: $p(s_1) p(o_1 \mid s_1) \pi(a_1 \mid h_1) p(s_2 \mid s_1, a_1) p(o_2 \mid s_2) \pi(a_2 \mid h_2) \dots$, and where $h_t$ is defined as above. For deterministic policies we replace $\pi$ with a deterministic function $\mu$ which maps directly from states S to actions A, and we replace $a_t \sim \pi(\cdot \mid h_t)$ with $a_t = \mu(h_t)$. In the algorithms below we make use of the action-value function $Q^\pi$. For a fully observed MDP, when we have access to s, the action-value function is defined as the expected future discounted reward when in state $s_t$ the agent takes action $a_t$ and thereafter follows policy $\pi$. Since we are
interested in the partially observed case, where the agent does not have access to s, we instead define $Q^\pi$ in terms of h:
Q" (ht, ar) = Es, jn, [re(St,¢)] + Exy jn, a: » y'r(stris oa (2) i=1 | 1512.04455#7 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
where $\tau_{>t} = (s_{t+1}, o_{t+1}, a_{t+1}, \dots)$ is the future trajectory and the two expectations are taken with respect to the conditionals $p(s_t \mid h_t)$ and $p(\tau_{>t} \mid h_t, a_t)$ of the trajectory distribution associated with $\pi$. Note that this is equivalent to defining $Q^\pi$ in terms of the belief state, since h is a sufficient statistic.
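Eq. (2) can be estimated by Monte-Carlo rollouts. The sketch below reuses the toy POMDP from the Background section; treating the scalar history summary h as the underlying state is a simplifying assumption standing in for sampling $s_t \sim p(s_t \mid h_t)$, and the linear policy is a placeholder.

```python
def mc_q_estimate(env, policy, h, a, n_rollouts=100, horizon=50, gamma=0.99):
    """Monte-Carlo estimate of Q^pi(h_t, a_t) as in Eq. (2): the immediate
    reward plus the average discounted return of sampled future trajectories."""
    total = 0.0
    for _ in range(n_rollouts):
        env.reset()
        env.s = h                        # place the environment at the queried state
        o, r = env.step(a)               # immediate reward r(s_t, a_t)
        ret, discount = r, gamma
        for _ in range(horizon):
            o, r = env.step(policy(o))   # a_{t+i} ~ pi(. | h_{t+i})
            ret += discount * r
            discount *= gamma
        total += ret
    return total / n_rollouts

print(mc_q_estimate(NoisyWalkPOMDP(seed=1), policy=lambda o: -0.5 * o, h=1.0, a=-0.5))
```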
Obviously, for most POMDPs of interest, it is not tractable to condition on the entire sequence of observations. A central challenge is to learn how to summarize the past in a scalable way.
# 3 Algorithms
# 3.1 Recurrent DPG
We extend the Deterministic Policy Gradient (DPG) algorithm for MDPs introduced in [24] to deal with partially observed domains and pixels. The core idea of the DPG algorithm for the fully observed case is that for a deterministic policy $\mu^\theta$ with parameters θ, and given access to the true action-value function associated with the current policy, $Q^\mu$, the policy can be updated by backpropagation:
∂J(θ)/∂θ = E_{s∼ρµ}[ ∂Qµ(s, a)/∂a |_{a=µθ(s)} · ∂µθ(s)/∂θ ]    (3)
where the expectation is taken with respect to the (discounted) state-visitation distribution ρµ induced by the current policy µθ [24]. Similar ideas had previously been exploited in NFQCA [4] and in the ADP community [13]. In practice the exact action-value function Qµ is replaced by an approximate critic Qω with parameters ω that is differentiable in a and which can be learned, e.g., with Q-learning.
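In modern autodiff frameworks this update reduces to a few lines. The sketch below is a minimal illustration of eq. (3) only, assuming PyTorch; the toy networks, shapes, and placeholder data are hypothetical, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Toy feedforward actor and critic for the fully observed case; layer sizes
# are hypothetical, and `states` is placeholder data standing in for s ~ rho^mu.
actor = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh())
critic = nn.Sequential(nn.Linear(4 + 1, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

states = torch.randn(32, 4)  # a minibatch of states

# Maximizing Q(s, mu(s)): backpropagating through the critic applies the
# chain rule dQ/da * dmu/dtheta of eq. (3) automatically.
actor_loss = -critic(torch.cat([states, actor(states)], dim=1)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
```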
In order to ensure the applicability of our approach to large observation spaces (e.g. from pixels), we use neural networks for all function approximators. These networks, with convolutional layers, have proven effective at many sensory processing tasks [11, 18], and have been demonstrated to be effective for scaling reinforcement learning to large state spaces [14, 16]. [14] proposed modifications to DPG necessary in order to learn effectively with deep neural networks, which we make use of here (cf. sections 3.1.1, 3.1.2).
Under partial observability the optimal policy and the associated action-value function are both functions of the entire preceding observation-action history ht. The primary change we introduce is the use of recurrent neural networks, rather than feedforward networks, in order to allow the network to learn to preserve (limited) information about the past which is needed in order to solve the POMDP. Thus, writing µ(h) and Q(h, a) rather than µ(s) and Q(s, a), we obtain the following policy update:
∂J(θ)/∂θ = E_τ[ Σ_t ∂Qµ(h_t, a)/∂a |_{a=µθ(h_t)} · ∂µθ(h_t)/∂θ ]    (4)
where we have written the expectation now explicitly over entire trajectories τ = (s1, o1, a1, s2, o2, a2, . . .), which are drawn from the trajectory distribution induced by the current policy, and ht = (o1, a1, . . . , ot−1, at−1, ot) is the observation-action trajectory prefix at time step t, both as introduced above.¹ In practice, as in the fully observed case, we replace Qµ by a learned approximation Qω (which is also a recurrent network, with parameters ω). Thus, rather than directly conditioning on the entire observation history, we effectively train recurrent neural networks to summarize this history in their recurrent state using backpropagation through time (BPTT). For long episodes or continuing tasks it is possible to use truncated BPTT, although we do not do so here.

¹ A discount factor γ^t appears implicitly in the update; it is absorbed into the discounted state-visitation distribution in eq. 3. As is common in policy gradient implementations, we ignore this term in practice (e.g. [26]).

The full algorithm is given below (Algorithm 1).
RDPG is an algorithm for learning deterministic policies. As discussed in the literature [25, 20], it is possible to construct examples where deterministic policies perform poorly under partial observability. In RDPG the policy is conditioned on the entire history, but since we are using function approximation, state aliasing may still occur, especially early in learning. We therefore also investigate a recurrent version of the stochastic counterpart to DPG: SVG(0) [6] (DPG can be seen as the deterministic limit of SVG(0)). In addition to learning stochastic policies, SVG(0) also admits on-policy learning, whereas DPG is inherently off-policy (see below).
Similar to DPG, SVG(0) updates the policy by backpropagating ∂Q/∂a from the action-value function, but does so for stochastic policies. This is enabled through a "re-parameterization" (e.g. [10, 19]) of the stochastic policy: the stochastic policy is represented in terms of a fixed, independent noise source and a parameterized deterministic function that transforms a draw from that noise source, i.e., in our case, a = πθ(h, ν) with ν ∼ β(·) where β is some fixed distribution. For instance, a Gaussian policy πθ(a|h) = N(a|µθ(h), σ²) can be re-parameterized as follows: a = πθ(h, ν) = µθ(h) + σν where ν ∼ N(·|0, 1). See [6] for more details.
The stochastic policy is updated as follows:
∂J(θ)/∂θ = E_{τ,ν}[ Σ_t ∂Qπ(h_t, a)/∂a |_{a=πθ(h_t, ν_t)} · ∂πθ(h_t, ν_t)/∂θ ]    (5)
with τ drawn from the trajectory distribution, which is conditioned on IID draws of νt from β at each time step. The full algorithm is provided in the supplementary material (Algorithm 2).
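To illustrate the re-parameterization underlying the update in eq. (5), the sketch below (PyTorch; `mu_net`, the fixed noise scale, and the data are hypothetical stand-ins) shows how a sampled action remains differentiable in the policy parameters:

```python
import torch
import torch.nn as nn

# A minimal sketch of a re-parameterized Gaussian policy. `mu_net` maps a
# history summary h (e.g. an LSTM state) to an action mean; sigma is a fixed
# exploration scale. All names and sizes are illustrative.
mu_net = nn.Linear(16, 2)
sigma = 0.2

h = torch.randn(32, 16)           # batch of history summaries (placeholder)
nu = torch.randn(32, 2)           # nu ~ N(0, 1), the independent noise source
action = mu_net(h) + sigma * nu   # a = pi^theta(h, nu) = mu(h) + sigma * nu

# `action` is a differentiable function of the policy parameters, so dQ/da
# can be backpropagated into mu_net exactly as in eq. (5):
q = -(action ** 2).sum()          # stand-in for a critic value Q(h, a)
q.backward()                      # fills mu_net.weight.grad via the chain rule
```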
# 3.1.1 Off-policy learning and experience replay
DPG is typically used in an off-policy setting due to the fact that the policy is deterministic but exploration is needed in order to learn the gradient of Q with respect to the actions. Furthermore, in practice, data efficiency and stability can also be greatly improved by using experience replay (e.g. [4, 5, 14, 16, 6]), and we use the same approach here (see Algorithms 1, 2). Thus, during learning we store experienced trajectories in a database and then replace the expectation in eq. (4) with trajectories sampled from the database.
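Since whole trajectories are stored and replayed (the recurrent networks must be unrolled over complete histories), the replay database operates at the level of episodes. A minimal sketch, with illustrative names and capacity, not the authors' implementation:

```python
import random
from collections import deque

# A minimal episode-level replay buffer: whole trajectories are stored,
# and minibatches of complete episodes are sampled for BPTT updates.
class EpisodeReplayBuffer:
    def __init__(self, capacity=10000):
        self.episodes = deque(maxlen=capacity)  # each entry: one full episode

    def add(self, episode):
        # episode: list of (observation, action, reward) tuples, t = 1..T
        self.episodes.append(episode)

    def sample(self, n):
        # returns a minibatch of N complete episodes
        return random.sample(self.episodes, n)
```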
One consequence of this is a bias in the state distribution in eqs. (3, 5), which no longer corresponds to the state distribution induced by the current policy. With function approximation this can lead to a bias in the learned policy, although this is typically ignored in practice. RDPG and RSVG(0) may similarly be affected; in fact, since policies (and Q) are functions not just of the state but of an entire action-observation history (eq. 4), the bias might be more severe.
One potential advantage of (R)SVG(0) in this context is that it allows on-policy learning, although we do not explore this possibility here. We found that off-policy learning with experience replay remained effective in the partially observed case.
# 3.1.2 Target networks
A second algorithmic feature that has been found to greatly improve the stability of neural-network-based reinforcement learning algorithms that rely on bootstrapping for learning value functions is the use of target networks [4, 14, 16, 6]: the algorithm maintains two copies each of the value function Q and of the policy π, with parameters ω and ω′, and θ and θ′ respectively. ω and θ are the parameters that are being updated by the algorithm; ω′ and θ′ track them with some delay and are used to compute the "target values" for the Q-function update. Different authors have explored different approaches to updating ω′ and θ′. In this work we use "soft updates" as in [14] (see Algorithms 1 and 2 below).
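Concretely, a soft update interpolates the target parameters toward the learned parameters at every step. A minimal sketch, assuming PyTorch modules, with τ a small constant (e.g. 0.001):

```python
import torch

def soft_update(target_net, net, tau=0.001):
    """Soft target update: theta' <- tau * theta + (1 - tau) * theta'."""
    with torch.no_grad():
        for p_tgt, p in zip(target_net.parameters(), net.parameters()):
            p_tgt.mul_(1.0 - tau).add_(tau * p)
```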
# Algorithm 1 RDPG algorithm
Initialize critic network Qω(ht, at) and actor µθ(ht) with parameters ω and θ.
Initialize target networks Qω′ and µθ′ with weights ω′ ← ω, θ′ ← θ.
Initialize replay buffer R.
for episodes = 1, M do
    initialize empty history h0
    for t = 1, T do
        receive observation ot
        ht ← (ht−1, at−1, ot) (append observation and previous action to history)
        select action at = µθ(ht) + ε (with ε: exploration noise)
    end for
    Store the sequence (o1, a1, r1, . . . , oT, aT, rT) in R
    Sample a minibatch of N episodes (o^i_1, a^i_1, r^i_1, . . . , o^i_T, a^i_T, r^i_T), i = 1, . . . , N, from R
    Construct histories h^i_t = (o^i_1, a^i_1, . . . , a^i_{t−1}, o^i_t)
    Compute target values (y^i_1, . . . , y^i_T) for each sampled episode using the recurrent target networks:
        y^i_t = r^i_t + γ Qω′(h^i_{t+1}, µθ′(h^i_{t+1}))
    Compute the critic update (using BPTT):
        Δω = (1/NT) Σ_i Σ_t (y^i_t − Qω(h^i_t, a^i_t)) ∂Qω(h^i_t, a^i_t)/∂ω
    Compute the actor update (using BPTT):
        Δθ = (1/NT) Σ_i Σ_t ∂Qω(h^i_t, µθ(h^i_t))/∂a · ∂µθ(h^i_t)/∂θ
    Update the actor and critic using Adam [9]
    Update the target networks:
        ω′ ← τω + (1 − τ)ω′
        θ′ ← τθ + (1 − τ)θ′
end for
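The sketch below illustrates one such minibatch update in PyTorch. It is a minimal, hypothetical rendering of the inner loop (network sizes, learning rates, and placeholder data are assumptions; terminal flags and the soft target updates are omitted for brevity), not the authors' implementation:

```python
import torch
import torch.nn as nn

OBS, ACT, HID = 3, 1, 32  # hypothetical observation/action/hidden sizes

class RecurrentActor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(OBS + ACT, HID, batch_first=True)
        self.head = nn.Linear(HID, ACT)

    def forward(self, obs, prev_act):
        # summarize the history (o1, a0, ..., ot, a_{t-1}) in the LSTM state
        h, _ = self.lstm(torch.cat([obs, prev_act], dim=-1))
        return torch.tanh(self.head(h))          # mu(h_t) for every time step

class RecurrentCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(OBS + 2 * ACT, HID, batch_first=True)
        self.head = nn.Linear(HID, 1)

    def forward(self, obs, prev_act, act):
        h, _ = self.lstm(torch.cat([obs, prev_act, act], dim=-1))
        return self.head(h)                      # Q(h_t, a_t) for every step

actor, critic = RecurrentActor(), RecurrentCritic()
actor_tgt, critic_tgt = RecurrentActor(), RecurrentCritic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

# Placeholder minibatch of N episodes of length T (would come from replay).
N, T = 16, 50
obs = torch.randn(N, T + 1, OBS)      # o_1 ... o_{T+1}
act = torch.randn(N, T, ACT)          # a_1 ... a_T
rew = torch.randn(N, T, 1)            # r_1 ... r_T
prev_act = torch.cat([torch.zeros(N, 1, ACT), act], dim=1)  # a_0 = 0

# Recurrent targets y_t = r_t + gamma * Q'(h_{t+1}, mu'(h_{t+1})).
with torch.no_grad():
    mu_tgt = actor_tgt(obs, prev_act)             # mu'(h_t), t = 1..T+1
    q_tgt = critic_tgt(obs, prev_act, mu_tgt)     # Q'(h_t, mu'(h_t))
    y = rew + gamma * q_tgt[:, 1:]

# Critic update: TD regression over whole episodes, gradients via BPTT.
q = critic(obs[:, :-1], prev_act[:, :-1], act)
critic_loss = ((y - q) ** 2).mean()
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: ascend dQ/da, backpropagated through time into the actor.
a = actor(obs[:, :-1], prev_act[:, :-1])
actor_loss = -critic(obs[:, :-1], prev_act[:, :-1], a).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
# (followed by the soft target-network updates of section 3.1.2)
```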
# 4 Results
We tested our algorithms on a variety of partially observed environments, covering different types of memory problems. Videos of the learned policies for all the domains are included in our supplementary videos²; we encourage viewing them, as they may provide a better intuition for the environments. All physical control problems except the simulated water maze (section 4.3) were simulated in MuJoCo [28]. We tested both standard recurrent networks and LSTM networks.
# 4.1 Sensor integration and system identification
Physical control problems with noisy sensors are one of the paradigm examples of partially observed environments. A large amount of research has focused on how to efficiently integrate noisy sensory information over multiple timesteps in order to derive accurate estimates of the system state, or to estimate derivatives of important properties of the system [27].
Here, we consider two simple, standard control problems often used in reinforcement learning, the under-actuated pendulum and cartpole swing-up. We modify these standard benchmark tasks such that in both cases the agent receives no direct information about the velocity of any of the components, i.e. for the pendulum swing-up task the observation comprises only the angle of the pendulum, and for cartpole swing-up it is limited to the angle of the pole and the position of the cart.²

² Video of all the learned policies is available at https://youtu.be/V4_vb1D5NNQ
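Such a partially observed variant can be obtained by simply masking the velocity components of the observation. The sketch below is a minimal illustration, assuming a Gym-style environment API; the velocity indices `vel_idx` are hypothetical and task-specific:

```python
import numpy as np

# A minimal sketch of hiding velocity information from a Gym-style
# environment; `vel_idx` (the indices of the velocity entries in the
# observation vector) is hypothetical and depends on the task.
class HideVelocity:
    def __init__(self, env, vel_idx):
        self.env = env
        self.vel_idx = vel_idx

    def _mask(self, obs):
        return np.delete(obs, self.vel_idx)  # drop the velocity entries

    def reset(self):
        return self._mask(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._mask(obs), reward, done, info
```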
Figure 1: (a) The reward curve for the partially observed pendulum task. Both RDPG and RSVG(0) are able to learn policies which bring the pendulum to an upright position. (b) The reward curve for the cartpole with no velocity information and varying pole lengths. RDPG with LSTM is able to reliably learn a good solution for this task; a purely feedforward agent (DDPG), which can neither estimate velocities nor infer the pole length, is not able to solve the problem.
Figure 2: Reward curves for (a) the hidden-target reacher task and (b) the return-to-start gripper task. In both cases the RDPG agents with LSTMs are able to find good policies, whereas the feedforward agents fail on the memory component. (In both cases the feedforward agents perform clearly better than random, which is expected from the setup of the tasks: for instance, as can be seen in the video, the gripper without memory is still able to grab the payload and move it to a "default" position.) Example frames from the 3-joint reaching task (c) and the gripper task (d).
Velocity is crucial for solving these tasks and thus must be estimated from the history of the system. Figure 1a shows the learning curves for pendulum swing-up. Both RDPG and RSVG(0) were tested on the pendulum task, and both are able to learn good solutions which bring the pole upright.
For the cartpole swing-up task, in addition to not providing the agent with velocity information, we also varied the length of the pole from episode to episode. The pole length is invisible to the agent and needs to be inferred from the response of the system. In this task the sensor integration problem is thus paired with the need for system identification. As can be seen in figure 1b, the RDPG agent with an LSTM network reliably solves this task every time, while a simple feedforward agent (DDPG) fails entirely. RDPG with a simple RNN performs considerably less well than the LSTM agent, presumably due to the relatively long episodes (T=350 steps) and the failure to backpropagate gradients effectively through the plain RNN. We found that a feedforward agent that does receive velocity information can solve the variable-length swing-up task partly, but does so less reliably than the recurrent agent, as it is unable to identify the relevant system parameters (not shown).
# 4.2 Memory tasks
Another type of partially observed task, which has been less studied in the context of reinforcement learning, involves the need to remember explicit information over a number of steps. We constructed two tasks like this. One was a 3-joint reacher which must reach for a randomly positioned target, but the position of the target is only provided to the agent in the initial observation (the entire episode is 80 timesteps). As a harder variant of this task, we constructed a 5-joint gripper which must reach for a (fully observed) payload from a randomized initial configuration and then return the payload to the initial position of its "hand" (T=100). Note that this is a challenging control problem even in the fully observed case. The results for both tasks are shown in figure 2; RDPG agents with LSTM networks solve both tasks reliably, whereas purely feedforward agents fail on the memory components of the task, as can be seen in the supplemental video.
Figure 3: (a) shows the reward curve for different agents performing the water maze task. Both recurrent algorithms are capable of learning good solutions to the problem, while the non-recurrent agent (DDPG) is not. It is particularly notable that, despite learning a deterministic policy, RDPG is able to find search strategies that allow it to locate the platform. (b) shows the number of steps the agents take to reach the platform after a reset, normalized by the number of steps taken for the first attempt. Note that on the 2nd and 3rd attempts the recurrent agents are able to reach the platform much more quickly, indicating that they learn to remember and recall the position of the platform. Example trajectories for the (c) RDPG, (d) RSVG(0) and (e) DDPG agents. The trajectory of the first attempt is purple, the second blue and the third yellow.
# 4.3 Water maze
The Morris water maze has been used extensively in rodents for the study of memory [3]. We tested our algorithms on a simplified version of the task. The agent moves in a 2-dimensional circular space where a small region of the space is an invisible "platform" on which the agent receives a positive reward. At the beginning of the episode the agent and platform are randomly positioned in the tank. The platform position is not visible to the agent, but it "sees" when it is on the platform. The agent needs to search for and stay on the platform to receive reward by controlling its acceleration. After 5 steps on the platform the agent is reset randomly to a new position in the tank, but the platform stays in place for the rest of the episode (T=200). The agent needs to remember the position of the platform to return to it quickly.
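For concreteness, a minimal sketch of such a simplified water maze environment is given below. The radii, speed scale, and wall handling are illustrative assumptions, not the exact task constants:

```python
import numpy as np

# A minimal, hypothetical sketch of the simplified water maze described above.
class WaterMaze:
    def __init__(self, tank_radius=1.0, platform_radius=0.1, horizon=200):
        self.R, self.r, self.T = tank_radius, platform_radius, horizon

    def _random_pos(self):
        angle = np.random.uniform(0, 2 * np.pi)
        radius = self.R * np.sqrt(np.random.uniform())
        return radius * np.array([np.cos(angle), np.sin(angle)])

    def reset(self):
        self.t, self.steps_on_platform = 0, 0
        self.platform = self._random_pos()  # invisible to the agent
        self.pos, self.vel = self._random_pos(), np.zeros(2)
        return self._obs()

    def _obs(self):
        on_platform = np.linalg.norm(self.pos - self.platform) < self.r
        # the agent observes its own position and whether it is on the
        # platform, but never the platform coordinates themselves
        return np.concatenate([self.pos, [float(on_platform)]])

    def step(self, accel):
        self.t += 1
        self.vel += 0.1 * np.clip(accel, -1, 1)   # acceleration control
        self.pos = np.clip(self.pos + self.vel, -self.R, self.R)  # crude wall
        on_platform = np.linalg.norm(self.pos - self.platform) < self.r
        self.steps_on_platform = self.steps_on_platform + 1 if on_platform else 0
        if self.steps_on_platform >= 5:  # reset the agent; platform stays put
            self.pos, self.vel = self._random_pos(), np.zeros(2)
            self.steps_on_platform = 0
        return self._obs(), float(on_platform), self.t >= self.T, {}
```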
It is sometimes presumed that a stochastic policy is required in order to solve problems like this, which require learning a search strategy. Although there is some variability in the results, we found that both RDPG and RSVG(0) were able to find similarly good solutions (figure 3a), indicating that RDPG is able to learn reasonable, deterministic search strategies. Both solutions were able to make use of memory to return to the platform more quickly after discovering it during the initial search (figure 3b). A non-recurrent agent (DDPG) is able to learn a limited search strategy but fails to exploit memory to return to the platform after having been reset to a random position in the tank.
# 4.4 High-dimensional observations
We also tested our agents, with convolutional networks, on solving tasks directly from high-dimensional pixel spaces. We tested on the pendulum task (but now the agent is given only a static rendering of the pendulum at each timestep), and on a two-choice reaching task, where the target disappears after 5 frames (the agent is not allowed to move during the first 5 frames, to prevent it from encoding the target position in its initial trajectory).
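An actor for such pixel inputs combines a convolutional encoder with the recurrent core. A minimal sketch, assuming PyTorch, with hypothetical layer sizes and a 64x64 grayscale input:

```python
import torch
import torch.nn as nn

# A minimal, hypothetical sketch of a recurrent actor operating on pixels:
# each frame is encoded by a small CNN, and the LSTM integrates over time.
class ConvRecurrentActor(nn.Module):
    def __init__(self, action_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(32 * 6 * 6, 128, batch_first=True)
        self.head = nn.Linear(128, action_dim)

    def forward(self, frames):                  # frames: (N, T, 1, 64, 64)
        n, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1))  # encode every frame
        h, _ = self.lstm(z.view(n, t, -1))      # integrate over time
        return torch.tanh(self.head(h))         # one action per time step

actor = ConvRecurrentActor()
actions = actor(torch.randn(4, 10, 1, 64, 64))  # -> shape (4, 10, 2)
```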
Figure 4: RDPG was able to learn good policies directly from high-dimensional renderings for the pendulum (a) and a two-choice reaching task with a disappearing target (b). (c) Example frame from the reaching task.
# 5 Discussion
# 5.1 Variants
In the experiments presented here, the actor and critic networks are entirely disjoint. However, particularly when learning deep, convolutional networks, the filters required in the early layers may be similar between the actor and the critic. Sharing these early layers could improve computational efficiency and learning speed. Similar arguments apply to the recurrent part of the network, which could be shared between the actor and the critic. Such sharing, however, can also result in instabilities, as updates to one network may unknowingly damage or shift the other network. For this reason, we have not used any sharing here, although it is a potential topic for further investigation.
# 5.2 Related work
There is a large body of literature on solving partially observed control problems. We focus on the most closely related work that aims to solve such problems with learned memory.
Several groups [15, 1, 5] have studied the use of model-free algorithms with recurrent networks to solve POMDPs with discrete action spaces. [1] focused on relatively long-horizon ("deep") memory problems in small state-action spaces. In contrast, [5] modified the Atari DQN architecture [16] (i.e. they perform control from high-dimensional pixel inputs) and demonstrated that recurrent Q-learning [15] can perform the required information integration to resolve short-term partial observability (e.g. to estimate velocities), which is handled via stacks of frames in the original DQN architecture.
Continuous action problems with relatively low-dimensional observation spaces have been considered e.g. in [30, 31, 29, 32]. [30] trained LSTM-based stochastic policies using Reinforce; [31, 29, 32] used actor-critic architectures. The algorithm of [31] can be seen as a special case of DPG where the deterministic policy produces the parameters of an action distribution from which the actions are then sampled. This requires suitable exploration at the level of distribution parameters (e.g. exploring in terms of means and variances of a Gaussian distribution); in contrast, SVG(0) also learns stochastic policies but allows exploration at the action level only.
All works mentioned above, except for [32], consider the memory to be internal to the policy and learn the RNN parameters using BPTT, back-propagating either TD errors or policy gradients. [32] instead take the view of [17] and consider memory as extra state dimensions that can be read and set by the policy. They optimize the policy using guided policy search [12], which performs explicit trajectory optimization along reference trajectories and, unlike our approach, requires a well-defined full latent state and access to this latent state during training.
# 6 Conclusion
We have demonstrated that two related model-free approaches can be extended to learn effectively with recurrent neural networks on a variety of partially observed problems, including directly from pixel observations. Since these algorithms learn using standard backpropagation through time, we are able to benefit from innovations in supervised recurrent neural networks, such as long short-term memory networks [7], to solve challenging memory problems such as the Morris water maze.
# References
[1] B. Bakker. Reinforcement learning with long short-term memory. In NIPS, 2002.
[2] D. Balduzzi and M. Ghifary. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015.
[3] R. D'Hooge and P. P. De Deyn. Applications of the Morris water maze in the study of learning and memory. Brain Research Reviews, 36(1):60–90, 2001.
[4] R. Hafner and M. Riedmiller. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137–169, 2011.
[5] M. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527, 2015.
[6] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
[7] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[8] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1):99–134, 1998.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. CoRR, abs/1312.6114, 2013.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[12] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[13] F. L. Lewis and D. Vrabie. Reinforcement learning and adaptive dynamic programming for feedback control. Circuits and Systems Magazine, IEEE, 9(3):32–50, 2009.
[14] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[15] L.-J. Lin and T. M. Mitchell. Reinforcement learning with hidden states. In J.-A. Meyer, H. L. Roitblat, and S. W. Wilson, editors, From Animals to Animats 2, pages 271–280. MIT Press, Cambridge, MA, USA, 1993.
1512.04455 | 32 | [16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[17] L. Peshkin, N. Meuleau, and L. P. Kaelbling. Learning policies with external memory. In ICML, 1999.
[18] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 512–519. IEEE, 2014. | 1512.04455#32 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
learning. We extend two related, model-free algorithms for continuous control
-- deterministic policy gradient and stochastic value gradient -- to solve
partially observed domains using recurrent neural networks trained with
backpropagation through time.
We demonstrate that this approach, coupled with long-short term memory is
able to solve a variety of physical control problems exhibiting an assortment
of memory requirements. These include the short-term integration of information
from noisy sensors and the identification of system parameters, as well as
long-term memory problems that require preserving information over many time
steps. We also demonstrate success on a combined exploration and memory problem
in the form of a simplified version of the well-known Morris water maze task.
Finally, we show that our approach can deal with high-dimensional observations
by learning directly from pixels.
We find that recurrent deterministic and stochastic policies are able to
learn similarly good solutions to these tasks, including the water maze where
the agent must learn effective search strategies. | http://arxiv.org/pdf/1512.04455 | Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver | cs.LG | NIPS Deep Reinforcement Learning Workshop 2015 | null | cs.LG | 20151214 | 20151214 | [
{
"id": "1509.03005"
},
{
"id": "1507.06527"
},
{
"id": "1504.00702"
},
{
"id": "1509.02971"
}
] |
1512.04455 | 33 | [19] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1278–1286, 2014.
[20] B. Sallans. Reinforcement learning for factored Markov decision processes. PhD thesis, Citeseer, 2002.
[21] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[22] J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. CoRR, abs/1506.02438, 2015.
[23] G. Shani, J. Pineau, and R. Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27(1):1–51, 2013. | 1512.04455#33 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
learning. We extend two related, model-free algorithms for continuous control
-- deterministic policy gradient and stochastic value gradient -- to solve
partially observed domains using recurrent neural networks trained with
backpropagation through time.
We demonstrate that this approach, coupled with long-short term memory is
able to solve a variety of physical control problems exhibiting an assortment
of memory requirements. These include the short-term integration of information
from noisy sensors and the identification of system parameters, as well as
long-term memory problems that require preserving information over many time
steps. We also demonstrate success on a combined exploration and memory problem
in the form of a simplified version of the well-known Morris water maze task.
Finally, we show that our approach can deal with high-dimensional observations
by learning directly from pixels.
We find that recurrent deterministic and stochastic policies are able to
learn similarly good solutions to these tasks, including the water maze where
the agent must learn effective search strategies. | http://arxiv.org/pdf/1512.04455 | Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver | cs.LG | NIPS Deep Reinforcement Learning Workshop 2015 | null | cs.LG | 20151214 | 20151214 | [
{
"id": "1509.03005"
},
{
"id": "1507.06527"
},
{
"id": "1504.00702"
},
{
"id": "1509.02971"
}
] |
1512.04455 | 34 | [24] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[25] S. P. Singh. Learning without state-estimation in partially observable Markovian decision processes. In ICML, 1994.
[26] P. Thomas. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pages 441–448, 2014.
[27] S. Thrun, W. Burgard, and D. Fox. Probabilistic robotics. MIT Press, 2005.
[28] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012. | 1512.04455#34 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
learning. We extend two related, model-free algorithms for continuous control
-- deterministic policy gradient and stochastic value gradient -- to solve
partially observed domains using recurrent neural networks trained with
backpropagation through time.
We demonstrate that this approach, coupled with long-short term memory is
able to solve a variety of physical control problems exhibiting an assortment
of memory requirements. These include the short-term integration of information
from noisy sensors and the identification of system parameters, as well as
long-term memory problems that require preserving information over many time
steps. We also demonstrate success on a combined exploration and memory problem
in the form of a simplified version of the well-known Morris water maze task.
Finally, we show that our approach can deal with high-dimensional observations
by learning directly from pixels.
We find that recurrent deterministic and stochastic policies are able to
learn similarly good solutions to these tasks, including the water maze where
the agent must learn effective search strategies. | http://arxiv.org/pdf/1512.04455 | Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver | cs.LG | NIPS Deep Reinforcement Learning Workshop 2015 | null | cs.LG | 20151214 | 20151214 | [
{
"id": "1509.03005"
},
{
"id": "1507.06527"
},
{
"id": "1504.00702"
},
{
"id": "1509.02971"
}
] |
1512.04455 | 35 | [29] H. Utsunomiya and K. Shibata. Contextual behaviors and internal representations acquired by reinforcement learning with a recurrent neural network in a continuous state and action space task. In M. Köppen, N. Kasabov, and G. Coghill, editors, Advances in Neuro-Information Processing, volume 5507 of Lecture Notes in Computer Science, pages 970–978. Springer Berlin Heidelberg, 2009.
[30] D. Wierstra, A. Förster, J. Peters, and J. Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, 2007.
[31] D. Wierstra and J. Schmidhuber. Policy gradient critics. In ECML, 2007.
[32] M. Zhang, S. Levine, Z. McCarthy, C. Finn, and P. Abbeel. Policy learning with continuous
memory states for partially observed robotic control. CoRR, abs/1507.01273, 2015.
# 7 Supplementary
Algorithm 2 RSVG(0) algorithm | 1512.04455#35 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
learning. We extend two related, model-free algorithms for continuous control
-- deterministic policy gradient and stochastic value gradient -- to solve
partially observed domains using recurrent neural networks trained with
backpropagation through time.
We demonstrate that this approach, coupled with long-short term memory is
able to solve a variety of physical control problems exhibiting an assortment
of memory requirements. These include the short-term integration of information
from noisy sensors and the identification of system parameters, as well as
long-term memory problems that require preserving information over many time
steps. We also demonstrate success on a combined exploration and memory problem
in the form of a simplified version of the well-known Morris water maze task.
Finally, we show that our approach can deal with high-dimensional observations
by learning directly from pixels.
We find that recurrent deterministic and stochastic policies are able to
learn similarly good solutions to these tasks, including the water maze where
the agent must learn effective search strategies. | http://arxiv.org/pdf/1512.04455 | Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver | cs.LG | NIPS Deep Reinforcement Learning Workshop 2015 | null | cs.LG | 20151214 | 20151214 | [
{
"id": "1509.03005"
},
{
"id": "1507.06527"
},
{
"id": "1504.00702"
},
{
"id": "1509.02971"
}
] |
1512.04455 | 36 | memory states for partially observed robotic control. CoRR, abs/1507.01273, 2015.
# 7 Supplementary
Algorithm 2 RSVG(0) algorithm
Initialize critic network Q^w(a_t, h_t) and actor π^θ(h_t) with parameters w and θ.
Initialize target networks Q^{w'} and π^{θ'} with weights w' ← w, θ' ← θ. Initialize replay buffer R.
for episodes = 1, M do
initialize empty history h_0
for t = 1, T do
receive observation o_t
h_t ← (h_{t-1}, a_{t-1}, o_t) (append observation and previous action to history)
select action a_t = π^θ(h_t, ν) with ν ~ β
end for
Store the sequence (o_1, a_1, r_1, ..., o_T, a_T, r_T) in R
Sample a minibatch of N episodes (o^i_1, a^i_1, r^i_1, ..., o^i_T, a^i_T, r^i_T)_{i=1,...,N} from R
Construct histories h^i_t = (o^i_1, a^i_1, ..., a^i_{t-1}, o^i_t)
Compute target values for each sample episode (y^i_1, ..., y^i_T) using the recurrent target networks:
y^i_t = r^i_t + γ Q^{w'}(h^i_{t+1}, π^{θ'}(h^i_{t+1}, ν)) with ν ~ β
Compute critic update (using BPTT) | 1512.04455#36 | Memory-based control with recurrent neural networks | Partially observed control problems are a challenging aspect of reinforcement
learning. We extend two related, model-free algorithms for continuous control
-- deterministic policy gradient and stochastic value gradient -- to solve
partially observed domains using recurrent neural networks trained with
backpropagation through time.
We demonstrate that this approach, coupled with long-short term memory is
able to solve a variety of physical control problems exhibiting an assortment
of memory requirements. These include the short-term integration of information
from noisy sensors and the identification of system parameters, as well as
long-term memory problems that require preserving information over many time
steps. We also demonstrate success on a combined exploration and memory problem
in the form of a simplified version of the well-known Morris water maze task.
Finally, we show that our approach can deal with high-dimensional observations
by learning directly from pixels.
We find that recurrent deterministic and stochastic policies are able to
learn similarly good solutions to these tasks, including the water maze where
the agent must learn effective search strategies. | http://arxiv.org/pdf/1512.04455 | Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, David Silver | cs.LG | NIPS Deep Reinforcement Learning Workshop 2015 | null | cs.LG | 20151214 | 20151214 | [
{
"id": "1509.03005"
},
{
"id": "1507.06527"
},
{
"id": "1504.00702"
},
{
"id": "1509.02971"
}
] |
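The RSVG(0) pseudocode in the chunk above translates almost line-for-line into an autodiff framework. Below is a minimal PyTorch sketch of just the recurrent target computation y_t = r_t + γ·Q'(h_{t+1}, π'(h_{t+1}, ν)); the class names (`RecurrentActor`, `RecurrentCritic`), toy dimensions, and random data are all illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """pi(h_t): an LSTM summarizes the action-observation history."""
    def __init__(self, obs_dim, act_dim, noise_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden + noise_dim, act_dim)

    def forward(self, obs, prev_act, noise):
        h, _ = self.lstm(torch.cat([obs, prev_act], dim=-1))  # h_t for every step
        return torch.tanh(self.head(torch.cat([h, noise], dim=-1)))

class RecurrentCritic(nn.Module):
    """Q(a_t, h_t): the same history summary, conditioned on the action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden + act_dim, 1)

    def forward(self, obs, prev_act, act):
        h, _ = self.lstm(torch.cat([obs, prev_act], dim=-1))
        return self.head(torch.cat([h, act], dim=-1)).squeeze(-1)

# A minibatch of N episodes of length T, as if sampled from the replay buffer R.
N, T, obs_dim, act_dim, noise_dim, gamma = 8, 20, 10, 3, 3, 0.99
obs      = torch.randn(N, T, obs_dim)   # o_1 .. o_T
prev_act = torch.randn(N, T, act_dim)   # a_0 .. a_{T-1}
rew      = torch.randn(N, T)            # r_1 .. r_T

target_actor  = RecurrentActor(obs_dim, act_dim, noise_dim)
target_critic = RecurrentCritic(obs_dim, act_dim)
with torch.no_grad():
    nu = torch.randn(N, T, noise_dim)            # nu ~ beta
    a_next = target_actor(obs, prev_act, nu)     # pi'(h_t, nu) at every step
    q = target_critic(obs, prev_act, a_next)     # Q'(h_t, pi'(h_t, nu))
    # Bootstrap from the value one step ahead; zero after the final step.
    y = rew + gamma * torch.cat([q[:, 1:], torch.zeros(N, 1)], dim=1)
```

The critic loss would then be the squared error between Q^w over the same histories and these targets, backpropagated through time as the algorithm states.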
1512.03385 | 0 | # Deep Residual Learning for Image Recognition
# Kaiming He
# Xiangyu Zhang
# Shaoqing Ren
# Jian Sun
# Microsoft Research
# {kahe, v-xiangz, v-shren, jiansun}@microsoft.com
# Abstract
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. | 1512.03385#0 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 1 | The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
# 1. Introduction
Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit "very deep" [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also | 1512.03385#1 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 2 | and http://mscoco.org/dataset/#detections-challenge2015.
[Figure 1 plot residue: training and test error curves for the 20-layer and 56-layer networks; x-axis: iter. (1e4).]
Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer "plain" networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet are presented in Fig. 4.
greatly benefited from very deep models.
Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22]. | 1512.03385#2 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 3 | When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that
[Figure 2 diagram: input x passes through a weight layer, ReLU, and a second weight layer to produce F(x); an identity shortcut carries x around the layers and adds it to F(x).]
Figure 2. Residual learning: a building block.
are comparably good or better than the constructed solution (or unable to do so in feasible time). | 1512.03385#3 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 4 | Figure 2. Residual learning: a building block.
are comparably good or better than the constructed solution (or unable to do so in feasible time).
In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x) − x. The original mapping is recast into F(x) + x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. | 1512.03385#4 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 5 | The formulation of F(x) + x can be realized by feedforward neural networks with "shortcut connections" (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart "plain" nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. | 1512.03385#5 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 6 | Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the
ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
# 2. Related Work | 1512.03385#6 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 7 | # 2. Related Work
Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization. | 1512.03385#7 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 8 | Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an "inception" layer is composed of a shortcut branch and a few deeper branches. | 1512.03385#8 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 9 | Concurrent with our work, "highway networks" [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is "closed" (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
# 3. Deep Residual Learning
# 3.1. Residual Learning | 1512.03385#9 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 10 | # 3. Deep Residual Learning
# 3.1. Residual Learning
Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions², then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x) + x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different. | 1512.03385#10 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 11 | This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
# 3.2. Identity Mapping by Shortcuts
We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:
y = F(x, {W_i}) + x. (1) | 1512.03385#11 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 12 | y = F(x, {W_i}) + x. (1)
Here x and y are the input and output vectors of the layers considered. The function F(x, {W_i}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W2·σ(W1·x), in which σ denotes
²This hypothesis, however, is still an open question. See [28].
ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).
The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:
y = F(x, {W_i}) + Ws·x. (2) | 1512.03385#12 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
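As a concrete reading of Eqn.(1) from the chunk above, here is a hypothetical PyTorch sketch of the two-layer building block y = σ(F(x) + x) with F = W2·σ(W1·x); the class name and sizes are illustrative assumptions, and biases are dropped to mirror the paper's simplified notation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Eqn.(1): y = F(x, {W_i}) + x, with F = W2·σ(W1·x)."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        f = self.w2(torch.relu(self.w1(x)))  # residual mapping F(x)
        return torch.relu(f + x)             # identity shortcut, then second nonlinearity

block = ResidualBlock(64)
y = block(torch.randn(32, 64))  # the shortcut adds no parameters and no extra compute
```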
1512.03385 | 13 | y = F(x, {W_i}) + Ws·x. (2)
We can also use a square matrix Ws in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.
The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W1·x + x, for which we have not observed advantages. We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {W_i}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
# 3.3. Network Architectures
We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows. | 1512.03385#13 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
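The projection shortcut of Eqn.(2) only needs parameters when dimensions change. A hypothetical convolutional sketch (names and sizes assumed, not the paper's released model) in which Ws is a 1×1 convolution used solely for matching dimensions:

```python
import torch
import torch.nn as nn

class ConvResidualBlock(nn.Module):
    """Eqn.(2): y = F(x, {W_i}) + Ws·x; Ws is used only when the channel
    count or the spatial size changes, otherwise the shortcut is identity."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False),
            nn.BatchNorm2d(c_out))
        if stride == 1 and c_in == c_out:
            self.shortcut = nn.Identity()  # parameter-free identity shortcut
        else:
            self.shortcut = nn.Conv2d(c_in, c_out, 1, stride, bias=False)  # projection Ws

    def forward(self, x):
        # element-wise addition on two feature maps, channel by channel
        return torch.relu(self.body(x) + self.shortcut(x))

y = ConvResidualBlock(64, 128, stride=2)(torch.randn(1, 64, 56, 56))  # -> (1, 128, 28, 28)
```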
1512.03385 | 14 | We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).
# VGG-19
# 34-layer plain | 1512.03385#14 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
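The two design rules quoted in the Plain Network paragraph above fully determine the per-stage widths. A small illustrative helper (hypothetical, for bookkeeping only) makes the doubling rule concrete:

```python
def stage_plan(first_filters=64, stages=4):
    """Design rules of Sec. 3.3: same width within a stage; when the
    feature map is halved by a stride-2 conv, the width is doubled."""
    widths, w = [], first_filters
    for _ in range(stages):
        widths.append(w)
        w *= 2
    return widths

print(stage_plan())  # [64, 128, 256, 512], as in the 34-layer nets of Fig. 3
```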
1512.03385 | 16 | [Figure 3 residue: column-by-column layer listing of the VGG-19 and 34-layer plain networks (7x7 and 3x3 conv layers, pooling, and output sizes from 224 down to 7); the recoverable caption appears in a later chunk.] | 1512.03385#16 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 18 | # image
â7x7 conv, 64, /2 v pool, /2
# [3 conv 6a ¥v [acon 6a
# [scom6a ¥ [secon ea
# 3rd conv, 64
# Bid conv, 64
c= [36 conv, 128,72
# (arenas ee Bad conv, 128 ¥v
# Bd con, 128
# Bd conv, 128 ¥
# Bd eony, 28
# Bid conv, 128
# Bid conv, 128
# EEE
3x3 conv, 256, /2
# 3a conv, 256
# [amenase v [scones
# [pean ¥v [eens
# Bad conv, 256
# 3rd conv, 256
# Bed conv, 256
# [eens
# Bad conv, 256
3rd conv, 256 3xd conv, 512, 72 ¥v
# Wes =
# Bd conv, 51D
# Bd conv, 512 v
# Bd conv, 512
# 3rd conv, 512
# 3rd conv, S12
# avg pool
64096
1000
%1000
7 1000 | 1512.03385#18 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | [
{
"id": "1505.00387"
},
{
"id": "1504.06066"
}
] |
1512.03385 | 19 | # Bd conv, 512 v
[Figure 3 residue: tail of the layer listing (3x3 conv 512 stage, average pooling, and fully-connected output layers).]
Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.
Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
# 3.4. Implementation
1512.03385 | 20 |
Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224x224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60x10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].
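Collected as a concrete configuration, the recipe above might look as follows in PyTorch; this is a hedged, illustrative sketch (the paper's original implementation predates PyTorch, and the fixed `milestones` merely stand in for the paper's divide-by-10-on-plateau rule):

```python
import torch
import torchvision

# Stand-in model; the paper trains its own plain/residual nets from scratch.
model = torchvision.models.resnet34()

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # initial learning rate, divided by 10 on plateaus
    momentum=0.9,
    weight_decay=1e-4,  # weight decay of 0.0001
)
# Trained for up to 60x10^4 iterations with mini-batch size 256; fixed
# iteration milestones are only a proxy for the plateau-based schedule.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000, 400_000], gamma=0.1
)
```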
In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
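A hedged sketch of this multi-scale score averaging (illustrative only, not the authors' code; a torchvision-style network with adaptive average pooling stands in for the fully-convolutional form, and `multi_scale_scores` is a name invented here):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_scale_scores(model, images, scales=(224, 256, 384, 480, 640)):
    """Average class scores over copies of a batch resized so that the
    shorter side equals each scale."""
    model.eval()
    per_scale = []
    for s in scales:
        short = min(images.shape[-2:])
        resized = F.interpolate(images, scale_factor=s / short,
                                mode="bilinear", align_corners=False)
        per_scale.append(F.softmax(model(resized), dim=1))
    return torch.stack(per_scale).mean(dim=0)
```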
# 4. Experiments
# 4.1. ImageNet Classification
1512.03385 | 21 |
We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the
1512.03385 | 22 |
| layer name | output size | 18-layer | 34-layer | 50-layer | 101-layer | 152-layer |
| conv1 | 112x112 | 7x7, 64, stride 2 (all variants) |
| conv2_x | 56x56 | 3x3 max pool, stride 2 (all variants); then [3x3, 64; 3x3, 64] x2 | [3x3, 64; 3x3, 64] x3 | [1x1, 64; 3x3, 64; 1x1, 256] x3 | [1x1, 64; 3x3, 64; 1x1, 256] x3 | [1x1, 64; 3x3, 64; 1x1, 256] x3 |
| conv3_x | 28x28 | [3x3, 128; 3x3, 128] x2 | [3x3, 128; 3x3, 128] x4 | [1x1, 128; 3x3, 128; 1x1, 512] x4 | [1x1, 128; 3x3, 128; 1x1, 512] x4 | [1x1, 128; 3x3, 128; 1x1, 512] x8 |
| conv4_x | 14x14 | [3x3, 256; 3x3, 256] x2 | [3x3, 256; 3x3, 256] x6 | [1x1, 256; 3x3, 256; 1x1, 1024] x6 | [1x1, 256; 3x3, 256; 1x1, 1024] x23 | [1x1, 256; 3x3, 256; 1x1, 1024] x36 |
| conv5_x | 7x7 | [3x3, 512; 3x3, 512] x2 | [3x3, 512; 3x3, 512] x3 | [1x1, 512; 3x3, 512; 1x1, 2048] x3 | [1x1, 512; 3x3, 512; 1x1, 2048] x3 | [1x1, 512; 3x3, 512; 1x1, 2048] x3 |
| (table truncated in the source after conv5_x; the remaining rows cover average pooling, the 1000-d fc with softmax, and per-variant FLOPs) |
1512.03385 | 24 | Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.
[Figure 4 training curves, rasterized in the source: left panel, plain-18 and plain-34; right panel, ResNet-18 and ResNet-34; x-axis iter. (1e4), y-axis error (%).]
Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.
| (top-1 err.) | 18 layers | 34 layers |
| plain | 27.94 | 28.54 |
| ResNet | 27.88 | 25.03 |
Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.
34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.
1512.03385 | 25 |
We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures that forward propagated signals have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the
reduction of the training error3. The reason for such optimization difficulties will be studied in the future.
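The gradient-health check described above can be reproduced with a small probe; a hypothetical sketch (`layerwise_grad_norms` is a name invented here, not the authors' tooling):

```python
def layerwise_grad_norms(model):
    """Collect per-parameter gradient norms after loss.backward()."""
    return {name: p.grad.norm().item()
            for name, p in model.named_parameters()
            if p.grad is not None}

# Norms that stay well away from zero across depth would support the claim
# that neither forward nor backward signals vanish under batch normalization.
```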
1512.03385 | 26 |
Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3x3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.
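A minimal sketch of this setting, assuming PyTorch (illustrative, not the authors' code): a 2-layer basic block whose shortcut is always parameter-free, using a zero-padded identity (option A) when the dimensions increase:

```python
import torch.nn as nn
import torch.nn.functional as F

class BasicBlockA(nn.Module):
    """Two 3x3 convs with a parameter-free (option A) shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.stride = stride
        self.extra_ch = out_ch - in_ch  # channels to zero-pad on the shortcut

    def forward(self, x):
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        shortcut = x[:, :, ::self.stride, ::self.stride]  # subsample if stride 2
        if self.extra_ch > 0:
            # Pad zeros onto the channel dimension (NCHW): no new parameters.
            shortcut = F.pad(shortcut, (0, 0, 0, 0, 0, self.extra_ch))
        return F.relu(out + shortcut)
```

Because the shortcut carries no weights, such a ResNet has exactly the same parameter count as its plain counterpart, which is the premise of the comparison in Table 2.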
We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
Second, compared to its plain counterpart, the 34-layer
3We have experimented with more training iterations (3x) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.
1512.03385 | 27 |
| model | top-1 err. | top-5 err. |
| VGG-16 [41] | 28.07 | 9.33 |
| GoogLeNet [44] | - | 9.15 |
| PReLU-net [13] | 24.27 | 7.38 |
| plain-34 | 28.54 | 10.02 |
| ResNet-34 A | 25.03 | 7.76 |
| ResNet-34 B | 24.52 | 7.46 |
| ResNet-34 C | 24.19 | 7.40 |
| ResNet-50 | 22.85 | 6.71 |
| ResNet-101 | 21.75 | 6.05 |
| ResNet-152 | 21.43 | 5.71 |
Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.
1512.03385 | 28 |
| method | top-1 err. | top-5 err. |
| VGG [41] (ILSVRC'14) | - | 8.43† |
| GoogLeNet [44] (ILSVRC'14) | - | 7.89 |
| VGG [41] (v5) | 24.4 | 7.1 |
| PReLU-net [13] | 21.59 | 5.71 |
| BN-inception [16] | 21.99 | 5.81 |
| ResNet-34 B | 21.84 | 5.71 |
| ResNet-34 C | 21.53 | 5.60 |
| ResNet-50 | 20.74 | 5.25 |
| ResNet-101 | 19.87 | 4.60 |
| ResNet-152 | 19.38 | 4.49 |
Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).
| method | top-5 err. (test) |
| VGG [41] (ILSVRC'14) | 7.32 |
| GoogLeNet [44] (ILSVRC'14) | 6.66 |
| VGG [41] (v5) | 6.8 |
| PReLU-net [13] | 4.94 |
| BN-inception [16] | 4.82 |
| ResNet (ILSVRC'15) | 3.57 |
Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.
1512.03385 | 29 |
ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is "not overly deep" (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.
# Identity vs. Projection Shortcuts. We have shown that
[Figure 5 (right) block diagram, rasterized in the source: a 256-d input passes through 1x1, 64 -> ReLU -> 3x3, 64 -> ReLU -> 1x1, 256 before the addition; see the caption below.]
Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56x56 feature maps) as in Fig. 3 for ResNet-34. Right: a "bottleneck" building block for ResNet-50/101/152.
1512.03385 | 30 | parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
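The distinction between the options reduces to how the shortcut branch is built; a sketch assuming PyTorch (option A is the zero-padding variant shown earlier, and option C would return the projection unconditionally):

```python
import torch.nn as nn

def shortcut_option_b(in_ch, out_ch, stride):
    """Option B: identity where shapes match, 1x1 projection (Eqn.(2)) otherwise."""
    if stride == 1 and in_ch == out_ch:
        return nn.Identity()  # parameter-free identity shortcut
    return nn.Sequential(     # projection shortcut to match dimensions
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(out_ch),
    )
```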
1512.03385 | 31 | Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design4. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1x1, 3x3, and 1x1 convolutions, where the 1x1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3x3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
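A minimal sketch of the bottleneck block of Fig. 5 (right), assuming PyTorch (illustrative, not the authors' code):

```python
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """1x1 reduce -> 3x3 at reduced width -> 1x1 restore, identity shortcut."""
    expansion = 4

    def __init__(self, channels):  # e.g. channels=64 -> 256-d input/output
        super().__init__()
        width = channels
        dim = channels * self.expansion
        self.conv1 = nn.Conv2d(dim, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, dim, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(dim)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + x)  # parameter-free identity shortcut
```

Here `Bottleneck(64)` consumes and produces 256-d features through a 64-d bottleneck; replacing the identity with a projection at these 256-d ends is what would roughly double time complexity and model size, as noted above.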
50-layer ResNet: We replace each 2-layer block in the
4Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.
1512.03385 | 32 | 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.
101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).
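The constructions above differ only in block type and per-stage block counts (read off Table 1); a small lookup makes the depth arithmetic explicit (a sketch; `RESNET_STAGES` is a name invented here):

```python
# Blocks per stage (conv2_x .. conv5_x) from Table 1.
# Depth = 1 (conv1) + 1 (fc) + layers_per_block * sum(blocks),
# e.g. 50 = 2 + 3 * (3 + 4 + 6 + 3).
RESNET_STAGES = {
    18:  ("basic", 2, (2, 2, 2, 2)),
    34:  ("basic", 2, (3, 4, 6, 3)),
    50:  ("bottleneck", 3, (3, 4, 6, 3)),
    101: ("bottleneck", 3, (3, 4, 23, 3)),
    152: ("bottleneck", 3, (3, 8, 36, 3)),
}

def depth(block_kind: str, layers_per_block: int, blocks: tuple) -> int:
    return 2 + layers_per_block * sum(blocks)

assert all(depth(*cfg) == d for d, cfg in RESNET_STAGES.items())
```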
The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).
1512.03385 | 33 | Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.
# 4.2. CIFAR-10 and Analysis
We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.