doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1704.04368 | 55 | van gaal was a man with a plan . the first season , he stated , would see him deliver manchester united back into their rightful place in the champions league . he would restore them to the premier league top four but loftier aims of silverware would have to wait . his three-year vision would allow for such thoughts but , first things first , united needed to be dining from european football 's top table again . louis van gaal is close to delivering his first-season aim of returning man united into champions league . wayne rooney smashes home during manchester united 's 3-1 win over aston villa on saturday . united 's win over aston villa took them third , eight points ahead of fifth-placed liverpool in the table . april 12 manchester city ( h ) . april 18 chelsea ( a ) . april 26 everton ( a ) . may 2 west bromwich albion ( h ) . may 9 crystal palace ( a ) . may 17 arsenal ( h ) . may 24 hull city ( a ) . one season out of the champions league was far from ideal , but two seasons would be an absolute disaster and something , he understood , that would | 1704.04368#55 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
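The abstract repeated in the row above describes a hybrid pointer-generator network that mixes a generator distribution over a fixed vocabulary with a copy distribution given by attention over the source text. Below is a minimal NumPy sketch of that mixing step only; the function name, array shapes, and toy numbers are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def final_distribution(p_gen, p_vocab, attention, src_ids, extended_vocab_size):
    """Mix generator and copy distributions (pointer-generator style).

    p_gen:      scalar in [0, 1], probability of generating from the vocabulary
    p_vocab:    (vocab_size,) softmax over the fixed vocabulary
    attention:  (src_len,) attention weights over source tokens (sums to 1)
    src_ids:    (src_len,) ids of source tokens in an extended vocabulary that
                also covers out-of-vocabulary source words
    """
    p_final = np.zeros(extended_vocab_size)
    p_final[: p_vocab.shape[0]] = p_gen * p_vocab            # generate path
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attention)   # copy path (scatter-add)
    return p_final

# Toy example: 5-word vocabulary, 3 source tokens, one of them OOV (id 5).
p_vocab = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
attention = np.array([0.6, 0.3, 0.1])
src_ids = np.array([1, 5, 2])
dist = final_distribution(0.7, p_vocab, attention, src_ids, extended_vocab_size=6)
assert abs(dist.sum() - 1.0) < 1e-9  # still a valid probability distribution
```

Because the copy path places mass directly on source-token ids, an out-of-vocabulary source word (rendered as UNK in the baseline outputs below) can still receive non-zero probability.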
1704.04368 | 56 | hull city ( a ) . one season out of the champions league was far from ideal , but two seasons would be an absolute disaster and something , he understood , that would not be tolerated . in november , even that was looking optimistic . a 1-0 defeat to manchester city meant that united had taken just 13 points from their opening 10 matches - it was their worst start to a league campaign since 1986 , when ron atkinson only lasted until november . | 1704.04368#56 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 57 | Reference Summary: man united have an eight-point cushion from fifth-place liverpool . van gaal looks likely to deliver on his promise of top four finish . but the dutchman has a three-year vision mapped out . next season will have to see united mount sustained challenge for title . they must also reach the later stages of the champions league .
Baseline: manchester united beat aston villa 3-1 at old trafford on saturday . louis van gaal is close to delivering his UNK aim of returning man united into the premier league top four . louis van gaal is close to delivering his UNK aim of returning man united into champions league .
Pointer-Generator, No Coverage: louis van gaal is close to delivering his first-season aim of returning man united into champions league. united 's win over aston villa took them third , eight points ahead of fifth-placed liverpool in the table . louis van gaal is close to delivering his first-season aim of returning man united into champions league. | 1704.04368#57 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
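The same abstract credits a coverage mechanism with discouraging repetition, which is why the "No Coverage" outputs in these examples repeat whole sentences while the "With Coverage" outputs do not. A hedged sketch of the coverage accumulator and penalty follows, assuming per-step attention vectors are available as a list; the loss weighting used during training is not shown.

```python
import numpy as np

def coverage_penalty(attention_steps):
    """Sum of min(attention, coverage) over decoder steps.

    attention_steps: list of (src_len,) attention vectors, one per decoder step.
    A large value means the decoder keeps attending to source positions it has
    already covered, i.e. it is probably repeating itself.
    """
    coverage = np.zeros_like(attention_steps[0])  # running sum of past attention
    total = 0.0
    for attn in attention_steps:
        total += np.minimum(attn, coverage).sum()  # penalize re-attended positions
        coverage += attn
    return total

# Re-attending the same position is penalized; spreading attention is not.
repetitive = [np.array([1.0, 0.0, 0.0])] * 3
spread = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0])]
print(coverage_penalty(repetitive))  # 2.0
print(coverage_penalty(spread))      # 0.0
```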
1704.04368 | 58 | Pointer-Generator, With Coverage: manchester united beat aston villa 3-1 at old trafford on saturday . louis van gaal is close to delivering his first-season aim of returning man united into champions league . united needed to be dining from european football 's top table again .
Figure 10: In this example, both our baseline model and final model produce a completely abstractive first sentence, using a novel word beat.
Article (truncated): having been on the receiving end of a 6-1 thumping , a defeat like that | 1704.04368#58 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 59 | could be justifiably met with a backlash by angry supporters . watching a 3-1 first leg aggregate advantage turn into a 7-4 deficit come the end of the reverse encounter too could send many fans apoplectic at the capitulation of their side . however that does n't appear the case for those devoted to porto . porto supporters gave their team a hero 's welcome following their 6-1 defeat at bayern munich on tuesday . porto star striker jackson martinez was one of many players to look perplexed by their warm reception . porto boss julen lopetegui ( left ) was hugged by fans congratulating him on their champions league run . police escorts were needed to keep the delirious supporters at bay as the porto team bus drove past . the team bus was met with a cacophony of noise from porto supporters proudly chanting about their club . on their return from a humiliating champions league quarter-final loss at the hands of bayern munich on tuesday night , the squad were given a heroes reception as they arrived back in portugal . in the early hours of wednesday morning , fans mobbed the | 1704.04368#59 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 60 | munich on tuesday night , the squad were given a heroes reception as they arrived back in portugal . in the early hours of wednesday morning , fans mobbed the squad congratulating them on their run in the tournament . star striker jackson martinez and ricardo quaresma were one of many porto players who looked perplexed as they were hugged by fans before they making their way on to the team bus - set upon a cacophony of fiercely-proud chanting . it was the first time that porto , who had been unbeaten in this season 's tournament up until tuesday night , had reached the quarter-finals of the champions league since the 2008-09 season . | 1704.04368#60 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 61 | Reference Summary: bayern munich beat porto 6-1 in their champions league tie on tuesday . result saw bayern win quarter-final encounter 7-4 on aggregate . it was the first-time porto had reached that stage since the 2008-09 season .
Baseline: porto beat bayern munich 2-0 in the champions league on tuesday night . porto star james UNK was one of many players involved in the match . the squad were given a trophy as they arrived back in portugal .
Pointer-Generator, No Coverage: porto star striker jackson martinez was one of many players to look perplexed by their warm reception. porto star striker jackson martinez was one of many players to look perplexed by their warm reception. porto star striker jackson martinez was one of many players to look perplexed by their warm reception.
Pointer-Generator, With Coverage: porto star striker jackson martinez was one of many players to look perplexed by their warm
reception .
porto boss julen lopetegui was hugged by fans congratulating him on their champions league run .
however that does n't appear the case for those devoted to porto .
police escorts were needed to keep the delirious supporters at bay . | 1704.04368#61 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 62 | however that does n't appear the case for those devoted to porto .
police escorts were needed to keep the delirious supporters at bay .
Figure 11: The baseline model makes several factual inaccuracies: it claims porto beat bayern munich not vice versa, the score is changed from 7-4 to 2-0, jackson is changed to james and a heroes reception is replaced with a trophy. Our final model produces sentences that are individually accurate, but they do not make sense as a whole. Note that the final model omits the parenthesized phrase ( left ) from its second sentence.
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 63 | Article: ( cnn ) " it 's showtime ! " michael keaton paid homage -- ever so slightly -- to his roles in " beetlejuice " and " batman " in his third turn hosting " saturday night live " this weekend . keaton acknowledged in his opening monologue that a lot has changed since he first hosted the comedy sketch show in 1982 . " i had a baby -- he 's 31 . i also have a new girlfriend -- she 's 28 , " he said . fans who were hoping for a full-blown revival of keaton 's most memorable characters might have been a little disappointed . snl cast members taran killam and bobby moynihan begged the actor with a song to " play " batman and beetlejuice with them . all they got in response were a couple of one-liners . overall , keaton 's performance drew high marks from viewers and critics for its " deadpan " manner and " unpredictable weirdness , " in the words of a.v . club 's dennis perkins . fans also delighted in a cameo from " walking dead " star norman | 1704.04368#63 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 64 | " in the words of a.v . club 's dennis perkins . fans also delighted in a cameo from " walking dead " star norman reedus during weekend update . keaton scored some laughs from the audience as an ad executive who 's not very good at his job , a confused grandfather and a high school teacher who gets asked to the prom in a riff on the romantic comedy " she 's all that . " other crowd-pleasing spots included a scientology parody music video and a news conference spoofing the ncaa student-athlete debate . the show also poked fun at cnn with cast member cecily strong playing anchor brooke baldwin . baldwin said on twitter that she 's taking it as a " crazy compliment " and shared a clip from the skit . | 1704.04368#64 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 65 | Reference Summary: michael keaton hosted " saturday night live " for the first time in 1982 . in 2015 , his nods to starring roles in " beetlejuice " and " batman " are brief .
Baseline: " snl " wins " saturday night live " . keaton acknowledged in his opening " saturday night live " . keaton acknowledged in his opening night 's " UNK " . the show also pokes fun at cnn with cast member mariah carey .
Pointer-Generator, No Coverage: " beetlejuice " michael keaton paid homage to his roles in " beetlejuice " and " batman " in his third turn hosting " saturday night live " this weekend . keaton acknowledged in his opening monologue that a lot has changed since he first hosted the comedy sketch show in 1982 .
Pointer-Generator, With Coverage: michael keaton paid homage to his roles in " beetlejuice " and " batman " in his third turn
hosting " saturday night live " .
keaton acknowledged in his opening monologue that a lot has changed since he first hosted the comedy sketch show in 1982 . | 1704.04368#65 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 66 | hosting " saturday night live " .
keaton acknowledged in his opening monologue that a lot has changed since he first hosted the comedy sketch show in 1982 .
Figure 12: Baseline model replaces cecily strong with mariah carey, and produces generally nonsensical output. The baseline model may be struggling with the out-of-vocabulary word beetlejuice, or perhaps the unusual non-news format of the article. Note that the final model omits " ever so slightly " from its first sentence.
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 67 | Article (truncated): they are supposed to be the dream team who can solve the conundrum of how to win the election . but david cameron and boris johnson were left scratching their heads today as they struggled with a children 's jigsaw teaching toddlers about the seasons . as the london mayor tried to hammer ill-fitting pieces together with his hands , the prime minister tried out what could be a new election slogan , telling him : " if in doubt , wedge it in . " after being put right by a four-year-old who spotted their errors , the pair had more fun finger painting with tory blue paint . david cameron and boris johnson were left stumped by the puzzle at advantage children 's day nursery in surbiton , as three-year-old stephanie looked on . when they tried to put the puzzle back together , they hit trouble after it proved to be more difficult than expected . the conservative duo made their first appearance together on the campaign trail with a visit to advantage day nursery in surbiton , south west london . they were supposed to be highlighting tory plans to double free childcare | 1704.04368#67 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 68 | trail with a visit to advantage day nursery in surbiton , south west london . they were supposed to be highlighting tory plans to double free childcare for 600,000 children aged three and four . they sat at a table with joshua and stephanie , both aged three , and four-year-olds hamish and leo where they broke up a puzzle which depicted the four seasons . when they tried to put it back together , they hit trouble after it proved to be more difficult than expected . | 1704.04368#68 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 69 | Reference Summary: david cameron and boris johnson visit nursery to promote childcare . duo left flummoxed by jigsaw aimed at teaching children about seasons . london mayor resorted to brute force to hammer wrong pieces together . prime minister gave his approval , telling boris : " if in doubt , wedge it in " .
Baseline: david cameron and boris johnson were left mystified by the puzzle . they were at the nursery in UNK to promote a tory pledge to double free childcare for 25 children aged three and four . the pair were at the nursery in UNK to promote a tory pledge to double free childcare for children aged three and four .
Pointer-Generator, No Coverage: london mayor tried to hammer ill-fitting pieces together with his hands . they were supposed to be highlighting tory plans to double free childcare . they were supposed to be highlighting tory plans to double free childcare . they were supposed to be highlighting tory plans to double free childcare .
Pointer-Generator, With Coverage: london mayor tried to hammer ill-fitting pieces together with his hands .
prime minister tried out what could be a new election slogan .
but they hit trouble after it proved to be more difficult than expected . | 1704.04368#69 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 70 | prime minister tried out what could be a new election slogan .
but they hit trouble after it proved to be more difficult than expected .
Figure 13: The baseline model appropriately replaces stumped with novel word mystified. However, the reference summary chooses flummoxed (also novel) so the choice of mystified is not rewarded by the ROUGE metric. The baseline model also incorrectly changes 600,000 to 25. In the final model's output we observe that the generation probability is largest at the beginning of sentences (especially the first verb) and on periods.
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 71 | Article (truncated): muhammadu buhari lagos , nigeria ( cnn ) a day after winning nigeria 's presidency , that he plans to aggressively fight corruption that has long plagued nigeria and go after the root of the nation 's unrest . buhari told cnn 's christiane amanpour said he 'll " rapidly give attention " to curbing violence in the northeast part of nigeria , where the terrorist group boko haram operates . by cooperating with neighboring nations chad , cameroon and niger , he said his administration is confident it will be able to thwart criminals and others contributing to nigeria 's instability . for the first time in nigeria 's history , the opposition defeated the ruling party in democratic elections . buhari defeated incumbent goodluck jonathan by about 2 million votes , according to nigeria 's independent national electoral commission . the win comes after a long history of military rule , coups and botched attempts at democracy in africa 's most populous nation . | 1704.04368#71 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 72 | Reference Summary: muhammadu buhari tells cnn 's christiane amanpour that he will fight corruption in nigeria . nigeria is the most populous country in africa and is grappling with violent boko haram extremists . nigeria is also africa 's biggest economy , but up to 70 % of nigerians live on less than a dollar a day .
Baseline: UNK UNK says his administration is confident it will be able to destabilize nigeria 's economy . UNK says his administration is confident it will be able to thwart criminals and other nigerians . he says the country has long nigeria and nigeria 's economy .
Pointer-Generator, No Coverage: muhammadu buhari says he plans to aggressively fight corruption in the northeast part of nigeria . he says he 'll " rapidly give attention " to curbing violence in the northeast part of nigeria . he says his administration is confident it will be able to thwart criminals .
Pointer-Generator, With Coverage: muhammadu buhari says he plans to aggressively fight corruption that has long plagued nigeria
. | 1704.04368#72 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 73 | Pointer-Generator, With Coverage: muhammadu buhari says he plans to aggressively fight corruption that has long plagued nigeria
.
he says his administration is confident it will be able to thwart criminals . the win comes after a long history of military rule , coups and botched attempts at democracy in africa 's most populous nation .
Figure 14: The baseline model incorrectly changes thwart criminals and others contributing to nigeria's instability to destabilize nigeria's economy, which has a mostly opposite meaning. It also produces a nonsensical sentence. Note that our final model produces the novel word says to paraphrase told cnn 's christiane amanpour.
Article: cairo ( cnn ) at least 12 people were killed sunday , and more injured , in separate | 1704.04368#73 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 74 | attacks on a police station , a checkpoint and along a highway in egypt 's northern sinai authorities said . six people , including one civilian , were killed when a car bomb exploded near the police station in al-arish , capital of north sinai , health ministry spokesman hossam abdel-ghafar told ahram online . he said 40 people were injured . ansar beit al-maqdis , an isis affiliate , claimed responsibility for the attack , which came hours after another operation that the group also claimed . in that earlier attack , a first lieutenant , a sergeant and four conscripts were killed when their armored vehicle was attacked on the highway from al-arish to sheikh zuweid in northern sinai , the military said . two other soldiers were injured and taken to a military hospital . ansar beit al-maqdis has claimed many attacks against the army and police in sinai . a third attack sunday on a checkpoint in rafah left three security personnel injured , after unknown assailants opened fire at them , according to state media . the attacks come as the military announced a reshuffle of several senior military positions , state media reported . among those being replaced are | 1704.04368#74 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 75 | them , according to state media . the attacks come as the military announced a reshuffle of several senior military positions , state media reported . among those being replaced are the generals in charge of military intelligence and egypt 's second field army , which is spearheading the battle against the insurgents in the northern sinai . egypt 's army has been fighting a decade-long militant islamist insurgency , which has spiked since the ouster of muslim brotherhood president mohamed morsy in the summer of 2013 . hundreds of police and soldiers , as well as civilians , have been killed in militant attacks in the past months . ian lee reported from cairo . anas hamdan reported from atlanta . , | 1704.04368#75 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 76 | Reference Summary: six people , including one civilian , are killed when a car bomb explodes near a police station . six others are killed when their armored vehicle is attacked on a highway in northern sinai . ansar beit al-maqdis , an isis affiliate , claims responsibility .
Baseline: he says the attacks come after another operation that killed 12 . the attacks come as military announced a u.n. peacekeeping force in northern sinai .
Pointer-Generator, No Coverage: ansar beit al-maqdis , an isis affiliate , claimed responsibility for the attack . ansar beit al-maqdis , an isis affiliate , claimed responsibility for the attack . the attacks come as the military announced a reshuffle of several senior military positions .
Pointer-Generator, With Coverage: six people ,
including one civilian , were killed when a car bomb explodes near the police
station . ansar beit al-maqdis , an isis affiliate , claimed responsibility for the attack .
egypt 's army has been fighting a decade-long militant islamist insurgency . | 1704.04368#76 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.04368 | 77 | egypt 's army has been fighting a decade-long militant islamist insurgency .
Figure 15: The baseline model fabricates a completely false detail about a u.n. peacekeeping force that is not mentioned in the article. This is most likely inspired by a connection between U.N. peacekeeping forces and northern sinai in the training data. The pointer-generator model is more accurate, correctly reporting the reshuffle of several senior military positions.
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations
(what are now equations 1, 8 and 11 were missing a bias term), fix url to
pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425 | [
{
"id": "1701.00138"
},
{
"id": "1611.03382"
},
{
"id": "1608.02927"
}
] |
1704.03732 | 0 | arXiv:1704.03732v4 [cs.AI] 22 Nov 2017
# Deep Q-learning from Demonstrations
Todd Hester, Google DeepMind, [email protected]
Matej Vecerik, Google DeepMind, [email protected]
Olivier Pietquin, Google DeepMind, [email protected]
Marc Lanctot, Google DeepMind, [email protected]
Tom Schaul, Google DeepMind, [email protected]
Bilal Piot, Google DeepMind, [email protected]
Dan Horgan, Google DeepMind, [email protected]
John Quan, Google DeepMind, [email protected]
Andrew Sendonaris, Google DeepMind, [email protected]
Ian Osband, Google DeepMind, [email protected]
Gabriel Dulac-Arnold, Google DeepMind, [email protected]
John Agapiou, Google DeepMind, [email protected]
Joel Z. Leibo, Google DeepMind, [email protected]
Audrunas Gruslys, Google DeepMind, [email protected]
# Abstract | 1704.03732#0 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
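The DQfD abstract beginning in the row above says the method combines temporal-difference updates with supervised classification of the demonstrator's actions. The sketch below illustrates that combination for a single transition with a one-step TD error plus a large-margin classification term; the margin, the weight lambda_e, and the omission of the paper's n-step return and regularization terms are simplifying assumptions for illustration.

```python
import numpy as np

def dqfd_loss(q_values, q_target_next, action, reward, gamma,
              is_demo, expert_action=None, margin=0.8, lambda_e=1.0):
    """Simplified per-transition DQfD-style loss (1-step TD + supervised term).

    q_values:      (n_actions,) Q(s, .) from the online network
    q_target_next: (n_actions,) Q(s', .) from the target network
    is_demo:       True if the transition comes from the demonstration buffer
    """
    # One-step temporal-difference error (squared).
    td_target = reward + gamma * q_target_next.max()
    td_loss = (q_values[action] - td_target) ** 2

    # Large-margin supervised loss on demonstration transitions only:
    # push Q(s, expert_action) above every other action by at least `margin`.
    supervised_loss = 0.0
    if is_demo and expert_action is not None:
        margins = np.full_like(q_values, margin)
        margins[expert_action] = 0.0
        supervised_loss = (q_values + margins).max() - q_values[expert_action]

    return td_loss + lambda_e * supervised_loss

# Demonstration transition where the expert chose action 2 but action 0 currently
# scores higher, so the supervised term adds a positive penalty to the TD error.
q = np.array([1.0, 0.2, 0.8])
q_next = np.array([0.5, 0.9, 0.1])
print(dqfd_loss(q, q_next, action=2, reward=1.0, gamma=0.99,
                is_demo=True, expert_action=2))
```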
1704.03732 | 1 | Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on | 1704.03732#1 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 2 | per- formance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the ï¬rst million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfDâs performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstra- tions to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algo- rithms for incorporating demonstration data into DQN. | 1704.03732#2 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
# Introduction

Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems, most notably defeating a top human expert at the game of Go (Silver et al. 2016). An important part of the success of these approaches has been to leverage the recent contributions to scalability and performance of deep learning (LeCun, Bengio, and Hinton 2015). The approach taken in (Mnih et al. 2015) builds a data set of previous experience using batch RL to train large convolutional neural networks in a supervised fashion from this data. By sampling from this data set rather than from current experience, the correlation in values from state distribution bias is mitigated, leading to good (in many cases, super-human) control policies.
These algorithms have also been applied to real-world settings such as autonomous helicopters and recommendation systems (Shani, Heckerman, and Brafman 2005). Typically these algorithms learn good control policies only after many millions of steps of very poor performance in simulation. This situation is acceptable when there is a perfectly accurate simulator; however, many real world problems do not come with such a simulator. Instead, in these situations, the agent must learn in the real domain with real consequences for its actions, which requires that the agent have good on-line performance from the start of learning. While accurate simulators are difficult to find, most of these problems have data of the system operating under a previous controller (either human or machine) that performs reasonably well. In this work, we make use of this demonstration data to pre-train the agent so that it can perform well in the task from the start of learning, and then continue improving from its own self-generated data. Enabling learning in this framework opens up the possibility of applying RL to many real world problems where demonstration data is common but accurate simulators do not exist.
We propose a new deep reinforcement learning algorithm, Deep Q-learning from Demonstrations (DQfD), which leverages even very small amounts of demonstration data to massively accelerate learning. DQfD initially pre-trains solely on the demonstration data using a combination
of temporal difference (TD) and supervised losses. The supervised loss enables the algorithm to learn to imitate the demonstrator, while the TD loss enables it to learn a self-consistent value function from which it can continue learning with RL. After pre-training, the agent starts interacting with the domain with its learned policy. The agent updates its network with a mix of demonstration and self-generated data. In practice, choosing the ratio between demonstration and self-generated data while learning is critical to improve the performance of the algorithm. One of our contributions is to use a prioritized replay mechanism (Schaul et al. 2016) to automatically control this ratio. DQfD out-performs pure reinforcement learning using Prioritized Dueling Double DQN (PDD DQN) (Schaul et al. 2016; van Hasselt, Guez, and Silver 2016; Wang et al. 2016) in 41 of 42 games on the first million steps, and on average it takes 83 million steps for PDD DQN to catch up to DQfD. In addition, DQfD out-performs pure imitation learning in mean score on 39 of 42 games and out-performs the best demonstration given in 14 of 42 games.
# Background

We adopt the standard Markov Decision Process (MDP) formalism for this work (Sutton and Barto 1998). An MDP is defined by a tuple ⟨S, A, R, T, γ⟩, which consists of a set of states S, a set of actions A, a reward function R(s, a), a transition function T(s, a, s′) = P(s′|s, a), and a discount factor γ. In each state s ∈ S, the agent takes an action a ∈ A. Upon taking this action, the agent receives a reward R(s, a) and reaches a new state s′, determined from the probability distribution P(s′|s, a). A policy π specifies for each state which action the agent will take. The goal of the agent is to find the policy π mapping states to actions that maximizes the expected discounted total reward over the agent's lifetime. The value Q^π(s, a) of a given state-action pair (s, a) is an estimate of the expected future reward that can be obtained from (s, a) when following policy π. The optimal value function Q*(s, a) provides maximal values in all states and is determined by solving the Bellman equation:
Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} Q*(s′, a′)

The optimal policy π is then π(s) = argmax_{a∈A} Q*(s, a).

DQN (Mnih et al. 2015) approximates the value function Q(s, a) with a deep neural network that outputs a set of action values Q(s, ·; θ) for a given state input s, where θ are the parameters of the network. There are two key components of DQN that make this work. First, it uses a separate target network that is copied every τ steps from the regular network so that the target Q-values are more stable. Second, the agent adds all of its experiences to a replay buffer D_replay, which is then sampled uniformly to perform updates on the network. The network is updated with the 1-step double Q-learning loss, computed from the current reward and the value of the next state:

J_DQ(Q) = (R(s, a) + γ Q(s_{t+1}, a^max_{t+1}; θ′) − Q(s, a; θ))^2

where θ′ are the parameters of the target network, and a^max_{t+1} = argmax_a Q(s_{t+1}, a; θ). Separating the value functions used for these two variables reduces the upward bias that is created with regular Q-learning updates.
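To make the double Q-learning update concrete, here is a minimal NumPy sketch of the 1-step target and TD error, assuming the online and target networks are given as callables that map a batch of states to a (batch, n_actions) array of action values; the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def double_dqn_td_error(q_online, q_target, states, actions, rewards, next_states, gamma=0.99):
    """1-step double-DQN TD error: the online network selects a^max_{t+1},
    the target network (parameters theta') evaluates it."""
    a_max = np.argmax(q_online(next_states), axis=1)             # a^max_{t+1} = argmax_a Q(s_{t+1}, a; theta)
    bootstrap = q_target(next_states)[np.arange(len(a_max)), a_max]
    targets = rewards + gamma * bootstrap                        # R(s, a) + gamma * Q(s_{t+1}, a^max_{t+1}; theta')
    q_taken = q_online(states)[np.arange(len(actions)), actions]
    return targets - q_taken                                     # J_DQ(Q) is the mean of the square of this

# Illustrative usage with a random stand-in for the Q-network:
rng = np.random.default_rng(0)
fake_q = lambda s: rng.normal(size=(len(s), 4))                  # 4 actions
td = double_dqn_td_error(fake_q, fake_q, np.zeros((8, 5)), np.zeros(8, dtype=int),
                         np.ones(8), np.zeros((8, 5)))
loss = np.mean(td ** 2)
```

The online network selects the argmax action while the target network evaluates it, which is the decoupling that reduces the upward bias discussed above.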
Prioritized experience replay (Schaul et al. 2016) modifies the DQN agent to sample more important transitions from its replay buffer more frequently. The probability of sampling a particular transition i is proportional to its priority, P(i) = p_i^α / Σ_k p_k^α, where the priority p_i = |δ_i| + ε, δ_i is the last TD error calculated for this transition, and ε is a small positive constant to ensure all transitions are sampled with some probability. To account for the change in the distribution, updates to the network are weighted with importance sampling weights, w_i = (1/(N · P(i)))^β, where N is the size of the replay buffer and β controls the amount of importance sampling, with no importance sampling when β = 0 and full importance sampling when β = 1. β is annealed linearly from β_0 to 1.
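A minimal sketch of proportional prioritization, assuming priorities are stored in a flat array rather than the sum-tree used in practice; names and default hyperparameter values are illustrative.

```python
import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.4, beta=0.6, eps=1e-3, seed=0):
    """Sample indices with P(i) = p_i^alpha / sum_k p_k^alpha, where p_i = |delta_i| + eps,
    and return importance-sampling weights w_i = (1 / (N * P(i)))^beta."""
    rng = np.random.default_rng(seed)
    priorities = np.abs(td_errors) + eps          # p_i = |delta_i| + eps
    probs = priorities ** alpha
    probs /= probs.sum()                          # P(i)
    n = len(td_errors)
    idx = rng.choice(n, size=batch_size, p=probs)
    weights = (1.0 / (n * probs[idx])) ** beta    # importance-sampling correction
    weights /= weights.max()                      # normalize by the max weight, as is common practice
    return idx, weights

idx, w = sample_prioritized(np.array([0.5, 2.0, 0.1, 1.0]), batch_size=2)
```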
# Related Work

Imitation learning is primarily concerned with matching the performance of the demonstrator. One popular algorithm, DAGGER (Ross, Gordon, and Bagnell 2011), iteratively produces new policies based on polling the expert policy outside its original state space, showing that this leads to no-regret over validation data in the online learning sense. DAGGER requires the expert to be available during training to provide additional feedback to the agent. In addition, it does not combine imitation with reinforcement learning, meaning it can never learn to improve beyond the expert as DQfD can.
Deeply AggreVaTeD (Sun et al. 2017) extends DAGGER to work with deep neural networks and continuous action spaces. Not only does it require an always-available expert, as DAGGER does, but the expert must also provide a value function in addition to actions. Similar to DAGGER, Deeply AggreVaTeD only does imitation learning and cannot learn to improve upon the expert.
Another popular paradigm is to set up a zero-sum game in which the learner chooses a policy and the adversary chooses a reward function (Syed and Schapire 2007; Syed, Bowling, and Schapire 2008; Ho and Ermon 2016). Demonstrations have also been used for inverse optimal control in high-dimensional, continuous robotic control problems (Finn, Levine, and Abbeel 2016). However, these approaches only do imitation learning and do not allow for learning from task rewards.
Demonstrations have also been shown to help with exploration problems in RL (Subramanian, Jr., and Thomaz 2016). There has also been recent interest in the combined imitation and RL problem. For example, the HAT algorithm transfers knowledge directly from human policies (Taylor, Suay, and Chernova 2011). Follow-ups to this work showed how expert advice or demonstrations can be used to shape rewards in the RL problem (Brys et al. 2015; Suay et al. 2016). Another approach is to use demonstrations to shape the policy found by approximate policy iteration (Kim et al. 2013; Chemali and Lezaric 2015).
Our algorithm works in a scenario where rewards are given by the environment used by the demonstrator. This framework was appropriately called Reinforcement Learning with Expert Demonstrations (RLED) in (Piot, Geist, and Pietquin 2014a) and is also evaluated in (Kim et al. 2013; Chemali and Lezaric 2015). Our setup is similar to (Piot, Geist, and Pietquin 2014a) in that we combine TD and classification losses in a batch algorithm in a model-free setting; ours differs in that our agent is pre-trained on the demonstration data initially and the batch of self-generated data grows over time and is used as experience replay to train deep Q-networks. In addition, a prioritized replay mechanism is used to balance the amount of demonstration data in each mini-batch. (Piot, Geist, and Pietquin 2014b) present interesting results showing that adding a TD loss to the supervised classification loss improves imitation learning even when there are no rewards.
Another work that is similarly motivated to ours is (Schaal 1996). This work is focused on real world learning on robots, and thus is also concerned with on-line performance. Similar to our work, they pre-train the agent with demonstration data before letting it interact with the task. However, they do not use supervised learning to pre-train their algorithm, and are only able to find one case where pre-training helps learning on Cart-Pole.

In a related imitation setting, the agent is provided with an entire demonstration as input in addition to the current state. The demonstration specifies the goal state that is wanted, but from different initial conditions. The agent is trained with target actions from more demonstrations. This setup also uses demonstrations, but requires a distribution of tasks with different initial conditions and goal states, and the agent can never learn to improve upon the demonstrations.

AlphaGo (Silver et al. 2016) takes a similar approach to our work in pre-training from demonstration data before interacting with the real task. AlphaGo first trains a policy network from a dataset of 30 million expert actions, using supervised learning to predict the actions taken by experts. It then uses this as a starting point to apply policy gradient updates during self-play, combined with planning rollouts. Here, we do not have a model available for planning, so we focus on the model-free Q-learning case.
Human Experience Replay (HER) is an algorithm in which the agent samples from a replay buffer that is mixed between agent and demonstration data, similar to our approach. Gains were only slightly better than a random agent, and were surpassed by their alternative approach, Human Checkpoint Replay, which requires the ability to set the state of the environment. While their algorithm is similar in that it samples from both datasets, it does not pre-train the agent or use a supervised loss. Our results show higher scores over a larger variety of games, without requiring full access to the environment. Replay Buffer Spiking (RBS) is another similar approach where the DQN agent's replay buffer is initialized with demonstration data, but they do not pre-train the agent for good initial performance or keep the demonstration data permanently.
The work that most closely relates to ours is a workshop paper presenting Accelerated DQN with Expert Trajectories (ADET) (Lakshminarayanan, Ozair, and Bengio 2016). They are also combining TD and classification losses in a deep Q-learning setup. They use a trained DQN agent to generate their demonstration data, which on most games is better than human data. It also guarantees that the policy used by the demonstrator can be represented by the apprenticeship agent as they are both using the same state input and network architecture. They use a cross-entropy classification loss rather than the large margin loss DQfD uses, and they do not pre-train the agent to perform well from its first interactions with the environment.
# Deep Q-Learning from Demonstrations

In many real-world settings of reinforcement learning, we have access to data of the system being operated by its previous controller, but we do not have access to an accurate simulator of the system. Therefore, we want the agent to learn as much as possible from the demonstration data before running on the real system. The goal of the pre-training phase is to learn to imitate the demonstrator with a value function that satisfies the Bellman equation, so that it can be updated with TD updates once the agent starts interacting with the environment. During this pre-training phase, the agent samples mini-batches from the demonstration data and updates the network by applying four losses: the 1-step double Q-learning loss, an n-step double Q-learning loss, a supervised large margin classification loss, and an L2 regularization loss on the network weights and biases. The supervised loss is used for classification of the demonstrator's actions, while the Q-learning loss ensures that the network satisfies the Bellman equation and can be used as a starting point for TD learning.
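As a rough sketch of the pre-training phase, the loop below touches only the demonstration buffer; `sample_demo_batch`, `combined_loss`, `gradient_step`, and `copy_weights` are hypothetical placeholders standing in for the losses and update machinery defined in the following paragraphs, not functions from the paper.

```python
def pretrain(q_net, target_net, demo_buffer, num_steps, target_update_period,
             sample_demo_batch, combined_loss, gradient_step, copy_weights):
    """Pre-training phase: only demonstration data is used and the environment is never touched."""
    for t in range(1, num_steps + 1):
        batch = sample_demo_batch(demo_buffer)           # prioritized mini-batch of demonstration transitions
        loss = combined_loss(q_net, target_net, batch)   # J_DQ + lambda_1*J_n + lambda_2*J_E + lambda_3*J_L2
        gradient_step(q_net, loss)                       # update theta
        if t % target_update_period == 0:
            copy_weights(target_net, q_net)              # theta' <- theta
    return q_net
```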
The supervised loss is critical for the pre-training to have any effect. Since the demonstration data is necessarily covering a narrow part of the state space and not taking all possible actions, many state-actions have never been taken and have no data to ground them to realistic values. If we were to pre-train the network with only Q-learning updates towards the max value of the next state, the network would update towards the highest of these ungrounded variables and the network would propagate these values throughout the Q function. We add a large margin classification loss (Piot, Geist, and Pietquin 2014a):

J_E(Q) = max_{a∈A} [Q(s, a) + l(a_E, a)] − Q(s, a_E)

where a_E is the action the expert demonstrator took in state s,
and l(a_E, a) is a margin function that is 0 when a = a_E and positive otherwise. This loss forces the values of the other actions to be at least a margin lower than the value of the demonstrator's action. Adding this loss grounds the values of the unseen actions to reasonable values, and makes the greedy policy induced by the value function imitate the demonstrator. If the algorithm were pre-trained with only this supervised loss, there would be nothing constraining the values between consecutive states, and the Q-network would not satisfy the Bellman equation, which is required to improve the policy on-line with TD learning.
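A minimal NumPy sketch of the large margin loss follows, assuming a constant positive margin for all non-expert actions; the default margin value here is only illustrative, not a restatement of the paper's hyperparameter.

```python
import numpy as np

def large_margin_loss(q_values, expert_actions, margin=0.8):
    """J_E(Q) = max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E), averaged over the batch,
    with l(a_E, a) = margin when a != a_E and 0 when a = a_E."""
    batch, n_actions = q_values.shape
    l = np.full((batch, n_actions), margin)
    l[np.arange(batch), expert_actions] = 0.0            # l(a_E, a_E) = 0
    augmented_max = np.max(q_values + l, axis=1)         # max_a [Q(s, a) + l(a_E, a)]
    expert_q = q_values[np.arange(batch), expert_actions]
    return float(np.mean(augmented_max - expert_q))

# Example: two states, three actions, expert chose action 0 then action 2.
print(large_margin_loss(np.array([[1.0, 0.5, 0.2], [0.1, 0.9, 0.3]]), np.array([0, 2])))
```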
Adding n-step returns (with n = 10) helps propagate the values of the expert's trajectory to all the earlier states, leading to better pre-training. The n-step return is:

r_t + γ r_{t+1} + ... + γ^{n-1} r_{t+n-1} + γ^n max_a Q(s_{t+n}, a),
which we calculate using the forward view, similar to A3C (Mnih et al. 2016).
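A forward-view computation of this return can be sketched as follows, assuming the n rewards along the sampled trajectory and the bootstrap value max_a Q(s_{t+n}, a) from the target network are available; this is an illustration rather than the paper's implementation.

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1} + gamma^n * max_a Q(s_{t+n}, a),
    computed in the forward view from the n rewards following (s_t, a_t)."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g + (gamma ** len(rewards)) * bootstrap_value

# Example with n = 3 and a bootstrap value supplied by the target network:
print(n_step_return([1.0, 0.0, 1.0], bootstrap_value=5.0))
```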
We also add an L2 regularization loss applied to the weights and biases of the network to help prevent it from over-fitting on the relatively small demonstration dataset. The overall loss used to update the network is a combination of all four losses:

J(Q) = J_DQ(Q) + λ_1 J_n(Q) + λ_2 J_E(Q) + λ_3 J_L2(Q).
The λ parameters control the weighting between the losses. We examine removing some of these losses in a later section.
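The weighted combination itself is a one-liner; in the sketch below the individual loss terms are assumed to be computed elsewhere, and the λ defaults are placeholders rather than the paper's tuned values.

```python
def total_loss(j_dq, j_n, j_e, j_l2, lam1=1.0, lam2=1.0, lam3=1e-5, is_demo=True):
    """J(Q) = J_DQ(Q) + lambda_1*J_n(Q) + lambda_2*J_E(Q) + lambda_3*J_L2(Q).

    The supervised term is only applied to demonstration transitions; for self-generated
    data lambda_2 is effectively 0, as described in the next paragraph."""
    lam2_eff = lam2 if is_demo else 0.0
    return j_dq + lam1 * j_n + lam2_eff * j_e + lam3 * j_l2
```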
Once the pre-training phase is complete, the agent starts acting on the system, collecting self-generated data, and adding it to its replay buffer D_replay. Data is added to the replay buffer until it is full, and then the agent starts overwriting old data in that buffer. However, the agent never overwrites the demonstration data. For proportional prioritized sampling, different small positive constants, ε_a and ε_d, are added to the priorities of the agent and demonstration transitions to control the relative sampling of demonstration versus agent data. All the losses are applied to the demonstration data in both phases, while the supervised loss is not applied to self-generated data (λ_2 = 0).
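A sketch of how such a mixed buffer could be organized is shown below; the class layout, constant values, and method names are assumptions for illustration, not the paper's implementation.

```python
from collections import deque

class MixedReplayBuffer:
    """Demonstration transitions are kept permanently; self-generated data lives in a bounded deque."""

    def __init__(self, demo_transitions, agent_capacity, eps_agent=0.001, eps_demo=1.0):
        self.demo = list(demo_transitions)           # never overwritten
        self.agent = deque(maxlen=agent_capacity)    # oldest self-generated transitions are overwritten
        self.eps_agent = eps_agent                   # epsilon_a: small priority bonus for agent data
        self.eps_demo = eps_demo                     # epsilon_d: larger bonus, boosts demo sampling

    def add_agent_transition(self, transition):
        self.agent.append(transition)

    def priority(self, td_error, is_demo):
        bonus = self.eps_demo if is_demo else self.eps_agent
        return abs(td_error) + bonus
```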
Overall, Deep Q-learning from Demonstrations (DQfD)
differs from PDD DQN in six key ways:

- Demonstration data: DQfD is given a set of demonstration data, which it retains in its replay buffer permanently.
- Pre-training: DQfD initially trains solely on the demonstration data before starting any interaction with the environment.
- Supervised losses: In addition to TD losses, a large margin supervised loss is applied that pushes the value of the demonstrator's actions above the other action values (Piot, Geist, and Pietquin 2014a).
- L2 regularization losses: The algorithm also adds L2 regularization losses on the network weights to prevent over-fitting on the demonstration data.
- N-step TD losses: The agent updates its Q-network with targets from a mix of 1-step and n-step returns.
- Demonstration priority bonus: The priorities of demonstration transitions are given a bonus of ε_d, to boost the frequency that they are sampled.
Pseudo-code is sketched in Algorithm 1. The behavior policy π_{εQ_θ} is ε-greedy with respect to Q_θ.
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
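To make the loss combination described in the chunk above concrete, here is a minimal PyTorch-style sketch, assuming a Q-network `q_net` and a sampled `batch` with `states`, `actions`, and an `is_demo` mask. The helper names (`td_1step`, `td_nstep`) and the margin and λ values are illustrative placeholders, not the paper's exact implementation.

```python
import torch

def large_margin_loss(q_values, expert_actions, margin=0.8):
    """Large-margin supervised loss: max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = margin for a != a_E and 0 for the demonstrated action."""
    margins = torch.full_like(q_values, margin)
    margins.scatter_(1, expert_actions.unsqueeze(1), 0.0)  # zero margin on a_E
    best_augmented = (q_values + margins).max(dim=1).values
    expert_q = q_values.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
    return (best_augmented - expert_q).mean()

def dqfd_loss(q_net, batch, td_1step, td_nstep,
              lambda_n=1.0, lambda_e=1.0, lambda_l2=1e-5):
    """Weighted sum of the four losses: 1-step TD, n-step TD,
    large-margin supervised loss (demonstration transitions only), and L2."""
    q_values = q_net(batch.states)               # shape [batch_size, n_actions]
    j_dq = td_1step(q_values, batch)             # 1-step TD loss
    j_n = td_nstep(q_values, batch)              # n-step return TD loss
    if batch.is_demo.any():                      # supervised loss only on demo data
        j_e = large_margin_loss(q_values[batch.is_demo],
                                batch.actions[batch.is_demo])
    else:
        j_e = q_values.sum() * 0.0               # no demo transitions in this batch
    j_l2 = sum((p ** 2).sum() for p in q_net.parameters())
    return j_dq + lambda_n * j_n + lambda_e * j_e + lambda_l2 * j_l2
```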
1704.03732 | 23 | Algorithm 1 (steps 2-17):
2:  for steps t ∈ {1, 2, . . . , k} do
3:      Sample a mini-batch of n transitions from D^replay with prioritization
4:      Calculate loss J(Q) using target network
5:      Perform a gradient descent step to update θ
6:      if t mod τ = 0 then θ′ ← θ end if
7:  end for
8:  for steps t ∈ {1, 2, . . .} do
9:      Sample action from behavior policy a ∼ π^ε_Qθ
10:     Play action a and observe (s′, r)
11:     Store (s, a, r, s′) into D^replay, overwriting oldest self-generated transition if over capacity
12:     Sample a mini-batch of n transitions from D^replay with prioritization
13:     Calculate loss J(Q) using target network
14:     Perform a gradient descent step to update θ
15:     if t mod τ = 0 then θ′ ← θ end if
16:     s ← s′
17: end for
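Read as code, the two loops above amount to a pre-training phase and an interaction phase that share the same update rule. A minimal Python sketch, assuming `env`, `agent`, and `replay` objects with the obvious methods (all names here are assumptions, not the paper's code):

```python
import itertools

def train_dqfd(env, agent, replay, demo_data, k_pretrain, tau):
    """Two phases of Algorithm 1: pre-train on demonstrations only,
    then keep learning while acting in the environment."""
    replay.add_demonstrations(demo_data)        # demonstration data is never evicted
    for t in range(k_pretrain):                 # steps 2-7: no environment interaction
        batch = replay.sample_prioritized()
        agent.gradient_step(batch)              # loss J(Q) computed with the target network
        if t % tau == 0:
            agent.update_target()               # θ' <- θ
    s = env.reset()
    for t in itertools.count():                 # steps 8-17
        a = agent.epsilon_greedy_action(s)      # behavior policy
        s_next, r, done = env.step(a)
        replay.add_agent_transition(s, a, r, s_next, done)  # may evict oldest self-generated data
        batch = replay.sample_prioritized()
        agent.gradient_step(batch)
        if t % tau == 0:
            agent.update_target()
        s = env.reset() if done else s_next
```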
# Experimental Setup | 1704.03732#23 | Deep Q-learning from Demonstrations
1704.03732 | 24 | # Experimental Setup
We evaluated DQfD on the Arcade Learning Environment (ALE) (Bellemare et al. 2013). ALE is a set of Atari games that are a standard benchmark for DQN and contains many games on which humans still perform better than the best learning agents. The agent plays the Atari games from a down-sampled 84x84 image of the game screen that has been converted to greyscale, and the agent stacks four of these frames together as its state. The agent must output one of 18 possible actions for each game. The agent applies a discount factor of 0.99 and all of its actions are repeated for four Atari frames. Each episode is initialized with up to 30 no-op actions to provide random starting positions. The scores reported are the scores in the Atari game, regardless of how the agent is representing reward internally.
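As a concrete illustration of the state construction just described (grayscale conversion, 84x84 down-sampling, and a stack of four frames), here is a minimal sketch; it assumes an RGB frame as input, uses OpenCV only for resizing, and leaves the four-frame action repeat and the up-to-30 random no-ops to the environment wrapper.

```python
from collections import deque

import cv2
import numpy as np

class AtariStateBuilder:
    """Builds the stacked 84x84 grayscale observation described above."""

    def __init__(self, n_frames=4):
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_rgb_frame):
        processed = self._process(first_rgb_frame)
        for _ in range(self.frames.maxlen):
            self.frames.append(processed)          # fill the stack with the first frame
        return np.stack(self.frames, axis=0)       # shape (4, 84, 84)

    def step(self, rgb_frame):
        self.frames.append(self._process(rgb_frame))
        return np.stack(self.frames, axis=0)

    @staticmethod
    def _process(rgb_frame):
        gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)
        return cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
```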
For all of our experiments, we evaluated three different
algorithms, each averaged across four trials:
• Full DQfD algorithm with human demonstrations
• PDD DQN learning without any demonstration data
• Supervised imitation from demonstration data without any environment interaction | 1704.03732#24 | Deep Q-learning from Demonstrations
1704.03732 | 25 | • Supervised imitation from demonstration data without any environment interaction
We performed informal parameter tuning for all the algorithms on six Atari games and then used the same parameters for the entire set of games. The parameters used for the algorithms are shown in the appendix. Our coarse search over prioritization and n-step return parameters led to the same best parameters for DQfD and PDD DQN. PDD DQN differs from DQfD because it does not have demonstration data, pre-training, supervised losses, or regularization losses. We included n-step returns in PDD DQN to provide a better baseline for comparison between DQfD and PDD DQN. All three algorithms use the dueling state-advantage convolutional network architecture (Wang et al. 2016).
For the supervised imitation comparison, we performed supervised classification of the demonstrator's actions using a cross-entropy loss, with the same network architecture and L2 regularization used by DQfD. The imitation algorithm did not use any TD loss. Imitation learning only learns from the pre-training and not from any additional interactions. | 1704.03732#25 | Deep Q-learning from Demonstrations
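A minimal sketch of that imitation baseline's objective, assuming the network's Q-outputs are used directly as classification logits (PyTorch-style; the L2 weight is a placeholder):

```python
import torch.nn.functional as F

def imitation_loss(q_net, demo_states, demo_actions, l2_weight=1e-5):
    """Cross-entropy classification of the demonstrator's actions, plus L2;
    no TD loss, so the baseline learns only during pre-training."""
    logits = q_net(demo_states)                  # [batch_size, n_actions]
    ce = F.cross_entropy(logits, demo_actions)   # demo_actions: integer action ids
    l2 = sum((p ** 2).sum() for p in q_net.parameters())
    return ce + l2_weight * l2
```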
1704.03732 | 26 | We ran experiments on a randomly selected subset of 42 Atari games. We had a human player play each game between three and twelve times. Each episode was played either until the game terminated or for 20 minutes. During game play, we logged the agent's state, actions, rewards, and terminations. The human demonstrations range from 5,574 to 75,472 transitions per game. DQfD learns from a very small dataset compared to other similar work, as AlphaGo (Silver et al. 2016) learns from 30 million human transitions, and DQN (Mnih et al. 2015) learns from over 200 million frames. DQfD's smaller demonstration dataset makes it more difficult to learn a good representation without over-fitting. The demonstration scores for each game are shown in a table in the Appendix. Our human demonstrator is much better than PDD DQN on some games (e.g. Private Eye, Pitfall), but much worse than PDD DQN on many games (e.g. Breakout, Pong). | 1704.03732#26 | Deep Q-learning from Demonstrations
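The logging described here only needs the quantities a standard agent-environment loop already produces; a sketch, with all object and function names assumed:

```python
import time

def record_human_episode(env, get_human_action, max_seconds=20 * 60):
    """Play one demonstration episode (until termination or 20 minutes) and
    return the logged (state, action, reward, terminal) tuples."""
    transitions = []
    s, done, start = env.reset(), False, time.time()
    while not done and time.time() - start < max_seconds:
        a = get_human_action()                 # e.g. read the player's controller
        s_next, r, done = env.step(a)
        transitions.append((s, a, r, done))
        s = s_next
    return transitions
```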
1704.03732 | 27 | We found that in many of the games where the human player is better than DQN, it was due to DQN being trained with all rewards clipped to 1. For example, in Private Eye, DQN has no reason to select actions that reward 25,000 versus actions that reward 10. To make the reward function used by the human demonstrator and the agent more consistent, we used unclipped rewards and converted the rewards using a log scale: r_agent = sign(r) · log(1 + |r|). This transformation keeps the rewards over a reasonable scale for the neural network to learn, while conveying important information about the relative scale of individual rewards. These adapted rewards are used internally by all the algorithms in our experiments. Results are still reported using actual game scores as is typically done in the Atari literature (Mnih et al. 2015). | 1704.03732#27 | Deep Q-learning from Demonstrations
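The transformation is one line of code; on the Private Eye example above it maps a reward of 25,000 to about 10.1 and a reward of 10 to about 2.4, so both stay in a range the network can fit while their ordering is preserved.

```python
import numpy as np

def log_scale_reward(r):
    """r_agent = sign(r) * log(1 + |r|), applied to the unclipped game reward."""
    return np.sign(r) * np.log1p(np.abs(r))

# log_scale_reward(25000.0) ~= 10.1, log_scale_reward(10.0) ~= 2.4
```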
1704.03732 | 28 | # Results
First, we show learning curves in Figure 1 for three games: Hero, Pitfall, and Road Runner. On Hero and Pitfall, the human demonstrations enable DQfD to achieve a score higher than any previously published result. Videos for both games are available at https://www.youtube.com/watch?v=JR6wmLaYuu4. On Hero, DQfD achieves a higher score than any of the human demonstrations as well as any previously published result. Pitfall may be the most difficult Atari game, as it has very sparse positive rewards and dense negative rewards. No previous approach achieved any positive rewards on this game, while DQfD's best score on this game averaged over a 3 million step period is 394.0.
On Road Runner, agents typically learn super-human policies with a score exploit that differs greatly from human play. Our demonstrations are only human and have a maximum score of 20,200. Road Runner is the game with the smallest set of human demonstrations (only 5,574 transitions). Despite these factors, DQfD still achieves a higher score than PDD DQN for the first 36 million steps and matches PDD DQN's performance after that. | 1704.03732#28 | Deep Q-learning from Demonstrations
1704.03732 | 29 | The right subplot in Figure 1 shows the ratio of how often the demonstration data was sampled versus how much it would be sampled with uniform sampling. For the most difficult games like Pitfall and Montezuma's Revenge, the demonstration data is sampled more frequently over time. For most other games, the ratio converges to a near constant level, which differs for each game.
In real world tasks, the agent must perform well from its very first action and must learn quickly. DQfD performed better than PDD DQN on the first million steps on 41 of 42 games. In addition, on 31 games, DQfD starts out with higher performance than pure imitation learning, as the addition of the TD loss helps the agent generalize the demonstration data better. On average, PDD DQN does not surpass the performance of DQfD until 83 million steps into the task and never surpasses it in mean scores. | 1704.03732#29 | Deep Q-learning from Demonstrations
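One plausible way to compute the plotted up-sample ratio (the exact bookkeeping is an assumption, since the text only defines it informally):

```python
def demo_upsample_ratio(n_demo_sampled, n_sampled, n_demo_in_buffer, n_in_buffer):
    """How much more often demonstration transitions were drawn than
    uniform sampling over the replay buffer would have drawn them."""
    sampled_fraction = n_demo_sampled / n_sampled
    uniform_fraction = n_demo_in_buffer / n_in_buffer
    return sampled_fraction / uniform_fraction
```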
1704.03732 | 30 | In addition to boosting initial performance, DQfD is able to leverage the human demonstrations to learn better policies on the most difficult Atari games. We compared DQfD's scores over 200 million steps with that of other deep reinforcement learning approaches: DQN, Double DQN, Prioritized DQN, Dueling DQN, PopArt, DQN+CTS, and DQN+PixelCNN (Mnih et al. 2015; van Hasselt, Guez, and Silver 2016; Schaul et al. 2016; Wang et al. 2016; van Hasselt et al. 2016; Ostrovski et al. 2017). We took the best 3 million step window averaged over 4 seeds for the DQfD scores. DQfD achieves better scores than these algorithms on 11 of 42 games, shown in Table 1. Note that we do not compare with A3C (Mnih et al. 2016) or Reactor (Gruslys et al. 2017) as the only published results are for human starts, and we do not compare with UNREAL (Jaderberg et al. 2016) as they select the best hyper-parameters per game. Despite this fact, DQfD still out-performs the best | 1704.03732#30 | Deep Q-learning from Demonstrations
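A sketch of the reporting metric just described (best 3-million-step window of the across-seed mean); whether seeds are averaged before or after windowing is an assumption here:

```python
import numpy as np

def best_window_score(scores_per_seed, window):
    """scores_per_seed: array [n_seeds, n_eval_points] of episode scores.
    Returns the best sliding-window average of the across-seed mean."""
    mean_over_seeds = np.mean(scores_per_seed, axis=0)
    kernel = np.ones(window) / window
    windowed_means = np.convolve(mean_over_seeds, kernel, mode="valid")
    return windowed_means.max()
```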
1704.03732 | 31 | with UNREAL (Jaderberg et al. 2016) as they select the best hyper-parameters per game. Despite this fact, DQfD still out-performs the best UNREAL results on 10 games. DQN with count-based exploration (Ostrovski et al. 2017) is designed for and achieves the best results on the most difficult exploration games. On the six sparse reward, hard exploration games both algorithms were run on, DQfD learns better policies on four of six games. | 1704.03732#31 | Deep Q-learning from Demonstrations
1704.03732 | 32 | DQfD out-performs the worst demonstration episode it was given in 29 of 42 games and it learns to play better than the best demonstration episode in 14 of the games: Amidar, Atlantis, Boxing, Breakout, Crazy Climber, Defender, Enduro, Fishing Derby, Hero, James Bond, Kung Fu Master, Pong, Road Runner, and Up N Down. In comparison, pure imitation learning is worse than the demonstrator's performance in every game.
Figure 2 shows comparisons of DQfD with λ1 and λ2 set to 0, on two games where DQfD achieved state-of-the-art results: Montezuma's Revenge and Q-Bert. As expected, | 1704.03732#32 | Deep Q-learning from Demonstrations
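In terms of the loss weights used in the earlier sketch, the two ablations correspond to zeroing one term of the combined loss; the parameter names below are the illustrative ones from that sketch, not the paper's notation.

```python
# λ1 weights the n-step TD loss, λ2 the large-margin supervised loss.
ABLATIONS = {
    "DQfD":               dict(lambda_n=1.0, lambda_e=1.0),
    "No Supervised Loss": dict(lambda_n=1.0, lambda_e=0.0),
    "No n-step TD loss":  dict(lambda_n=0.0, lambda_e=1.0),
}
```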
1704.03732 | 33 | [Figure 1 plot data omitted: learning-curve panels for Hero, Pitfall, and Road Runner (training episode returns vs. training iteration, comparing DQfD, imitation, and PDD DQN), plus a panel of the demonstration-data up-sample ratio over training for Hero, Montezuma's Revenge, Pitfall, Q-Bert, and Road Runner.] | 1704.03732#33 | Deep Q-learning from Demonstrations
1704.03732 | 35 | [Figure 2 plot data omitted: panels "Loss Ablations: Montezuma Revenge", "Loss Ablations: Qbert", "Related Work: Montezuma Revenge", and "Related Work: Qbert" (training episode returns vs. training iteration), comparing DQfD, DQfD without the supervised loss, DQfD without the n-step TD loss, ADET, Human Experience Replay, and Replay Buffer Spiking.] | 1704.03732#35 | Deep Q-learning from Demonstrations
1704.03732 | 36 | Figure 2: The left plots show on-line rewards of DQfD with some losses removed on the games of Montezuma's Revenge and Q-Bert. Removing either loss degrades the performance of the algorithm. The right plots compare DQfD with three algorithms from the related work section. The other approaches do not perform as well as DQfD, particularly on Montezuma's Revenge.
pre-training without any supervised loss results in a network trained towards ungrounded Q-learning targets and the agent starts with much lower performance and is slower to improve. Removing the n-step TD loss has nearly as large an impact on initial performance, as the n-step TD loss greatly helps in learning from the limited demonstration dataset. | 1704.03732#36 | Deep Q-learning from Demonstrations
1704.03732 | 37 | Game: DQfD score / previous best score (previous best algorithm)
Alien: 4745.9 / 4461.4 (Dueling DQN, Wang et al. 2016)
Asteroids: 3796.4 / 2869.3 (PopArt, van Hasselt et al. 2016)
Atlantis: 920213.9 / 395762.0 (Prior. Dueling DQN, Wang et al. 2016)
Battle Zone: 41971.7 / 37150.0 (Dueling DQN, Wang et al. 2016)
Gravitar: 1693.2 / 859.1 (DQN+PixelCNN, Ostrovski et al. 2017)
Hero: 105929.4 / 23037.7 (Prioritized DQN, Schaul et al. 2016)
Montezuma Revenge: 4739.6 / 3705.5 (DQN+CTS, Ostrovski et al. 2017)
Pitfall: 50.8 / 0.0 (Prior. Dueling DQN, Wang et al. 2016)
Private Eye: 40908.2 / 15806.5 (DQN+PixelCNN, Ostrovski et al. 2017)
Q-Bert: 21792.7 / 19220.3 (Dueling DQN, Wang et al. 2016)
Up N Down: 82555.0 / 44939.6 (Dueling DQN, Wang et al. 2016) | 1704.03732#37 | Deep Q-learning from Demonstrations
1704.03732 | 38 | Table 1: Scores for the 11 games where DQfD achieves higher scores than any previously published deep RL result using random no-op starts. Previous results take the best agent at its best iteration and evaluate it for 100 episodes. DQfD scores are the best 3 million step window averaged over four seeds, which is 508 episodes on average.
The right subplots in Figure 2 compare DQfD with three related algorithms for leveraging demonstration data in DQN:
• Replay Buffer Spiking (RBS) (Lipton et al. 2016)
• Human Experience Replay (HER) (Hosu and Rebedea 2016)
• Accelerated DQN with Expert Trajectories (ADET) (Lakshminarayanan, Ozair, and Bengio 2016) | 1704.03732#38 | Deep Q-learning from Demonstrations
1704.03732 | 39 | • Human Experience Replay (HER) (Hosu and Rebedea 2016)
• Accelerated DQN with Expert Trajectories (ADET) (Lakshminarayanan, Ozair, and Bengio 2016)
RBS is simply PDD DQN with the replay buffer initially full of demonstration data. HER keeps the demonstration data and mixes demonstration and agent data in each mini-batch. ADET is essentially DQfD with the large margin supervised loss replaced with a cross-entropy loss. The results show that all three of these approaches are worse than DQfD in both games. Having a supervised loss is critical to good performance, as both DQfD and ADET perform much better than the other two algorithms. All the algorithms use the exact same demonstration data used for DQfD. We included the prioritized replay mechanism and the n-step returns in all of these algorithms to make them as strong a comparison as possible. | 1704.03732#39 | Deep Q-learning from Demonstrations
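Expressed as variations on the same components, the comparison above can be summarized compactly; the flag names below are illustrative, not from any of the papers' code.

```python
BASELINES = {
    "Replay Buffer Spiking":   dict(demos_kept_permanently=False, supervised_loss=None),
    "Human Experience Replay": dict(demos_kept_permanently=True,  supervised_loss=None),
    "ADET":                    dict(demos_kept_permanently=True,  supervised_loss="cross_entropy"),
    "DQfD":                    dict(demos_kept_permanently=True,  supervised_loss="large_margin"),
}
```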
1704.03732 | 40 | # Discussion
The learning framework that we have presented in this paper is one that is very common in real world problems such as controlling data centers, autonomous vehicles (Hester and Stone 2013), or recommendation systems (Shani, Heckerman, and Brafman 2005). In these problems, typically there is no accurate simulator available, and learning must be performed on the real system with real consequences. However, there is often data available of the system being operated by a previous controller. We have presented a new algorithm called DQfD that takes advantage of this data to accelerate learning on the real system. It first pre-trains solely on demonstration data, using a combination of 1-step TD, n-step TD, supervised, and regularization losses so that it has a reasonable policy that is a good starting point for learning in the task. Once it starts interacting with the task, it continues learning by sampling from both its self-generated data as well as the demonstration data. The ratio of both types of data in each mini-batch is automatically controlled by a prioritized-replay mechanism. | 1704.03732#40 | Deep Q-learning from Demonstrations
1704.03732 | 41 | We have shown that DQfD gets a large boost in initial performance compared to PDD DQN. DQfD has better performance on the first million steps than PDD DQN on 41 of 42 Atari games, and on average it takes DQN 82 million steps to match DQfD's performance. On most real world tasks, an agent may never get hundreds of millions of steps from which to learn. We also showed that DQfD out-performs three other algorithms for leveraging demonstration data in RL. The fact that DQfD out-performs all these algorithms makes it clear that it is the better choice for any real-world application of RL where this type of demonstration data is available.
In addition to its early performance boost, DQfD is able to leverage the human demonstrations to achieve state-of-the-art results on 11 Atari games. Many of these games are the hardest exploration games (i.e. Montezuma's Revenge, Pitfall, Private Eye) where the demonstration data can be used in place of smarter exploration. This result enables the deployment of RL to problems where more intelligent exploration would otherwise be required. | 1704.03732#41 | Deep Q-learning from Demonstrations
1704.03732 | 42 | DQfD achieves these results despite having a very small amount of demonstration data (5,574 to 75,472 transitions per game) that can be easily generated in just a few minutes of gameplay. DQN and DQfD receive three orders of magnitude more interaction data for RL than demonstration data. DQfD demonstrates the gains that can be achieved by adding just a small amount of demonstration data with the right algorithm. As the related work comparison shows, naively adding (e.g. only pre-training or filling the replay buffer) this small amount of data to a pure deep RL algorithm does not provide similar benefit and can sometimes be detrimental. | 1704.03732#42 | Deep Q-learning from Demonstrations
1704.03732 | 43 | These results may seem obvious given that DQfD has access to privileged data, but the rewards and demonstrations are mathematically dissimilar training signals, and naive approaches to combining them can have disastrous results. Simply doing supervised learning on the human demonstrations is not successful, while DQfD learns to out-perform the best demonstration in 14 of 42 games. DQfD also out-performs three prior algorithms for incorporating demonstration data into DQN. We argue that the combination of all four losses during pre-training is critical for the agent to learn a coherent representation that is not destroyed by the switch in training signals after pre-training. Even after pre-training, the agent must continue using the expert data. In particular, the right sub-figure of Figure 1 shows that the ratio of expert data needed (selected by prioritized replay) grows during the interaction phase for the most difficult exploration games, where the demonstration data becomes more useful as the agent reaches new screens in the game. RBS shows an example where just having the demonstration data initially is not enough to provide good performance. | 1704.03732#43 | Deep Q-learning from Demonstrations
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
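The "four losses" mentioned in the chunk above are the 1-step temporal-difference loss, the n-step temporal-difference loss, the large-margin supervised classification loss on the demonstrator's actions, and L2 regularization. The sketch below is a minimal, hedged illustration of how such a combined objective can be assembled, not the authors' released code: `q_net`, `q_target`, the batch layout, the `is_demo` mask, and the Huber (smooth L1) form of the TD terms are assumptions of this example, while the loss weights, margin, and n = 10 follow the values in the supplementary material.

```python
# Minimal DQfD-style combined loss (illustrative sketch, PyTorch).
import torch
import torch.nn.functional as F

def dqfd_loss(q_net, q_target, batch, gamma=0.99, n=10,
              lambda1=1.0, lambda2=1.0, lambda3=1e-5, margin=0.8):
    # Assumed batch layout: states, actions, 1-step rewards, next states,
    # done flags, discounted n-step returns, states n steps ahead, n-step
    # done flags, and a 0/1 mask marking demonstration transitions.
    s, a, r, s1, done, r_n, s_n, done_n, is_demo = batch

    q = q_net(s)                                    # Q(s, .)
    q_sa = q.gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)

    with torch.no_grad():
        # Double-DQN targets: action chosen by the online net,
        # evaluated by the target net.
        a1 = q_net(s1).argmax(dim=1, keepdim=True)
        target_1 = r + gamma * (1 - done) * q_target(s1).gather(1, a1).squeeze(1)
        a_n = q_net(s_n).argmax(dim=1, keepdim=True)
        target_n = r_n + gamma ** n * (1 - done_n) * q_target(s_n).gather(1, a_n).squeeze(1)

    j_dq = F.smooth_l1_loss(q_sa, target_1)   # 1-step TD loss
    j_n = F.smooth_l1_loss(q_sa, target_n)    # n-step TD loss

    # Large-margin supervised loss on demonstration transitions only:
    # max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E), with l = margin when a != a_E.
    margins = torch.full_like(q, margin)
    margins.scatter_(1, a.unsqueeze(1), 0.0)
    j_e = (((q + margins).max(dim=1).values - q_sa) * is_demo).mean()

    j_l2 = sum((p ** 2).sum() for p in q_net.parameters())  # L2 regularization

    return j_dq + lambda1 * j_n + lambda2 * j_e + lambda3 * j_l2
```

During pre-training this objective is applied to demonstration data alone, so the supervised term pushes the demonstrated action at least `margin` above the alternatives while the TD terms keep the value estimates self-consistent; details such as prioritized-replay importance weights are omitted in this sketch.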
1704.03732 | 44 | Learning from human demonstrations is particularly difficult. In most games, imitation learning is unable to perfectly classify the demonstrator's actions even on the demonstration dataset. Humans may play the games in a way that differs greatly from a policy that an agent would learn, and may be using information that is not available in the agent's state representation. In future work, we plan to measure these differences between demonstration and agent data to inform approaches that derive more value from the demonstrations. Another future direction is to apply these concepts to domains with continuous actions, where the classification loss becomes a regression loss.
Acknowledgments The authors would like to thank Keith Anderson, Chris Apps, Ben Coppin, Joe Fenton, Nando de Freitas, Chris Gamble, Thore Graepel, Georg Ostrovski, Cosmin Paduraru, Jack Rae, Amir Sadik, Jon Scholz, David Silver, Toby Pohlen, Tom Stepleton, Ziyu Wang, and many others at DeepMind for insightful discussions, code contributions, and other efforts. | 1704.03732#44 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 45 | References
[Abbeel et al. 2007] Abbeel, P.; Coates, A.; Quigley, M.; and Ng, A. Y. 2007. An application of reinforcement learning to aerobatic helicopter flight. In Advances in Neural Information Processing Systems (NIPS).
[Bellemare et al. 2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR) 47:253–279.
[Brys et al. 2015] Brys, T.; Harutyunyan, A.; Suay, H.; Chernova, S.; Taylor, M.; and Nowé, A. 2015. Reinforcement learning from demonstration through shaping. In International Joint Conference on Artificial Intelligence (IJCAI).
[Cederborg et al. 2015] Cederborg, T.; Grover, I.; Isbell, C.; and Thomaz, A. 2015. Policy shaping with human teachers. In International Joint Conference on Artificial Intelligence (IJCAI 2015). | 1704.03732#45 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 46 | [Chemali and Lezaric 2015] Chemali, J., and Lezaric, A. 2015. Direct policy iteration from demonstrations. In International Joint Conference on Artificial Intelligence (IJCAI).
[Duan et al. 2017] Duan, Y.; Andrychowicz, M.; Stadie, B. C.; Ho, J.; Schneider, J.; Sutskever, I.; Abbeel, P.; and Zaremba, W. 2017. One-shot imitation learning. CoRR abs/1703.07326.
[Finn, Levine, and Abbeel 2016] Finn, C.; Levine, S.; and Abbeel, P. 2016. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning (ICML).
[Gruslys et al. 2017] Gruslys, A.; Gheshlaghi Azar, M.; Bellemare, M. G.; and Munos, R. 2017. The Reactor: A sample-efficient actor-critic architecture. ArXiv e-prints.
[Hester and Stone 2013] Hester, T., and Stone, P. 2013. TEXPLORE: Real-time sample-efficient reinforcement learning for robots. Machine Learning 90(3). | 1704.03732#46 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 47 | [Ho and Ermon 2016] Ho, J., and Ermon, S. 2016. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems (NIPS).
[Hosu and Rebedea 2016] Hosu, I.-A., and Rebedea, T. 2016. Playing Atari games with deep reinforcement learning and human checkpoint replay. In ECAI Workshop on Evaluating General Purpose AI.
[Jaderberg et al. 2016] Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2016. Reinforcement learning with unsupervised auxiliary tasks. CoRR abs/1611.05397.
[Kim et al. 2013] Kim, B.; Farahmand, A.; Pineau, J.; and Precup, D. 2013. Learning from limited demonstrations. In Advances in Neural Information Processing Systems (NIPS).
[Lakshminarayanan, Ozair, and Bengio 2016] Lakshminarayanan, A. S.; Ozair, S.; and Bengio, Y. 2016. Reinforcement learning with few expert demonstrations. In NIPS Workshop on Deep Learning for Action and Interaction. | 1704.03732#47 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[LeCun, Bengio, and Hinton 2015] LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.
[Levine et al. 2016] Levine, S.; Finn, C.; Darrell, T.; and Abbeel, P. 2016. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR) 17:1–40.
[Lipton et al. 2016] Lipton, Z. C.; Gao, J.; Li, L.; Li, X.; Ahmed, F.; and Deng, L. 2016. Efficient exploration for dialog policy learning with deep BBQ network & replay buffer spiking. CoRR abs/1608.05081. | 1704.03732#48 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
[Mnih et al. 2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 1928–1937.
[Ostrovski et al. 2017] Ostrovski, G.; Bellemare, M. G.; van den Oord, A.; and Munos, R. 2017. Count-based exploration with neural density models. CoRR abs/1703.01310. | 1704.03732#49 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[Piot, Geist, and Pietquin 2014a] Piot, B.; Geist, M.; and Pietquin, O. 2014a. Boosted Bellman residual minimization handling expert demonstrations. In European Conference on Machine Learning (ECML).
[Piot, Geist, and Pietquin 2014b] Piot, B.; Geist, M.; and Pietquin, O. 2014b. Boosted and reward-regularized classification for apprenticeship learning. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[Ross, Gordon, and Bagnell 2011] Ross, S.; Gordon, G. J.; and Bagnell, J. A. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics (AISTATS).
[Schaal 1996] Schaal, S. 1996. Learning from demonstration. In Advances in Neural Information Processing Systems (NIPS).
[Schaul et al. 2016] Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2016. Prioritized experience replay. In Proceedings of the International Conference on Learning Representations, volume abs/1511.05952. | 1704.03732#50 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[Shani, Heckerman, and Brafman 2005] Shani, G.; Heckerman, D.; and Brafman, R. I. 2005. An MDP-based recommender system. Journal of Machine Learning Research 6:1265–1295.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; and Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529:484–489.
[Suay et al. 2016] Suay, H. B.; Brys, T.; Taylor, M. E.; and Chernova, S. 2016. Learning from demonstration for shaping through inverse reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS). | 1704.03732#51 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[Subramanian, Jr., and Thomaz 2016] Subramanian, K.; Jr., C. L. I.; and Thomaz, A. 2016. Exploration from demonstration for interactive reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[Sun et al. 2017] Sun, W.; Venkatraman, A.; Gordon, G. J.; Boots, B.; and Bagnell, J. A. 2017. Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction. CoRR abs/1703.01030.
[Sutton and Barto 1998] Sutton, R. S., and Barto, A. G. 1998. Introduction to reinforcement learning. MIT Press.
[Syed and Schapire 2007] Syed, U., and Schapire, R. E. 2007. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems (NIPS).
[Syed, Bowling, and Schapire 2008] Syed, U.; Bowling, M.; and Schapire, R. E. 2008. Apprenticeship learning using linear programming. In International Conference on Machine Learning (ICML). | 1704.03732#52 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
[Taylor, Suay, and Chernova 2011] Taylor, M.; Suay, H.; and Chernova, S. 2011. Integrating reinforcement learning with human demonstrations of varying ability. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[van Hasselt et al. 2016] van Hasselt, H. P.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems (NIPS).
[van Hasselt, Guez, and Silver 2016] van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double Q-learning. In AAAI Conference on Artificial Intelligence (AAAI).
[Wang et al. 2016] Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2016. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning (ICML).
[Watter et al. 2015] Watter, M.; Springenberg, J. T.; Boedecker, J.; and Riedmiller, M. A. 2015. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS). | 1704.03732#53 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
Supplementary Material
Here are the parameters used for the three algorithms. DQfD used all of these parameters, while the other two algorithms only used the applicable parameters (a hedged configuration sketch follows this record).
• Pre-training steps k = 750,000 mini-batch updates.
• N-step return weight λ1 = 1.0.
• Supervised loss weight λ2 = 1.0.
• L2 regularization weight λ3 = 10^-5.
• Expert margin l(aE, a) = 0.8 when a ≠ aE.
• ε-greedy exploration with ε = 0.01, the same value used by Double DQN (van Hasselt, Guez, and Silver 2016).
• Prioritized replay exponent α = 0.4.
• Prioritized replay constants εa = 0.001, εd = 1.0.
• Prioritized replay importance sampling exponent β0 = 0.6 as in (Schaul et al. 2016).
• N-step returns with n = 10.
• Target network update period τ = 10,000 as in (Mnih et al. 2015). | 1704.03732#54 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
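For convenience, the hyperparameters listed in the supplementary chunk above can be collected into a single configuration mapping. This is only an illustrative sketch: the key names are invented for this example, and only the values come from the text.

```python
# Hyperparameter values transcribed from the supplementary material;
# key names are illustrative, not from the authors' code.
DQFD_CONFIG = {
    "pretraining_steps": 750_000,        # mini-batch updates before interaction
    "n_step_loss_weight": 1.0,           # lambda_1
    "supervised_loss_weight": 1.0,       # lambda_2
    "l2_reg_weight": 1e-5,               # lambda_3
    "expert_margin": 0.8,                # l(a_E, a) for a != a_E
    "epsilon_greedy": 0.01,              # same value as Double DQN
    "priority_exponent_alpha": 0.4,
    "priority_constant_eps_a": 0.001,    # epsilon_a
    "priority_constant_eps_d": 1.0,      # epsilon_d
    "importance_sampling_beta0": 0.6,    # as in Schaul et al. 2016
    "n_step": 10,
    "target_update_period": 10_000,      # tau, as in Mnih et al. 2015
}
```

A mapping like this could be passed to a loss function such as the sketch after chunk 43, e.g. `dqfd_loss(..., lambda1=DQFD_CONFIG["n_step_loss_weight"])`.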
1704.03732 | 55 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Alien, Amidar, Assault, Asterix, Asteroids, Atlantis; numeric axis data removed] | 1704.03732#55 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 56 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Bank Heist, Battle Zone, Beam Rider, Bowling, Boxing; numeric axis data removed] | 1704.03732#56 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 57 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Breakout, Chopper Command, Crazy Climber, Defender, Demon Attack, Double Dunk, Enduro; numeric axis data removed] | 1704.03732#57 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 58 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Fishing Derby, Freeway; numeric axis data removed] | 1704.03732#58 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 59 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Gopher, Gravitar, Hero, Ice Hockey, Jamesbond, Kangaroo, Krull; numeric axis data removed] | 1704.03732#59 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 60 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Kangaroo, Krull, Kung Fu Master, Montezuma Revenge, Ms Pacman; numeric axis data removed] | 1704.03732#60 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 61 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Name This Game, Pitfall, Pong, Private Eye, Qbert; numeric axis data removed] | 1704.03732#61 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 62 | [Figure 3 plot residue: per-game training episode returns vs. training iteration for DQfD, Imitation, and PDD DQN; panels in this chunk: Qbert, Riverraid, Road Runner, Seaquest, Solaris, Up N Down, Video Pinball, Yars Revenge; numeric axis data removed] | 1704.03732#62 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 63 | [Figure 3 plot area: y-axis Training Episode Returns, x-axis Training Iteration, legend DQfD / Imitation / PDD DQN; the curve data is not recoverable as text]
Figure 3: On-line rewards of the three algorithms on the 42 Atari games, averaged over 4 trials. Each episode is started with up to 30 random no-op actions. Scores are from the Atari game, regardless of the internal representation of reward used by the agent. | 1704.03732#63 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 64 | [Flattened table of per-game demonstration data and results; reconstructed column headers: Game | Number of Transitions | Number of Episodes | Worst Demo Score | Best Demo Score | DQfD Mean Score | PDD DQN Mean Score | Imitation Mean Score; the remaining header fragments and numeric values follow in extraction order]
Worst Demo Best Demo Number Score Score 29160 9690 2341 1353 2274 1168 18100 4500 18100 14170 22400 10300 7465 900 60000 35000 19844 12594 149 89 15 0 79 17 11300 4700 61600 30600 18700 5150 6190 1800 -14 -22 803 383 20 -10 32 30 22520 2500 13400 2950 99320 35155 1 -4 650 400 36300 12400 13730 8040 25920 8300 34900 32300 55021 31781 19380 11350 47821 3662 0 -12 74456 70375 99450 80700 39710 17240 20200 8400 101120 56510 17840 2840 16080 6580 32420 8409 83523 48361
Transitions Episodes Mean Score Mean Score Mean Score 19133 16790 13224 9525 22801 17516 32389 9075 38665 9991 8438 10475 7710 18937 6421 17409 11855 42058 6388 10239 38632 15377 32907 17585 9050 20984 32581 12989 17949 21896 43571 35347 17719 10899 75472 46233 5574 57453 28552 10421 10051 21334 | 1704.03732#64 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03732 | 65 | 6197.1 2140.4 1880.9 7566.2 3917.6 303374.8 1240.8 41993.7 5401.4 71.0 99.3 275.0 6973.8 136828.2 24558.8 3511.6 -14.3 2199.6 28.6 31.9 12003.4 2796.1 22290.1 -2.0 639.6 13567.3 10344.4 32212.3 0.1 3684.2 8716.8 0.0 16.7 154.3 20693.7 18810.0 51688.4 1862.5 4157.0 82138.5 101339.6 63484.4
Alien Amidar Assault Asterix Asteroids Atlantis Bank Heist Battle Zone Beam Rider Bowling Boxing Breakout Chopper Command Crazy Climber Defender Demon Attack Double Dunk Enduro Fishing Derby Freeway Gopher Gravitar Hero Ice Hockey James Bond Kangaroo Krull Kung Fu Master Montezumaâs Revenge Ms Pacman Name This Game Pitfall Pong Private Eye Q-Bert River Raid Road Runner Seaquest Solaris Up N Down Video Pinball Yarsâ Revenge
5 5 5 5 5 12 7 5 4 5 5 9 5 5 5 5 5 5 4 5 5 5 5 5 5 5 5 5 5 3 5 5 3 5 5 5 5 7 6 4 5 4 | 1704.03732#65 | Deep Q-learning from Demonstrations | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages small sets of demonstration data to
massively accelerate the learning process even from relatively small amounts of
demonstration data and is able to automatically assess the necessary ratio of
demonstration data while learning thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN. | http://arxiv.org/pdf/1704.03732 | Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | cs.AI, cs.LG | Published at AAAI 2018. Previously on arxiv as "Learning from
Demonstrations for Real World Reinforcement Learning" | null | cs.AI | 20170412 | 20171122 | [] |
1704.03073 | 1 | Abstract: Deep learning and reinforcement learning methods have recently been used to solve a variety of problems in continuous control domains. An obvious application of these techniques is dexterous manipulation tasks in robotics which are difficult to solve using traditional control theory or hand-engineered approaches. One example of such a task is to grasp an object and precisely stack it on another. Solving this difficult and practically relevant problem in the real world is an important long-term goal for the field of robotics. Here we take a step towards this goal by examining the problem in simulation and providing models and techniques aimed at solving it. We introduce two extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), a model-free Q-learning based method, which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find control policies that robustly grasp objects and stack them. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
# I. INTRODUCTION | 1704.03073#1 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 2 | # I. INTRODUCTION
Dexterous manipulation is a fundamental challenge in robotics. Researchers have long been seeking a way to enable robots to robustly and flexibly interact with fixed and free objects of different shapes, materials, and surface properties in the context of a broad range of tasks and environmental conditions. Such flexibility is very difficult to achieve with manually designed controllers. The recent resurgence of neural networks and "deep learning" has inspired hope that these methods will be as effective in the control domain as they are for perception. And indeed, in simulation, recent work has used neural networks to learn solutions to a variety of control problems from scratch (e.g. [7, 20, 32, 31, 11, 17]). | 1704.03073#2 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 3 | While the flexibility and generality of learning approaches is promising for robotics, these methods typically require a large amount of data that grows with the complexity of the task. What is feasible on a simulated system, where hundreds of millions of control steps are possible [23], does not necessarily transfer to real robot applications due to unrealistic learning times. One solution to this problem is to restrict the generality of the controller by incorporating task specific knowledge, e.g. in the form of dynamic movement primitives [30], or in the form of strong teaching signals, e.g. kinesthetic teaching of trajectories [24]. Recent works have had some success learning flexible neural network policies directly on real robots (e.g. [18, 5, 39]), but tasks as complex as grasping-and-stacking remain daunting. | 1704.03073#3 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 4 | An important issue for the application of learning methods in robotics is to understand how to make the best use of collected data, which can be expensive to obtain, both in terms of time and money. To keep learning times reasonably low even in complex scenarios, it is crucial to find a practical compromise between the generality of the controller and the necessary restrictions of the task setup. This is the gap that we aim to fill in this paper: exploring the potential of a learning approach that keeps prior assumptions low while keeping data consumption in reasonable bounds. Simultaneously, we are interested in approaches that are broadly applicable, robust, and practical. | 1704.03073#4 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 5 | In this paper we provide a simulation study that investigates the possibility of learning complex manipulation skills end-to-end with a general purpose model-free deep reinforcement learning algorithm. The express goal of this work is to assess the feasibility of performing analogous end-to-end learning experiments on real robotics hardware and to provide guidance with respect to the choice of learning algorithm and experimental setup and the performance that we can hope to achieve. The task which we consider to this end is that of picking up a Lego brick from the table and stacking it onto a second nearby brick using a robotic arm with 9 degrees of freedom (DoF), six in the arm and three for the fingers in the gripper. In addition to having a high-dimensional state and action space, the task exemplifies several of the challenges that are encountered in real-world manipulation problems. Firstly, it involves contact-rich interactions between the robotic arm and two freely moving objects. Secondly it requires mastering several sub-skills (reaching, grasping, and stacking). Each of these sub-skills is challenging in its own right as they require both precision (for instance, successful stacking requires ac- | 1704.03073#5 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 6 | grasping, and stacking). Each of these sub-skills is challenging in its own right as they require both precision (for instance, successful stacking requires accurate alignment of the two bricks) as well as robust generalization over a large state space (e.g. different initial positions of the bricks and the initial configuration of the arm). Finally, there exist non-trivial and long-ranging dependencies between the solutions for different subtasks: for instance, the ability to successfully stack the brick in the later part of the task depends critically on having picked up the brick in a sensible way beforehand. | 1704.03073#6 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 7 | On the algorithm side we build on the Deep Deterministic Policy Gradient (DDPG; [20]), a general purpose model-free reinforcement learning algorithm for continuous action spaces, and extend it in two ways (section V): firstly, we improve the data efficiency of the algorithm by scheduling updates
Fig. 1: Simulation rendering of the Lego task in different completion stages (also corresponding to different subtasks): (a) starting state, (b) reaching, (c) grasping, (also StackInHand starting state) and (d) stacking
of the network parameters independently of interactions with the environment. Secondly, we overcome the computational and experimental bottlenecks of single-machine single-robot learning by introducing a distributed version of DDPG which allows data collection and network training to be spread out over multiple computers and robots.
reward. The latter have been routinely applied in robotics, in part because they straightforwardly handle continuous and high-dimensional action spaces [3] and applications include manipulation [26, 13, 25, 37, 18, 5, 39, 8], locomotion e.g. [16, 21], and a range of other challenges such as helicopter flight [1]. | 1704.03073#7 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
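The first extension described in this chunk, scheduling network updates independently of environment interactions, essentially means running several gradient steps per collected transition. The loop below is a minimal sketch under assumed `env`, `agent`, and `replay` interfaces, not the paper's actual implementation.

```python
def train(env, agent, replay, total_steps=1_000_000, updates_per_step=4,
          min_buffer=10_000, batch_size=256):
    """Decouple data collection from learning: after every environment step,
    draw several mini-batches from replay and update the networks."""
    obs = env.reset()
    for step in range(total_steps):
        action = agent.act(obs)                      # e.g. pi(obs) plus exploration noise
        next_obs, reward, done, info = env.step(action)
        replay.add(obs, action, reward, next_obs, done)
        obs = env.reset() if done else next_obs

        if len(replay) >= min_buffer:
            for _ in range(updates_per_step):        # higher values trade compute for data efficiency
                agent.update(replay.sample(batch_size))
```

The second extension, the distributed variant, runs many such collection loops in parallel and shares the replay data and parameter updates across workers.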
1704.03073 | 8 | We further propose two broadly applicable strategies that allow us to inject prior knowledge into the learning process in order to help reliably find solutions to complex tasks and further reduce the amount of environmental interaction. The first of these strategies is a recipe for designing effective shaping rewards for compositional tasks (section VI), while the second (section VII) uses a suitable bias in the distribution of initial states to achieve an effect akin to a curriculum or a form of apprenticeship learning.
In combination these contributions allow us to reliably learn robust policies for the full task from scratch in less than 10 million environment transitions. This corresponds to less than 10 hours of interaction time on 16 robots, thus entering a regime that no longer seems unrealistic with modern experimental setups. In addition, when states from successful trajectories are used as the start states for learning trials the full task can be learned with 1 million transitions (i.e. less than 1 hour of interaction on 16 robots). To our knowledge our results provide the first demonstration of solving complex manipulation problems involving multiple freely moving objects. They are also encouraging as a sensible lower bound for real-world experiments suggesting that it may indeed be possible to learn such non-trivial manipulation skills directly on real robots. | 1704.03073#8 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 9 | One limitation that has hampered policy search methods is that they can scale poorly with the number of parameters that need to be estimated. This limitation, and other constraints when working with real robotics hardware has led research to focus on the use of manually engineered and restrictive features and movement representations, particularly trajectory-based ones such as spline based dynamic movement primitives. Simplifying the policy space can make learning on real hardware tractable, but it also limits the kinds of problems that can be solved. In order to solve a problem such as picking up and manipulating an object, more expressive function classes are likely to be needed.
The use of rich and flexible function approximators such as neural networks in RL dates back many years, e.g. [38, 35, 12, 10]. In the last few years there has been a resurgence of interest in end-to-end training of neural networks for challenging control problems, and several algorithms, both value and policy focused have been developed and applied to challenging problems including continuous control, e.g. [22, 23, 6, 7, 20, 32, 31, 11, 17]. These methods work well with large neural networks and can learn directly from raw visual input streams. With few exceptions, e.g. [10, 5, 18, 39], they have been considered too data-inefficient for robotics applications.
# II. RELATED WORK | 1704.03073#9 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 10 | # II. RELATED WORK
Reinforcement learning approaches solve tasks through repeated interactions with the environment guided by a reward signal that indicates the success or failure of a trial. A wide variety of techniques have been developed that exploit this idea [34], with a broad distinction often made between value-based and policy search methods. While the former estimate and improve a value function, policy search methods directly optimize the parameters of a policy to maximize cumulative
One exception is the family of guided policy search (GPS) methods [18, 39]. These have recently been applied to several manipulation problems and employ a teacher algorithm to locally optimize trajectories which are then summarized by a neural network policy. GPS algorithms gain data-efficiency by employing aggressive local policy updates and by performing extensive training of their neural network policy before collecting more real-world data. The teacher can use model-based [18] or model-free [39] trajectory optimization. The former can struggle in situations with strong discontinuities in the
dynamics, and both rely on access to a well defined and fully observed state space. | 1704.03073#10 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 11 | Model-free value function approaches offer an alternative way to handle the issue of data-efficiency in robotics. Such approaches enable effective reuse of data and do not require full access to the state space or to a model of the environment. One recent work [5], closely related to the ideas followed in this paper, provides a proof of concept demonstration that value-based methods using neural network approximators can be used for robotic manipulation in the real world. This work applied a Q-learning approach [7] to a door opening task in which a robotic arm fitted with an unactuated hook needed to reach to a handle and pull a door to a given angle. The starting state of the arm and door were fixed across trials and the reward structure was smooth and structured, with one term expressing the distance from the hook to the handle and a second term expressing the distance of the door to the desired angle. This task was learned in approximately 2 hours across 2 robots pooling their experience into a shared replay buffer. This work thus made use of a complementary solution to the need for large amounts of interaction data: the use of experimental rigs that allow large scale data collection, e.g. [27], including the use of | 1704.03073#11 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
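The shared-replay setup mentioned here, with several robots pooling experience into one buffer, can be sketched as a thread-safe buffer that collection workers append to while a single learner samples from it. The class below is an illustrative stand-in, not the implementation used in the cited work.

```python
import random
import threading
from collections import deque

class SharedReplayBuffer:
    """Minimal thread-safe replay buffer shared by several data-collection workers."""
    def __init__(self, capacity=1_000_000):
        self._data = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def add(self, transition):
        with self._lock:
            self._data.append(transition)

    def sample(self, batch_size):
        with self._lock:
            k = min(batch_size, len(self._data))
            indices = random.sample(range(len(self._data)), k)
            return [self._data[i] for i in indices]

    def __len__(self):
        with self._lock:
            return len(self._data)
```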
1704.03073 | 12 | solution to the need for large amounts of interaction data: the use of experimental rigs that allow large scale data collection, e.g. [27], including the use of several robots from which experience are gathered in parallel [19, 5, 39]. This can be combined with single machine or distributed training depending on whether the bottleneck is primarily one of data collection or also one of network training [23]. | 1704.03073#12 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 13 | Finally, the use of demonstration data has played an important role in robot learning, both as a means to obtain suitable cost functions [2, 14, 4, 8] but also to bootstrap and thus speed up learning. For the latter, kinesthetic teaching is widely used [26, 13, 25, 39]. It integrates naturally with trajectory-based movement representations but the need for a human operator to be able to guide the robot through the full movement can be limiting. Furthermore, when the policy representation is not trajectory based (e.g. direct torque control with neural networks) the use of human demonstration trajectories may be less straightforward (e.g. since the associated controls are not available).
# III. BACKGROUND
In this section we briefly formalize the learning problem, summarize the DDPG algorithm, and explain its relationship to several other Q-function based reinforcement learning (RL) algorithms. | 1704.03073#13 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 14 | The RL problem consists of an agent interacting with an environment in a sequential manner to maximize the expected sum of rewards. At time t the agent observes the state x_t of the system and produces a control u_t = π(x_t; θ) according to policy π with parameters θ. This leads the environment to transition to a new state x_{t+1} according to the dynamics x_{t+1} ∼ p(·|x_t, u_t), and the agent receives a reward r_t = r(x_t, u_t). The goal is to maximize the expected sum of discounted rewards J(θ) = E_{τ∼ρ_θ}[ Σ_{t≥1} γ^{t−1} r(x_t, u_t) ], where ρ_θ is the distribution over trajectories τ = (x_0, u_0, x_1, u_1, ...) induced by the current policy: ρ(τ) = p(x_0) Π_{t≥1} p(x_t | x_{t−1}, π(x_{t−1}; θ)).
DPG [33] is a policy gradient algorithm for continuous action spaces that improves the deterministic policy function π via backpropagation of the action-value gradient from a learned approximation to the Q-function. Specifically, DPG maintains a parametric approximation Q(x_t, u_t; φ) to the action value function Q^π(x_t, u_t) associated with π and φ is chosen to minimize | 1704.03073#14 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
1704.03073 | 15 | E_{(x_t, u_t, x_{t+1}) ∼ ρ̂} [ (Q(x_t, u_t; φ) − y_t)² ]   (1)
where y_t = r(x_t, u_t) + γ Q(x_{t+1}, π(x_{t+1})). ρ̂ is usually close to the marginal transition distribution induced by π but often not identical. For instance, during learning u_t may be chosen to be a noisy version of π(x_t; θ), e.g. u_t = π(x_t; θ) + ε where ε ∼ N(0, σ²), and ρ̂ is then the transition distribution induced by this noisy policy.
The policy parameters θ are then updated according to
Δθ ∝ E_{(x, u) ∼ ρ̂} [ ∂Q(x, u; φ)/∂u · ∂π(x; θ)/∂θ ]   (2)
DDPG is an improvement of the original DPG algorithm adding experience replay and target networks: Experience is collected into a buffer and updates to θ and φ (eqs. 1, 2) are computed using mini-batch updates with random samples from this buffer. Furthermore, a second set of "target networks" is maintained with parameters θ′ and φ′. These are used to compute y_t in eqn. (1) and their parameters are slowly updated towards the current parameters θ, φ. Both measures significantly improve the stability of DDPG. | 1704.03073#15 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | [
{
"id": "1504.00702"
},
{
"id": "1610.00633"
}
] |
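A compact sketch of how updates (1) and (2) are typically realized with experience replay and target networks, in the spirit of the DDPG description above. The optimizers, soft-update rate, and omission of termination handling are simplifying assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.001):
    """One DDPG step on a replayed mini-batch: fit the critic to the target of eq. (1),
    follow the deterministic policy gradient of eq. (2), then soft-update the targets."""
    obs, act, rew, next_obs = batch                  # rew shaped like the critic output

    with torch.no_grad():                            # y_t = r + gamma * Q'(x_{t+1}, pi'(x_{t+1}))
        y = rew + gamma * target_critic(next_obs, target_actor(next_obs))
    critic_loss = F.mse_loss(critic(obs, act), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Maximizing Q(x, pi(x)) backpropagates dQ/du through the actor, as in eq. (2).
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Target networks slowly track the online networks: p' <- tau * p + (1 - tau) * p'.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)
```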