Deep Reinforcement Learning that Matters
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger
arXiv:1709.06560 (cs.LG, stat.ML), http://arxiv.org/pdf/1709.06560. Accepted to the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018. Submitted 19 September 2017; last revised 30 January 2019.

Abstract: In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.

[Table 2 data: final average return ± standard error for each value-function architecture (e.g. 100,50,25) and activation (tanh, ReLU, Leaky ReLU) permutation across algorithms and environments; the per-cell layout is not recoverable from this extraction. See the Table 2 caption that follows.]
Table 2: Results for our value function (Q or V) architecture permutations across various implementations and algorithms. Final average ± standard error across 5 trials of returns across the last 100 trajectories after 2M training samples. For ACKTR, we use ELU activations instead of leaky ReLU.
[Figure 4 panels: average return vs. timesteps for the HalfCheetah, Hopper, and Swimmer environments; the plotted curves are not recoverable from this extraction.]
Figure 4: Performance of several policy gradient algorithms across benchmark MuJoCo environment suites.
| Environment | DDPG | ACKTR | TRPO | PPO |
|---|---|---|---|---|
| HalfCheetah-v1 | 5037 (3664, 6574) | 3888 (2288, 5131) | 1254.5 (999, 1464) | 3043 (1920, 4165) |
| Hopper-v1 | 1632 (607, 2370) | 2546 (1875, 3217) | 2965 (2854, 3076) | 2715 (2589, 2847) |
| Walker2d-v1 | 1582 (901, 2174) | 2285 (1246, 3235) | 3072 (2957, 3183) | 2926 (2514, 3361) |
| Swimmer-v1 | 31 (21, 46) | 50 (42, 55) | 214 (141, 287) | 107 (101, 118) |
Table 3: Bootstrap mean and 95% confidence bounds for a subset of environment experiments. 10k bootstrap iterations and the pivotal method were used.
[Figure: "HalfCheetah-v1 (TRPO, Different Random Seeds)", average return vs. timesteps for two "Random Average (5 runs)" curves that differ only in the random seeds used; the plotted curves are not recoverable from this extraction.]
Results We perform 10 experiment trials, for the same hyperparameter configuration, only varying the random seed across all 10 trials. We then split the trials into two sets of 5 and average these two groupings together. As shown in Figure 5, we find that the performance of algorithms can be drastically different. We demonstrate that the variance between runs is enough to create statistically different distributions just from varying random seeds. Unfortunately, in recent reported results, it is not uncommon for the top-N trials to be selected from among several trials (Wu et al. 2017; Mnih et al. 2016) or averaged over only a small number of trials (N < 5) (Gu et al. 2017; Wu et al. 2017). Our experiment with random seeds shows that this can be potentially misleading. Particularly for HalfCheetah, it is possible to get learning curves that do not fall within the same distribution at all, just by averaging different runs with the same hyperparameters, but different random seeds. While there can be no specific number of trials specified as a recommendation, it is possible that power analysis methods can be used to give a general idea of how many are needed, as we will discuss later. However, more investigation is needed to answer this open problem.
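A minimal sketch of this seed-sensitivity check, assuming the ten per-seed final evaluation returns have already been collected; the array values and the use of Welch's t-test are illustrative assumptions, not the paper's exact analysis code:

```python
import numpy as np
from scipy import stats

# Hypothetical final evaluation returns for 10 runs that differ only in random seed.
final_returns = np.array([5200., 4900., 3100., 5600., 2800.,
                          4700., 3300., 5100., 2600., 4800.])

# Split the 10 trials into two arbitrary groups of 5, as in the experiment above.
group_a, group_b = final_returns[:5], final_returns[5:]

print("group A mean: %.1f +/- %.1f" % (group_a.mean(), group_a.std(ddof=1)))
print("group B mean: %.1f +/- %.1f" % (group_b.mean(), group_b.std(ddof=1)))

# Welch's t-test: a low p-value would suggest the two seed groups look like
# different algorithms even though only the random seeds changed.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print("t = %.3f, p = %.3f" % (t_stat, p_value))
```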
Environments
How do the environment properties affect variability in reported RL algorithm performance?
To assess how the choice of evaluation environment can affect the presented results, we use our aforementioned default set of hyperparameters across our chosen testbed of algorithms and investigate how well each algorithm performs across an extended suite of continuous control tasks. For these experiments, we use the following environments from OpenAI Gym: Hopper-v1, HalfCheetah-v1, Swimmer-v1 and Walker2d-v1. The choice of environment often plays an important role in demonstrating how well a new proposed algorithm performs against baselines. In continuous control tasks, the environments often differ in their stochasticity, trajectory lengths, and dynamic properties. We demonstrate that, as a result of these differences, algorithm performance can vary across environments and the best performing algorithm across all environments is not always clear. Thus it is increasingly important to present results for a wide range of environments and not only pick those which show a novel work outperforming other methods.
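For illustration, a sketch of this kind of sweep under the classic (pre-0.26) OpenAI Gym API with the MuJoCo -v1 tasks installed: it evaluates one fixed policy on each environment in the suite. The `evaluate` helper, the random stand-in policy, and the episode count are hypothetical scaffolding, not the paper's experimental harness:

```python
import gym
import numpy as np

ENV_IDS = ["Hopper-v1", "HalfCheetah-v1", "Swimmer-v1", "Walker2d-v1"]

def evaluate(make_policy, env_id, episodes=10):
    """Average undiscounted return of a fixed policy on one environment (classic Gym API)."""
    env = gym.make(env_id)
    policy = make_policy(env)  # bind the policy to this environment's spaces
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    env.close()
    return np.mean(returns), np.std(returns)

# Stand-in policy factory that samples random actions; a real comparison would return the
# trained DDPG/TRPO/PPO/ACKTR policy obtained under the fixed hyperparameter configuration.
random_policy = lambda env: (lambda obs: env.action_space.sample())

for env_id in ENV_IDS:
    mean_ret, std_ret = evaluate(random_policy, env_id)
    print("%-16s %8.1f +/- %6.1f" % (env_id, mean_ret, std_ret))
```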
Results As shown in Figure 4, in environments with stable dynamics (e.g. HalfCheetah-v1), DDPG outperforms all other algorithms. However, as dynamics become more unstable (e.g. in Hopper-v1), performance gains rapidly diminish. As DDPG is an off-policy method, exploration noise can cause sudden failures in unstable environments. Therefore, learning a proper Q-value estimation of expected returns is difficult, particularly since many exploratory paths will result in failure. Since failures in such tasks are characterized by shortened trajectories, a local optimum in this case would be simply to survive until the maximum length of the trajectory (corresponding to one thousand timesteps and a similar reward due to a survival bonus in the case of Hopper-v1). As can be seen in Figure 4, DDPG with Hopper does exactly this. This is a clear example where showing only the favourable and stable HalfCheetah when reporting DDPG-based experiments would be unfair.
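As a rough sanity check of that local optimum, a one-line calculation under the assumption that Hopper-v1 grants an alive bonus of 1.0 per timestep (an assumption about the Gym reward shaping, not a number stated in the paper):

```python
# Hypothetical return of a "just survive, don't move" Hopper-v1 policy.
alive_bonus_per_step = 1.0      # assumed Gym survival bonus per timestep
max_episode_steps = 1000        # standard Hopper-v1 horizon
print("return from surviving alone:", alive_bonus_per_step * max_episode_steps)  # ~1000
```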
Furthermore, let us consider the Swimmer-v1 environment shown in Figure 4. Here, TRPO significantly outperforms all other algorithms. Due to the dynamics of the water-like environment, a local optimum for the system is to curl up and flail without proper swimming. However, this corresponds to a return of roughly 130. By reaching a local optimum, learning curves can indicate successful optimization of the policy over time, when in reality the returns achieved are not qualitatively representative of learning the desired behaviour, as demonstrated in video replays of the learned policy (footnote 5). Therefore, it is important to show not only returns but demonstrations of the learned policy in action. Without understanding what the evaluation returns indicate, it is possible that misleading results can be reported which in reality only optimize local optima rather than reaching the desired behaviour.
Codebases
Are commonly used baseline implementations comparable?
In many cases, authors implement their own versions of baseline algorithms to compare against. We investigate the OpenAI baselines implementation of TRPO as used in (Schulman et al. 2017), the original TRPO code (Schulman et al. 2015a), and the rllab (Duan et al. 2016) Tensorflow implementation of TRPO. We also compare the rllab Theano (Duan et al. 2016), rllabplusplus (Gu et al. 2016), and OpenAI baselines (Plappert et al. 2017) implementations of DDPG. Our goal is to draw attention to the variance due to implementation details across algorithms. We run a subset of our architecture experiments with the OpenAI baselines implementations, using the same hyperparameters as in those experiments (footnote 6).
Results We find that implementation differences which are often not reflected in publications can have dramatic impacts on performance. This can be seen for our final evaluation performance after training on 2M samples in Tables 1 and 2, as well as in a sample comparison in Figure 6.
Footnote 5: https://youtu.be/lKpUQYjgm80
Footnote 6: Differences are discussed in the supplemental (e.g. use of different optimizers for the value function baseline). Leaky ReLU activations are left out to narrow the experiment scope.
[Figure 6 panels: "HalfCheetah-v1 (TRPO, Codebase Comparison)" and "HalfCheetah-v1 (DDPG, Codebase Comparison)", average return vs. timesteps; the plotted curves are not recoverable from this extraction.]
Figure 6: TRPO codebase comparison using our default set of hyperparameters (as used in other experiments).
This demonstrates the necessity that implementation details be enumerated, that codebases be packaged with publications, and that the performance of baseline experiments in novel works match the original baseline publication code.
Reporting Evaluation Metrics
In this section we analyze some of the evaluation metrics commonly used in the reinforcement learning literature. In practice, RL algorithms are often evaluated by simply presenting plots or tables of average cumulative reward (average returns) and, more recently, of maximum reward achieved over a fixed number of timesteps. Due to the unstable nature of many of these algorithms, simply reporting the maximum returns is typically inadequate for fair comparison; even reporting average returns can be misleading as the range of performance across seeds and trials is unknown. Alone, these may not provide a clear picture of an algorithm's range of performance. However, when combined with confidence intervals, this may be adequate to make an informed decision given a large enough number of trials. As such, we investigate using the bootstrap and significance testing as in ML (Kohavi and others 1995; Bouckaert and Frank 2004; Nadeau and Bengio 2000) to evaluate algorithm performance.
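As a small illustration of why the choice of summary statistic matters, the snippet below aggregates invented final returns from five seed-varied trials and prints the maximum, the mean with its spread, and the raw range side by side:

```python
import numpy as np

# Hypothetical final average returns from 5 trials of the same algorithm, only seeds varied.
trial_returns = np.array([3100., 4800., 2600., 5200., 3400.])

print("max return  :", trial_returns.max())                       # flatters the algorithm
print("mean return : %.0f +/- %.0f (std across trials)"
      % (trial_returns.mean(), trial_returns.std(ddof=1)))
print("range       : (%.0f, %.0f)" % (trial_returns.min(), trial_returns.max()))
```

With so few trials, the raw range is only a crude stand-in for a confidence interval; the bootstrap procedure discussed below gives a better-behaved one.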
Online View vs. Policy Optimization An important distinction when reporting results is the online learning view versus the policy optimization view of RL. In the online view, an agent will optimize the returns across the entire learning process and there is not necessarily an end to the agent's trajectory. In this view, evaluations can use the average cumulative rewards across the entire learning process (balancing exploration and exploitation) as in (Hofer and Gimbert 2016), or can possibly use offline evaluation as in (Mandel et al. 2016). The alternate view corresponds to policy optimization, where evaluation is performed using a target policy in an offline manner. In the policy optimization view it is important to
Confidence Bounds The sample bootstrap has been a popular method to gain insight into a population distribution from a smaller sample (Efron and Tibshirani 1994). Bootstrap methods are particularly popular for A/B testing, and we can borrow some ideas from this field. Generally a bootstrap estimator is obtained by resampling with replacement many times to generate a statistically relevant mean and confidence bound. Using this technique, we can gain insight into the 95% confidence interval of the results from our section on environments. Table 3 shows the bootstrap mean and 95% confidence bounds on our environment experiments. Confidence intervals can vary wildly between algorithms and environments. We find that TRPO and PPO are the most stable, with small confidence bounds from the bootstrap. In cases where confidence bounds are exceedingly large, it may be necessary to run more trials (i.e. increase the sample size).
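A minimal sketch of such an interval, implementing the pivotal (basic) bootstrap described in the Table 3 caption; the input returns are invented and the helper name is ours:

```python
import numpy as np

def pivotal_bootstrap_ci(samples, n_boot=10000, alpha=0.05, seed=None):
    """Bootstrap mean with a pivotal (basic) (1 - alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    point = samples.mean()
    # Resample with replacement n_boot times and record the mean of each resample.
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    boot_means = samples[idx].mean(axis=1)
    lo_q, hi_q = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    # The pivotal interval reflects the bootstrap quantiles around the point estimate.
    return point, (2 * point - hi_q, 2 * point - lo_q)

# Hypothetical final returns from 5 trials of one algorithm on one environment.
returns = [5037., 6574., 3664., 4890., 5120.]
mean, (lo, hi) = pivotal_bootstrap_ci(returns, seed=0)
print("bootstrap mean %.0f, 95%% CI (%.0f, %.0f)" % (mean, lo, hi))
```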
Power Analysis Another method to determine if the sample size must be increased is bootstrap power analysis (Tufféry 2011; Yuan and Hayashi 2003). If we use our sample and give it some uniform lift (for example, scaling uniformly by 1.25), we can run many bootstrap simulations and determine what percentage of the simulations result in statistically significant values with the lift. If there is a small percentage of significant values, a larger sample size is needed (more trials must be run). We do this across all environment experiment trial runs and indeed find that, in more unstable settings, the bootstrap power percentage leans towards insignificant results in the lift experiment. Conversely, in stable trials (e.g. TRPO on Hopper-v1) with a small sample size, the lift experiment shows that no more trials are needed to generate significant comparisons. These results are provided in the supplemental material.
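One way the lift-based power check can be implemented, under the assumption that significance on each simulated resample is judged with a two-sample Welch t-test (the paper does not spell out this detail, and the input values below are invented):

```python
import numpy as np
from scipy import stats

def bootstrap_power(samples, lift=1.25, n_sim=1000, alpha=0.05, seed=None):
    """Fraction of bootstrap simulations in which the lifted sample tests as
    significantly different from the original sample."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    lifted = samples * lift          # uniform lift, e.g. scaling by 1.25
    n = len(samples)
    significant = 0
    for _ in range(n_sim):
        a = rng.choice(samples, size=n, replace=True)
        b = rng.choice(lifted, size=n, replace=True)
        _, p = stats.ttest_ind(a, b, equal_var=False)
        significant += p < alpha
    return significant / n_sim

# Hypothetical per-trial returns; a low power value suggests more trials are needed.
stable_returns   = [2965., 2854., 3076., 2990., 3010.]   # tight spread
unstable_returns = [5037., 1632., 3664., 6574., 2370.]   # wide spread
print("power (stable)  :", bootstrap_power(stable_returns, seed=0))
print("power (unstable):", bootstrap_power(unstable_returns, seed=0))
```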
Significance An important factor when deciding on an RL algorithm to use is the significance of the reported gains based on a given metric. Several works have investigated the use of significance metrics to assess the reliability of reported evaluation metrics in ML. However, few works in reinforcement learning assess the significance of reported metrics. Based on our experimental results which indicate that algorithm performance can vary wildly based simply on perturbations of random seeds, it is clear that some metric is necessary for assessing the significance of algorithm performance gains and the confidence of reported metrics. While more research and investigation is needed to determine the best metrics for assessing RL algorithms, we investigate an initial set of metrics based on results from ML.
In supervised learning, k-fold t-test, corrected resampled t-test, and other significance metrics have been discussed when comparing machine learning results (Bouckaert and Frank 2004; Nadeau and Bengio 2000). However, the assumptions pertaining to the underlying data with corrected metrics do not necessarily apply in RL. Further work is needed to investigate proper corrected significance tests for RL. Nonetheless, we explore several significance measures which give insight
into whether a novel algorithm is truly performing as the state-of-the-art. We consider the simple 2-sample t-test (sorting all final evaluation returns across N random trials with different random seeds); the Kolmogorov-Smirnov test (Wilcox 2005); and bootstrap percent differences with 95% confidence intervals. All calculated metrics can be found in the supplemental. Generally, we find that the significance values match up to what is to be expected. Take, for example, comparing Walker2d-v1 performance of ACKTR vs. DDPG. ACKTR performs slightly better, but this performance is not significant due to the overlapping confidence intervals of the two: t = 1.03, p = 0.334, KS = 0.40, p = 0.697, bootstrapped percent difference 44.47% (-80.62%, 111.72%).
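A sketch of how these three checks can be computed for two algorithms' per-seed final returns; the return values are invented, so the printed numbers will not reproduce the ACKTR vs. DDPG figures quoted above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical final returns across 5 seeds for two algorithms on one environment.
acktr = np.array([2285., 1246., 3235., 2100., 2560.])
ddpg  = np.array([1582.,  901., 2174., 1400., 1850.])

# 1) Two-sample Welch t-test on the final evaluation returns.
t_stat, t_p = stats.ttest_ind(acktr, ddpg, equal_var=False)

# 2) Kolmogorov-Smirnov test comparing the two return distributions.
ks_stat, ks_p = stats.ks_2samp(acktr, ddpg)

# 3) Bootstrapped percent difference of means with a 95% percentile interval.
diffs = []
for _ in range(10000):
    a = rng.choice(acktr, size=len(acktr), replace=True)
    b = rng.choice(ddpg, size=len(ddpg), replace=True)
    diffs.append(100.0 * (a.mean() - b.mean()) / abs(b.mean()))
lo, hi = np.percentile(diffs, [2.5, 97.5])

print("t = %.2f, p = %.3f" % (t_stat, t_p))
print("KS = %.2f, p = %.3f" % (ks_stat, ks_p))
print("percent difference %.1f%% (%.1f%%, %.1f%%)" % (np.mean(diffs), lo, hi))
```

A percent-difference interval that straddles zero, as in the quoted ACKTR vs. DDPG comparison, indicates that the apparent gain is not significant.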
Discussion and Conclusion
Through experimental methods focusing on PG methods for continuous control, we investigate problems with reproducibility in deep RL. We find that both intrinsic (e.g. random seeds, environment properties) and extrinsic sources (e.g. hyperparameters, codebases) of non-determinism can contribute to difficulties in reproducing baseline algorithms. Moreover, we find that highly varied results due to intrinsic sources bolster the need for using proper significance analysis. We propose several such methods and show their value on a subset of our experiments.
What recommendations can we draw from our experiments?
Based on our experimental results and investigations, we can provide some general recommendations. Hyperparameters can have significantly different effects across algorithms and environments. Thus it is important to find the working set which at least matches the original reported performance of baseline algorithms through standard hyperparameter searches. Similarly, new baseline algorithm implementations used for comparison should match the original codebase results if available. Overall, due to the high variance across trials and random seeds of reinforcement learning algorithms, many trials must be run with different random seeds when comparing performance. Unless random seed selection is explicitly part of the algorithm, averaging multiple runs over different random seeds gives insight into the population distribution of the algorithm performance on an environment. Similarly, due to these effects, it is important to perform proper significance testing to determine if the higher average returns are in fact representative of better performance.
We highlight several forms of significance testing and find that they give generally expected results when taking confidence intervals into consideration. Furthermore, we demonstrate that bootstrapping and power analysis are possible ways to gain insight into the number of trial runs necessary to make an informed decision about the significance of algorithm performance gains. In general, however, the most important step to reproducibility is to report all hyperparameters, implementation details, experimental setup, and evaluation methods for both baseline comparison methods and novel work. Without the publication of implementations and related details, wasted effort on reproducing state-of-the-art works will plague the community and slow down progress.
What are possible future lines of investigation?
Due to the significant effects of hyperparameters (particularly reward scaling), another possibly important line of future investigation is in building hyperparameter agnostic algorithms. Such an approach would ensure that there is no unfairness introduced from external sources when comparing algorithms agnostic to parameters such as reward scale, batch size, or network structure. Furthermore, while we investigate an initial set of significance metrics here, they may not be the best fit for comparing RL algorithms. Several works have begun investigating policy evaluation methods for the purposes of safe RL (Thomas and Brunskill 2016; Thomas, Theocharous, and Ghavamzadeh 2015), but further work is needed in significance testing and statistical analysis. Similar lines of investigation to (Nadeau and Bengio 2000; Bouckaert and Frank 2004) would be helpful to determine the best methods for evaluating performance gain significance.
How can we ensure that deep RL matters?
We discuss many different factors affecting reproducibility of RL algorithms. The sensitivity of these algorithms to changes in reward scale, environment dynamics, and random seeds can be considerable and varies between algorithms and settings. Since benchmark environments are proxies for real-world applications to gauge generalized algorithm performance, perhaps more emphasis should be placed on the applicability of RL algorithms to real-world tasks. That is, as there is often no clear winner among all benchmark environments, perhaps recommended areas of application should be demonstrated along with benchmark environment results when presenting a new algorithm. Maybe new methods should be answering the question: in what setting would this work be useful? This is something that is addressed for machine learning in (Wagstaff 2012) and may warrant more discussion for RL. As a community, we must not only ensure reproducible results with fair comparisons, but we must also consider what are the best ways to demonstrate that RL continues to matter.
Acknowledgements
We thank NSERC, CIFAR, the Open Philanthropy Project, and the AWS Cloud Credits for Research Program.
1709.06560 | 44 | Acknowledgements We thank NSERC, CIFAR, the Open Philanthropy Project, and the AWS Cloud Credits for Research Program.
References
Bouckaert, R. R., and Frank, E. 2004. Evaluating the replicability of significance tests for comparing learning algorithms. In PAKDD, 3–12. Springer.
Bouckaert, R. R. 2004. Estimating replicability of classifier learning experiments. In Proceedings of the 21st International Conference on Machine Learning (ICML).
Boulesteix, A.-L.; Lauer, S.; and Eugster, M. J. 2013. A plea for neutral comparison studies in computational sciences. PloS ONE 8(4):e61562.
Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540.
Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; and Abbeel, P. 2016. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML).
Efron, B., and Tibshirani, R. J. 1994. An introduction to the bootstrap. CRC Press.
Glorot, X., and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256.
Gu, S.; Lillicrap, T.; Ghahramani, Z.; Turner, R. E.; and Levine, S. 2016. Q-Prop: Sample-efficient policy gradient with an off-policy critic. arXiv preprint arXiv:1611.02247.
Gu, S.; Lillicrap, T.; Ghahramani, Z.; Turner, R. E.; Schölkopf, B.; and Levine, S. 2017. Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning. arXiv preprint arXiv:1706.00387.
Hofer, L., and Gimbert, H. 2016. Online reinforcement learning for real-time exploration in continuous state and action Markov decision processes. arXiv preprint arXiv:1612.03780.
Islam, R.; Henderson, P.; Gomrokchi, M.; and Precup, D. 2017. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. ICML Reproducibility in Machine Learning Workshop.
Kohavi, R., et al. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, volume 14.
LeCun, Y. A.; Bottou, L.; Orr, G. B.; and Müller, K.-R. 2012. Efficient backprop. In Neural Networks: Tricks of the Trade. Springer.
Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015a. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015b. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Machado, M. C.; Bellemare, M. G.; Talvitie, E.; Veness, J.; Hausknecht, M.; and Bowling, M. 2017. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. arXiv preprint arXiv:1709.06009.
Mandel, T.; Liu, Y.-E.; Brunskill, E.; and Popovic, Z. 2016. Offline evaluation of online reinforcement learning algorithms. In AAAI.
Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 1928–1937.
Nadeau, C., and Bengio, Y. 2000. Inference for the generalization error. In Advances in Neural Information Processing Systems.
Plappert, M.; Houthooft, R.; Dhariwal, P.; Sidor, S.; Chen, R.; Chen, X.; Asfour, T.; Abbeel, P.; and Andrychowicz, M. 2017. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905.
Rajeswaran, A.; Lowrey, K.; Todorov, E.; and Kakade, S. 2017. Towards generalization and simplicity in continuous control. arXiv preprint arXiv:1703.02660.
Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. 2015a. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML).
Schulman, J.; Moritz, P.; Levine, S.; Jordan, M.; and Abbeel, P. 2015b. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Silva, V. d. N., and Chaimowicz, L. 2017. MOBA: A new arena for game AI. arXiv preprint arXiv:1705.10443.
Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.
Stadie, B. C.; Abbeel, P.; and Sutskever, I. 2017. Third-person imitation learning. arXiv preprint arXiv:1703.01703.
Stodden, V.; Leisch, F.; and Peng, R. D. 2014. Implementing reproducible research. CRC Press.
Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems.
Thomas, P., and Brunskill, E. 2016. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, 2139–2148.
Thomas, P. S.; Theocharous, G.; and Ghavamzadeh, M. 2015. High-confidence off-policy evaluation. In AAAI.
Todorov, E.; Erez, T.; and Tassa, Y. 2012. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012, 5026–5033.
Tufféry, S. 2011. Data mining and statistics for decision making, volume 2. Wiley Chichester.
van Hasselt, H. P.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems, 4287–4295.
Vaughan, R., and Wawerla, J. 2012. Publishing identifiable experiment code and configuration is important, good and easy. arXiv preprint arXiv:1204.2235.
Vincent, P.; de Brébisson, A.; and Bouthillier, X. 2015. Efficient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Systems, 1108–1116.
Vinyals, O.; Ewalds, T.; Bartunov, S.; Georgiev, P.; Vezhnevets, A. S.; Yeo, M.; Makhzani, A.; Küttler, H.; Agapiou, J.; Schrittwieser, J.; et al. 2017. StarCraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782.
Wagstaff, K. 2012. Machine learning that matters. arXiv preprint arXiv:1206.4656.
Whiteson, S.; Tanner, B.; Taylor, M. E.; and Stone, P. 2011. Protecting against evaluation overfitting in empirical reinforcement learning. In 2011 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning, ADPRL 2011, Paris, France, April 12-14, 2011, 120–127.
Wilcox, R. 2005. Kolmogorov–Smirnov test. Encyclopedia of Biostatistics.
Wu, Y.; Mansimov, E.; Liao, S.; Grosse, R.; and Ba, J. 2017. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. arXiv preprint arXiv:1708.05144.
Xu, B.; Wang, N.; Chen, T.; and Li, M. 2015. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853.
Yuan, K.-H., and Hayashi, K. 2003. Bootstrap approach to inference and power analysis based on three test statistics for covariance structure models.
# Supplemental Material
In this supplemental material, we include a detailed review of the experiment configurations of related work using policy gradient methods on continuous control MuJoCo (Todorov, Erez, and Tassa 2012) environment tasks from OpenAI Gym (Brockman et al. 2016). We include a detailed list of the hyperparameters and reported metrics typically used in the policy gradient literature in deep RL. We also include all our experimental results with the baseline algorithms DDPG (Lillicrap et al. 2015b), TRPO (Schulman et al. 2015a), PPO (Schulman et al. 2017), and ACKTR (Wu et al. 2017), as discussed in the paper. Our experimental results include figures with different hyperparameters (network architectures, activation functions) to highlight the differences these can have across algorithms and environments. Finally, as discussed in the paper, we include a discussion of significance metrics and show how these metrics can be useful for evaluating deep RL algorithms.
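As a concrete illustration of the kind of significance reporting discussed here, the sketch below computes the mean, standard error, and a percentile-bootstrap confidence interval of final average returns across random seeds. It is a minimal standalone example, not code from our experiments; the return values are placeholders and the function names are our own.

```python
import numpy as np

def mean_and_standard_error(returns):
    """Mean and standard error of final average returns across random seeds."""
    returns = np.asarray(returns, dtype=float)
    return returns.mean(), returns.std(ddof=1) / np.sqrt(len(returns))

def bootstrap_ci(returns, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean return."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    # Resample the per-seed returns with replacement and recompute the mean.
    samples = rng.choice(returns, size=(n_boot, len(returns)), replace=True)
    means = samples.mean(axis=1)
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

if __name__ == "__main__":
    # Placeholder numbers standing in for final average returns of 5 seeds.
    algo_a = [3025.0, 2845.0, 3110.0, 2710.0, 2980.0]
    algo_b = [2650.0, 2895.0, 2500.0, 2780.0, 2600.0]
    for name, rets in [("algo_a", algo_a), ("algo_b", algo_b)]:
        m, se = mean_and_standard_error(rets)
        lo, hi = bootstrap_ci(rets)
        print(f"{name}: {m:.1f} +/- {se:.1f} (95% bootstrap CI [{lo:.1f}, {hi:.1f}])")
```

Overlapping confidence intervals of this kind are one simple signal that an apparent improvement may not be significant.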
# Literature Reviews
# Hyperparameters
In this section, we include a list of the hyperparameters reported in related literature, as shown in Table 4. Our analysis shows that there is often no consistency in the network architectures and activation functions used in related literature. As shown in the paper and in our experimental results in later sections, we find, however, that these hyperparameters can have a significant effect on the performance of algorithms across the benchmark environments typically used.
Table 4: Evaluation hyperparameters of baseline algorithms reported in related literature.
| Related Work (Algorithm) | Policy Network | Policy Network Activation | Value Network | Value Network Activation | Reward Scaling | Batch Size |
|---|---|---|---|---|---|---|
| DDPG | 64x64 | ReLU | 64x64 | ReLU | 1.0 | 128 |
| TRPO | 64x64 | TanH | 64x64 | TanH | - | 5k |
| PPO | 64x64 | TanH | 64x64 | TanH | - | 2048 |
| ACKTR | 64x64 | TanH | 64x64 | ELU | - | 2500 |
| Q-Prop (DDPG) | 100x50x25 | TanH | 100x100 | ReLU | 0.1 | 64 |
| Q-Prop (TRPO) | 100x50x25 | TanH | 100x100 | ReLU | - | 5k |
| IPG (TRPO) | 100x50x25 | TanH | 100x100 | ReLU | - | 10k |
| Param Noise (DDPG) | 64x64 | ReLU | 64x64 | ReLU | - | 128 |
| Param Noise (TRPO) | 64x64 | TanH | 64x64 | TanH | - | 5k |
| Benchmarking (DDPG) | 400x300 | ReLU | 400x300 | ReLU | 0.1 | 64 |
| Benchmarking (TRPO) | 100x50x25 | TanH | 100x50x25 | TanH | - | 25k |
# Reported Results on Benchmarked Environments
We then demonstrate how experimentally reported results on two environments (HalfCheetah-v1 and Hopper-v1) can vary across related work that uses these algorithms for baseline comparison. We further show the results we obtain using the same hyperparameter configuration but two different codebase implementations (note that these implementations are often used as the baseline codebases on which new algorithms are developed). We highlight that, depending on the codebase used, experimental results can vary significantly.
Table 5: Comparison with related reported results for TRPO on the Hopper environment.

| Metric | rllab | QProp | IPG | TRPO | Our Results (rllab) | Our Results (Baselines) |
|---|---|---|---|---|---|---|
| Number of Iterations | 500 | 500 | 500 | 500 | 500 | 500 |
| Average Return | 1183.3 | - | - | - | 2021.34 | 2965.3 |
| Max Average Return | - | 2486 | - | 3668.8 | 3229.1 | 3034.4 |
Table 6: Comparison with related reported results for TRPO on the HalfCheetah environment.

| Metric | rllab | QProp | IPG | TRPO | Our Results (rllab) | Our Results (Baselines) |
|---|---|---|---|---|---|---|
| Number of Iterations | 500 | 500 | 500 | 500 | 500 | 500 |
| Average Return | 1914.0 | - | - | - | 3576.08 | 1045.6 |
| Max Average Return | - | 4734 | 2889 | 4855 | 5197 | 1045.6 |
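For clarity, the sketch below shows one way to compute the "Average Return" and "Max Average Return" metrics used in Tables 5 and 6 from a sequence of per-iteration evaluation returns. It is an illustrative reading of these metrics rather than the exact evaluation code of any of the compared works; the synthetic training curve is a placeholder.

```python
import numpy as np

def average_return(eval_returns, last_k=None):
    """Average evaluation return, optionally over only the last k iterations."""
    eval_returns = np.asarray(eval_returns, dtype=float)
    if last_k is not None:
        eval_returns = eval_returns[-last_k:]
    return float(eval_returns.mean())

def max_average_return(eval_returns):
    """Maximum of the per-iteration average evaluation returns seen during training."""
    return float(np.max(eval_returns))

if __name__ == "__main__":
    # Placeholder: one average evaluation return per training iteration (e.g., 500 iterations).
    rng = np.random.default_rng(1)
    curve = np.cumsum(rng.normal(5.0, 20.0, size=500)) + 1000.0
    print("Average return (all iterations):", average_return(curve))
    print("Average return (last 100 iterations):", average_return(curve, last_k=100))
    print("Max average return:", max_average_return(curve))
```

Whether the average is taken over all iterations or only the final ones is itself a reporting choice that can change the ranking of algorithms.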
Table 7: Number of trials reported during evaluation in various works.

| Work | Number of Trials |
|---|---|
| (Mnih et al. 2016) | top-5 |
| (Schulman et al. 2017) | 3-9 |
| (Duan et al. 2016) | 5 (5) |
| (Gu et al. 2017) | 3 |
| (Lillicrap et al. 2015b) | 5 |
| (Schulman et al. 2015a) | 5 |
| (Wu et al. 2017) | top-2, top-3 |
Reported Evaluation Metrics in Related Work. In Table 8 we show the evaluation metrics and reported results in further detail across related work.
Table 8: Reported Evaluation Metrics of baseline algorithms in related literature
| Related Work (Algorithm) | Environment | Timesteps or Episodes or Iterations | Average Return | Max Return | Std Error |
|---|---|---|---|---|---|
| PPO | HalfCheetah | 1M | ~1800 | - | - |
| PPO | Hopper | 1M | ~2200 | - | - |
| ACKTR | HalfCheetah | 1M | ~2400 | - | - |
| ACKTR | Hopper | 1M | ~3500 | - | - |
| Q-Prop (DDPG) | HalfCheetah | 6k (eps) | ~6000 | 7490 | - |
| Q-Prop (DDPG) | Hopper | 6k (eps) | - | 2604 | - |
| Q-Prop (TRPO) | HalfCheetah | 5k (timesteps) | ~4000 | 4734 | - |
| Q-Prop (TRPO) | Hopper | 5k (timesteps) | - | 2486 | - |
| IPG (TRPO) | HalfCheetah | 10k (eps) | ~3000 | 2889 | - |
| IPG (TRPO) | Hopper | 10k (eps) | - | - | - |
| Param Noise (DDPG) | HalfCheetah | 1M | ~1800 | - | - |
| Param Noise (DDPG) | Hopper | 1M | ~500 | - | - |
| Param Noise (TRPO) | HalfCheetah | 1M | ~3900 | - | - |
| Param Noise (TRPO) | Hopper | 1M | ~2400 | - | - |
| Benchmarking (DDPG) | HalfCheetah | 500 iters (25k eps) | 2148 | - | 702 |
| Benchmarking (DDPG) | Hopper | 500 iters (25k eps) | 267 | - | 43 |
| Benchmarking (TRPO) | HalfCheetah | 500 iters (925k eps) | 1914 | - | 150 |
| Benchmarking (TRPO) | Hopper | 500 iters (925k eps) | 1183 | - | 120 |
Experimental Setup. In this section, we show a detailed analysis of our experimental results, using the same hyperparameter configurations used in related work. Experimental results are included for the OpenAI Gym (Brockman et al. 2016) Hopper-v1 and HalfCheetah-v1 environments, using the policy gradient algorithms DDPG, TRPO, PPO, and ACKTR. Our experiments use the available codebases from OpenAI rllab (Duan et al. 2016) and OpenAI Baselines. Each of our experiments is performed over 5 experimental trials with different random seeds, and results are averaged over all trials. Unless explicitly specified otherwise (such as in hyperparameter modifications where we alter a hyperparameter under investigation), hyperparameters were as follows; a consolidated configuration sketch follows the list. All results (including graphs) show the mean and standard error across random seeds.
• DDPG
  - Policy Network: (64, relu, 64, relu, tanh); Q Network: (64, relu, 64, relu, linear)
  - Normalized observations with running mean filter
  - Actor LR: 1e-4; Critic LR: 1e-3
  - Reward scale: 1.0
  - Noise type: O-U 0.2
  - Soft target update τ = 0.01
  - γ = 0.995
  - Batch size = 128
  - Critic L2 regularization 1e-2
• PPO
  - Policy Network: (64, tanh, 64, tanh, Linear) + standard deviation variable; Value Network: (64, tanh, 64, tanh, linear)
  - Normalized observations with running mean filter
  - Timesteps per batch: 2048
  - Clip param = 0.2
  - Entropy coeff = 0.0
  - Optimizer epochs per iteration = 10
  - Optimizer step size = 3e-4
  - Optimizer batch size = 64
  - Discount γ = 0.995, GAE λ = 0.97
  - Learning rate schedule: constant
• TRPO
  - Policy Network: (64, tanh, 64, tanh, Linear) + standard deviation variable; Value Network: (64, tanh, 64, tanh, linear)
  - Normalized observations with running mean filter
  - Timesteps per batch: 5000
  - Max KL = 0.01
  - Conjugate gradient iterations = 20
  - CG damping = 0.1
  - VF iterations = 5
  - VF batch size = 64
  - VF step size = 1e-3
  - Entropy coeff = 0.0
  - Discount γ = 0.995, GAE λ = 0.97
• ACKTR
  - Policy Network: (64, tanh, 64, tanh, Linear) + standard deviation variable; Value Network: (64, elu, 64, elu, linear)
  - Normalized observations with running mean filter
  - Timesteps per batch: 2500
  - Desired KL = 0.002
  - Discount γ = 0.995, GAE λ = 0.97
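The settings listed above can be collected in a single plain-Python configuration, as sketched below. The dictionary keys and grouping are our own and do not correspond to the argument names of any particular codebase; the sketch simply transcribes the values from the list.

```python
# Shared settings across the four algorithms (as listed above).
COMMON = {"gamma": 0.995, "obs_normalization": "running_mean_filter"}

HYPERPARAMS = {
    "DDPG": {
        **COMMON,
        "policy_network": (64, 64), "policy_activation": "relu", "policy_output": "tanh",
        "q_network": (64, 64), "q_activation": "relu",
        "actor_lr": 1e-4, "critic_lr": 1e-3, "critic_l2_reg": 1e-2,
        "reward_scale": 1.0, "noise": ("ornstein-uhlenbeck", 0.2),
        "soft_target_tau": 0.01, "batch_size": 128,
    },
    "PPO": {
        **COMMON,
        "policy_network": (64, 64), "policy_activation": "tanh",
        "value_network": (64, 64), "value_activation": "tanh",
        "timesteps_per_batch": 2048, "clip_param": 0.2, "entropy_coeff": 0.0,
        "optim_epochs": 10, "optim_stepsize": 3e-4, "optim_batch_size": 64,
        "gae_lambda": 0.97,
    },
    "TRPO": {
        **COMMON,
        "policy_network": (64, 64), "policy_activation": "tanh",
        "value_network": (64, 64), "value_activation": "tanh",
        "timesteps_per_batch": 5000, "max_kl": 0.01, "cg_iters": 20, "cg_damping": 0.1,
        "vf_iters": 5, "vf_batch_size": 64, "vf_stepsize": 1e-3,
        "entropy_coeff": 0.0, "gae_lambda": 0.97,
    },
    "ACKTR": {
        **COMMON,
        "policy_network": (64, 64), "policy_activation": "tanh",
        "value_network": (64, 64), "value_activation": "elu",
        "timesteps_per_batch": 2500, "desired_kl": 0.002, "gae_lambda": 0.97,
    },
}

if __name__ == "__main__":
    for algo, cfg in HYPERPARAMS.items():
        print(algo, cfg["policy_network"],
              cfg.get("timesteps_per_batch", cfg.get("batch_size")))
```

Keeping such a configuration under version control alongside the experiment code is one way to make the exact settings reproducible.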
Modifications to Baseline Implementations. To ensure fairness of comparison, we make several modifications to the existing implementations. First, we change evaluation in DDPG (Plappert et al. 2017) such that, during evaluation at the end of an epoch, 10 full trajectories are evaluated. In the current implementation, only a partial trajectory is evaluated immediately after training, so that a full trajectory would be evaluated across several different policies; this corresponds more closely to an online view of evaluation, whereas we take a policy optimization view when evaluating algorithms.
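A minimal sketch of this evaluation protocol is shown below: at the end of each epoch, a fixed copy of the current policy is rolled out for 10 complete trajectories and only those returns are reported. The env and policy objects are placeholders assuming a classic Gym-style interface, not names from the Baselines codebase.

```python
def evaluate_policy(env, policy, n_episodes=10, max_steps=1000):
    """Roll out n_episodes complete trajectories with a fixed policy.

    `env` is assumed to follow the classic Gym interface (reset() -> obs,
    step(action) -> (obs, reward, done, info)); `policy` is any callable that
    maps an observation to an action with exploration noise disabled. Both
    are placeholders for illustration only.
    """
    episode_returns = []
    for _ in range(n_episodes):
        obs = env.reset()
        total, done, steps = 0.0, False, 0
        while not done and steps < max_steps:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
            steps += 1
        episode_returns.append(total)
    return episode_returns
```

Evaluating whole trajectories under a single frozen policy avoids mixing returns from several different policies into one reported number.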
Hyperparameters: Network Structures and Activation Functions. Below, we examine the significance of the network configurations used for the non-linear function approximators in policy gradient methods. Several related works have used different network configurations (network sizes and activation functions). We use the network configurations reported in other works and demonstrate the careful fine-tuning that is required. We report results using the activation functions ReLU, TanH, and Leaky ReLU; most papers use ReLU and TanH without reporting the effect of these activation functions in detail. We analyse the significance of using different activations in the policy and action-value networks. Previously, we included a detailed table showing the average reward with standard error obtained for each of the hyperparameter configurations. In the results below, we show in detail how each of these policy gradient algorithms is affected by the choice of network configuration.
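To make the architecture permutations concrete, the sketch below builds a small multilayer perceptron with configurable hidden sizes and activation function (assuming PyTorch; the helper and its example dimensions are ours, not the layer definitions used in rllab or Baselines).

```python
import torch.nn as nn

# Activations compared in our experiments (ELU is used for ACKTR's value network).
ACTIVATIONS = {"relu": nn.ReLU, "tanh": nn.Tanh, "leaky_relu": nn.LeakyReLU, "elu": nn.ELU}

def build_mlp(input_dim, output_dim, hidden_sizes=(64, 64), activation="tanh",
              output_activation=None):
    """Construct an MLP with the given hidden sizes and activation function."""
    layers, last = [], input_dim
    for size in hidden_sizes:
        layers += [nn.Linear(last, size), ACTIVATIONS[activation]()]
        last = size
    layers.append(nn.Linear(last, output_dim))
    if output_activation is not None:
        layers.append(ACTIVATIONS[output_activation]())
    return nn.Sequential(*layers)

# Two of the permutations studied here (input/output sizes are illustrative only):
policy_64x64_tanh = build_mlp(17, 6, hidden_sizes=(64, 64), activation="tanh")
value_100_50_25_relu = build_mlp(17, 1, hidden_sizes=(100, 50, 25), activation="relu")
```

Swapping only the `hidden_sizes` and `activation` arguments reproduces the kind of architecture sweep reported in the tables and figures that follow.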
Proximal Policy Optimization (PPO)
[Plots: average return vs. timesteps on HalfCheetah-v1 and Hopper-v1 for PPO policy and value network activations (tanh, ReLU, Leaky ReLU).]
Figure 7: PPO Policy and Value Network activation
Experimental results in Figures 7, 8, and 9 in this section show the effect of the network structures and activation functions on the Proximal Policy Optimization (PPO) algorithm.
[Plot: Hopper-v1 (PPO, Policy Network Structure); average return vs. timesteps for the different hidden-layer configurations.]
Figure 8: PPO Policy Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 (PPO, Value Network Structure); average return vs. timesteps for (64,64), (100,50,25), and (400,300).]
Figure 9: PPO Value Network structure
# Actor Critic using Kronecker-Factored Trust Region (ACKTR)
[Plots: HalfCheetah-v1 and Hopper-v1 (ACKTR, Policy Network Structure); average return vs. timesteps for (64,64), (100,50,25), and (400,300).]
Figure 10: ACKTR Policy Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 (ACKTR, Value Network Structure); average return vs. timesteps for (64,64), (100,50,25), and (400,300).]
Figure 11: ACKTR Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 (ACKTR, Policy Network Activation); average return vs. timesteps.]
Figure 12: ACKTR Policy Network Activation
[Plots: HalfCheetah-v1 and Hopper-v1 (ACKTR, Value Network Activation); average return vs. timesteps.]
Figure 13: ACKTR Value Network Activation
We then similarly show the significance of these hyperparameters for the ACKTR algorithm. Our results show that the value network structure can have a significant effect on the performance of the ACKTR algorithm.
# Trust Region Policy Optimization (TRPO)
[Plots: HalfCheetah-v1 and Hopper-v1 (TRPO, Policy Network Structure); average return vs. timesteps for (64,64), (100,50,25), and (400,300).]
Figure 14: TRPO Policy Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 (TRPO, Value Network Structure); average return vs. timesteps for (64,64), (100,50,25), and (400,300).]
Figure 15: TRPO Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 (TRPO, Policy Network Activation); average return vs. timesteps for tanh, ReLU, and Leaky ReLU.]
Figure 16: TRPO Policy Network activation
[Plots: HalfCheetah-v1 and Hopper-v1 (TRPO, Value Network Activation); average return vs. timesteps for tanh, ReLU, and Leaky ReLU.]
Figure 17: TRPO Value Network activation
In Figures 14, 15, 16, and 17 we show the effects of network structure and activation on the OpenAI baselines implementation of TRPO. In this case, only the policy architecture seems to have a large effect on the algorithm's ability to learn.
Deep Deterministic Policy Gradient (DDPG)
[Plots: DDPG on HalfCheetah and Hopper; average return vs. timesteps for actor and critic network sizes 64x64, 100x50x25, and 400x300.]
Figure 18: Policy or Actor Network Architecture experiments for DDPG on HalfCheetah and Hopper Environment
We further analyze the actor and critic network configurations for DDPG. Following the default configuration, we first use the ReLU activation for the policy (actor) network and examine the effect of different activations and network sizes for the critic network. Similarly, keeping the critic network at its default configuration, we examine the effect of the actor network's activation functions and network sizes.
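The sketch below spells out the ablation grid this corresponds to: one network (actor or critic) is varied over the sizes and activations swept in this section while the other is held at a default. The default configuration shown is an illustrative placeholder rather than the exact default of the DDPG implementation used here.

```python
from itertools import product

DEFAULT = {"hidden_sizes": (64, 64), "activation": "relu"}  # illustrative default
SIZES = [(64, 64), (100, 50, 25), (400, 300)]
ACTIVATION_CHOICES = ["relu", "tanh", "leaky_relu"]

def ddpg_ablation_configs():
    """Yield actor/critic configuration pairs, varying one network at a time."""
    for sizes, act in product(SIZES, ACTIVATION_CHOICES):
        varied = {"hidden_sizes": sizes, "activation": act}
        yield {"actor": varied, "critic": DEFAULT}   # sweep actor, default critic
        yield {"actor": DEFAULT, "critic": varied}   # sweep critic, default actor

for cfg in ddpg_ablation_configs():
    print(cfg)
```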
[Plots: DDPG on HalfCheetah and Hopper; actor network activations (ReLU, tanh, Leaky ReLU); average return vs. timesteps.]
[Plots: DDPG on HalfCheetah and Hopper; critic network activations (ReLU, tanh, Leaky ReLU); average return vs. timesteps.]
Figure 19: Signiï¬cance of Value Function or Critic Network Activations for DDPG on HalfCheetah and Hopper Environment
Reward Scaling Parameter in DDPG
[Plots: Hopper-v1 (DDPG, Reward Scale), with and without layer norm; average return vs. timesteps.]
Figure 20: DDPG reward rescaling on Hopper-v1, with and without layer norm.
[Plots: HalfCheetah-v1 (DDPG, Reward Scale), with and without layer norm; average return vs. timesteps.]
Figure 21: DDPG reward rescaling on HalfCheetah-v1, with and without layer norm.
Several related works (Gu et al. 2016; 2017; Duan et al. 2016) have reported that the reward scaling parameter in DDPG often needs to be fine-tuned to stabilize performance, and that its impact can vary significantly with the choice of environment. We examine several reward scaling values and demonstrate the effect this parameter can have on the stability and performance of DDPG on the HalfCheetah and Hopper environments. Our results, shown in Figures 20 and 21, confirm that the reward scaling parameter can have a significant impact on performance: a very small or negligible reward scale significantly harms the performance of DDPG across all environments, while a scaling parameter of 1 or 10 often performs well. Based on our analysis, we suggest that whenever DDPG is reported as a baseline algorithm for comparison, the reward scaling parameter should be fine-tuned for the specific setting.
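For clarity, the reward-scale hyperparameter enters the algorithm roughly as sketched below: the raw environment reward is multiplied by a constant before the transition is stored and used to form critic targets. This is a minimal illustrative sketch assuming a Gym-style `env` and a generic `replay_buffer` with an `add` method; implementations differ in exactly where and how the factor is applied.

```python
def step_and_store(env, replay_buffer, obs, action, reward_scale=1.0):
    """Take one environment step and store the transition with a scaled reward.

    `reward_scale` is the hyperparameter swept in Figures 20 and 21 (values
    such as 1 or 10 tend to work well, very small values do not); the critic
    is then trained on the scaled reward. `env` follows the classic Gym API
    and `replay_buffer.add` is assumed to accept
    (obs, action, reward, next_obs, done).
    """
    next_obs, reward, done, _ = env.step(action)
    replay_buffer.add(obs, action, reward_scale * reward, next_obs, done)
    return next_obs, done
```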
# Batch Size in TRPO
Figure 22: TRPO (Schulman et al. 2015a) original code batch size experiments.
[Plots: Hopper-v1, HalfCheetah-v1, Walker2d-v1, and Reacher-v1 (TRPO, baselines, Batch Size); average return vs. timesteps.]
Figure 23: TRPO (Schulman et al. 2017) baselines code batch size experiments.
We run batch size experiments using the original TRPO code (Schulman et al. 2015a) and the OpenAI baselines code (Schulman et al. 2017). The results, shown in Figure 22 and Figure 23, indicate that for both the HalfCheetah-v1 and Hopper-v1 environments a batch size of 1024 performs best for TRPO, while performance degrades as the batch size is increased.
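The quantity being varied here is simply the number of environment timesteps collected per TRPO policy update. A minimal sweep driver is sketched below; `run_trpo` is a hypothetical stand-in for launching either implementation, and the batch-size values other than 1024 are illustrative rather than the exact set compared in the figures.

```python
def run_trpo(env_id, timesteps_per_batch, total_timesteps=2_000_000, seed=0):
    # Stand-in for launching a real TRPO run; here it only records the setting.
    print(f"TRPO on {env_id}: batch={timesteps_per_batch}, "
          f"total={total_timesteps}, seed={seed}")

BATCH_SIZES = [1024, 2048, 4096, 8192]  # illustrative sweep; 1024 performed best here

for env_id in ("HalfCheetah-v1", "Hopper-v1"):
    for batch in BATCH_SIZES:
        run_trpo(env_id, timesteps_per_batch=batch)
```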
# Random Seeds
To determine how much random seeds can affect results, we run 10 trials in total on two environments, using the previously described default settings with the (Gu et al. 2016) implementation of DDPG and the (Duan et al. 2016) version of TRPO. We divide our trials randomly into 2 partitions and plot them in Figures 24 and 25. As can be seen, statistically different distributions can be obtained from the random seeds alone, with exactly the same hyperparameters. As we will discuss later, bootstrapping from the sample can give an idea of how drastic this effect is, though too small a bootstrap sample will still not give concrete enough results.
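A minimal sketch of the bootstrap idea mentioned above, assuming one final-return value per random seed: resample the per-seed returns with replacement to estimate a confidence interval on the mean. The example numbers are made up; with only a handful of seeds the interval itself remains noisy.

```python
import numpy as np

def bootstrap_mean_ci(per_seed_returns, n_boot=10_000, ci=95, seed=0):
    """Bootstrap confidence interval for the mean return across random seeds."""
    rng = np.random.default_rng(seed)
    x = np.asarray(per_seed_returns, dtype=float)
    boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                           for _ in range(n_boot)])
    lo, hi = np.percentile(boot_means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return x.mean(), (lo, hi)

# Made-up final returns for two splits of 5 seeds each:
split_a = [3500.0, 2900.0, 3100.0, 2600.0, 3300.0]
split_b = [1800.0, 2400.0, 2000.0, 2600.0, 2200.0]
print(bootstrap_mean_ci(split_a))
print(bootstrap_mean_ci(split_b))
```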
[Plots: HalfCheetah-v1 and Hopper-v1 (TRPO, Different Random Seeds); average return vs. timesteps for two random averages of 5 runs each.]
Figure 24: Two different TRPO experiment runs, with the same hyperparameter configurations, averaged over two splits of 5 different random seeds.
[Plots: HalfCheetah-v1 and Hopper-v1 (DDPG, Different Random Seeds); average return vs. timesteps for two random averages of 5 runs each.]
Figure 25: Two different DDPG experiment runs, with the same hyperparameter configurations, averaged over two splits of 5 different random seeds.
# Choice of Benchmark Continuous Control Environment
We previously demonstrated that the performance of policy gradient algorithms can be highly biased based on the choice of the environment. In this section, we include further results examining the impact the choice of environment can have. We show that no single algorithm performs consistently better across all environments. This is often unlike the results we see with DQN networks in Atari domains, where results can often be demonstrated across a wide range of Atari games. Our results show, for example, that while TRPO can perform significantly better than other algorithms on the Swimmer environment, it may perform quite poorly in the HalfCheetah environment, and only marginally better than PPO on the Hopper environment. We demonstrate our results using the OpenAI Gym MuJoCo environments, including Hopper, HalfCheetah, Swimmer, and Walker. It is notable how much the performance of these algorithms varies even in this small set of environment domains. The choice of environments used to report algorithm performance can therefore be biased by the algorithm designer's experience with them.
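A minimal sketch of this kind of cross-environment evaluation is shown below, assuming the gym API of the -v1 MuJoCo tasks (reset returning an observation and step returning a 4-tuple) and using a random-action stand-in where a trained policy would go:

```python
import gym
import numpy as np

def random_policy(obs, env):
    # Stand-in for a trained TRPO/PPO/DDPG/ACKTR policy.
    return env.action_space.sample()

def average_return(env_name, episodes=10, seed=0):
    env = gym.make(env_name)
    env.seed(seed)
    totals = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(random_policy(obs, env))
            total += reward
        totals.append(total)
    env.close()
    return float(np.mean(totals))

for name in ["Hopper-v1", "HalfCheetah-v1", "Swimmer-v1", "Walker2d-v1"]:
    print(name, average_return(name))
```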
[Plots for Figure 26: average return vs. timesteps for the Hopper, HalfCheetah, Walker, and Swimmer environments.]
Figure 26: Comparing Policy Gradients across various environments
# Codebases
We include a detailed analysis of how performance compares across different network structures and activations, depending on the choice of algorithm implementation codebase.
[Plots for Figure 27: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO (original code) with policy and value network structures (64,64), (100,50,25), and (400,300).]

Figure 27: TRPO Policy and Value Network structure

[Plots for Figure 28: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO (original code) with different policy and value network activations.]
Figure 28: TRPO Policy and Value Network activations.
[Plots for Figure 29: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO (rllab) with policy network structures (64,64), (100,50,25), (400,300) and tanh, ReLU, and leaky ReLU activations.]
Figure 29: TRPO rllab Policy Network structure and activation

[Plots for Figure 30: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG (rllab++) with policy and value network structures (64,64), (100,50,25), and (400,300).]
Figure 30: DDPG rllab++ Policy and Value Network structure

[Plots for Figure 31: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG (rllab++) with tanh, ReLU, and leaky ReLU policy and value network activations.]

Figure 31: DDPG rllab++ Policy and Value Network activations.
[Plots for Figure 32: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG (rllab) with policy and value network structures (64,64), (100,50,25), and (400,300).]
Figure 32: DDPG rllab Policy and Value Network structure

[Plots for Figure 33: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG (rllab) with tanh and ReLU policy and value network activations.]
Figure 33: DDPG rllab Policy and Value Network activations.
Often in related literature, different baseline codebases are used to implement the same algorithm. One such example is the TRPO algorithm: it is a commonly used policy gradient method for continuous control tasks, and there exist several implementations, from OpenAI Baselines (Plappert et al. 2017), OpenAI rllab (Duan et al. 2016), and the original TRPO codebase (Schulman et al. 2015a). In this section, we analyze the impact the choice of algorithm codebase can have on performance. Figures 27 and 28 summarize our results with TRPO policy and value networks, using the original TRPO codebase from (Schulman et al. 2015a). Figure 29 shows the results using the rllab implementation of TRPO with the same hyperparameters as our default experiments aforementioned. Note that we use a linear function approximator rather than a neural network for the value function, because the TensorFlow implementation of OpenAI rllab doesn't provide anything else. We note that this is commonly used in other works (Duan et al. 2016; Stadie, Abbeel, and Sutskever 2017), but may cause differences in performance. Furthermore, we leave out our value function network experiments due to this.
[Plots for Figure 34: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG implementations from Duan 2016, Gu 2016, and Plappert 2017.]

Figure 34: DDPG codebase comparison using our default set of hyperparameters (as used in other experiments).
[Plots for Figure 35: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO implementations from Schulman 2015, Schulman 2017, and Duan 2016.]

Figure 35: TRPO codebase comparison using our default set of hyperparameters (as used in other experiments).
Figure 35 shows a comparison of the TRPO implementations using the default hyperparameters as specified earlier in the supplemental material. The one exception is that we use a larger batch size of 20k samples per batch for rllab and the original TRPO code, as optimized in a second set of experiments. Figures 30 and 31 show the same network experiments for DDPG with the rllab++ code (Gu et al. 2016). We can then compare the performance of the algorithm across the 3 codebases, keeping all hyperparameters constant at the defaults; this comparison can be seen in Figure 34.
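A rough sketch of such a fixed-hyperparameter, multi-seed comparison is given below, assuming each codebase's per-seed return curves were saved to disk (the filenames and array layout are assumptions for illustration):

```python
import numpy as np

# Hypothetical files: one array per codebase with shape (5 seeds, n_eval_points)
# of average returns, all runs using identical default hyperparameters.
codebases = {
    "Schulman 2015": "trpo_original_halfcheetah.npy",
    "Duan 2016": "trpo_rllab_halfcheetah.npy",
    "Schulman 2017": "trpo_baselines_halfcheetah.npy",
}

for name, path in codebases.items():
    runs = np.load(path)             # (5, T)
    mean_curve = runs.mean(axis=0)   # averaged learning curve over 5 seeds
    final = runs[:, -1]              # per-seed final performance
    print("%-14s final mean return %.0f (+/- %.0f across seeds)"
          % (name, mean_curve[-1], final.std()))
```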
# Significance
Our full results from significance testing with different metrics can be found in Table 9 and Table 10. Our bootstrap mean and confidence intervals can be found in Table 13. Bootstrap power analysis can be found in Table 14. To perform significance testing, we use our 5 sample trials to generate a bootstrap with 10k bootstraps. From this, confidence intervals can be obtained. For the t-test and KS-test, the average returns from the 5 trials are sorted and compared using the standard 2-sample versions of these tests. SciPy (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ks_2samp.html, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html) and Facebook Bootstrapped (https://github.com/facebookincubator/bootstrapped) are used for the KS test, t-test, and bootstrap analysis. For power analysis, we attempt to determine if a sample is enough to gauge the significance of a 25% lift. This is commonly used in A/B testing (Tufféry 2011).
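As a concrete illustration of this pipeline, the sketch below runs the 2-sample t-test and Kolmogorov-Smirnov test with SciPy and computes 10k-resample bootstrap intervals; the per-trial returns are made-up placeholders, and the bootstrap is re-implemented with plain numpy rather than the Facebook bootstrapped package used for the reported numbers:

```python
import numpy as np
from scipy import stats

# Final average returns from 5 trials per algorithm; these numbers are
# illustrative placeholders, not values taken from the paper.
algo_a = np.array([5037.0, 3664.0, 1291.0, 5557.0, 2164.0])
algo_b = np.array([3127.0, 4004.0, 1649.0, 1298.0, 2541.0])

# Sorted 2-sample t-test and Kolmogorov-Smirnov test, as described above.
t, p_t = stats.ttest_ind(np.sort(algo_a), np.sort(algo_b))
ks, p_ks = stats.ks_2samp(algo_a, algo_b)
print("t = %.2f (p = %.3f), KS = %.2f (p = %.3f)" % (t, p_t, ks, p_ks))

# 10k-resample bootstrap of the mean and of the A/B % difference, 95% bounds.
rng = np.random.default_rng(0)
def resample(x):
    return x[rng.integers(0, len(x), len(x))]

boot_mean = np.array([resample(algo_a).mean() for _ in range(10_000)])
boot_diff = np.array([
    100.0 * (resample(algo_a).mean() - bm) / abs(bm)
    for bm in (resample(algo_b).mean() for _ in range(10_000))
])
lo, hi = np.percentile(boot_mean, [2.5, 97.5])
dlo, dhi = np.percentile(boot_diff, [2.5, 97.5])
print("mean(A) = %.0f, 95%% CI (%.0f, %.0f)" % (algo_a.mean(), lo, hi))
print("A vs. B %% difference, 95%% CI: (%.1f %%, %.1f %%)" % (dlo, dhi))
```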
1709.06560 | 109 | - DDPG ACKTR TRPO PPO DDPG - t = â1.85, p = 0.102 KS = 0.60, p = 0.209 -38.24 % (-75.42 %, -15.19 %) t = â4.59, p = 0.002 KS = 1.00, p = 0.004 -75.09 % (-86.44 %, -68.36 %) t = â2.67, p = 0.029 KS = 0.80, p = 0.036 -51.67 % (-80.69 %, -31.94 %) ACKTR t = 1.85, p = 0.102 KS = 0.60, p = 0.209 61.91 % (-32.27 %, 122.99 %) - t = â2.78, p = 0.024 KS = 0.80, p = 0.036 -59.67 % (-81.70 %, -46.84 %) t = â0.80, p = 0.448 KS = 0.60, p = 0.209 -21.75 % (-75.99 %, 11.68 %) TRPO t = 4.59, p = 0.002 KS = 1.00, p = 0.004 301.48 % (150.50 %, 431.67 %) t = 2.78, p = 0.024 KS = 0.80, p = | 1709.06560#109 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 110 | p = 0.004 301.48 % (150.50 %, 431.67 %) t = 2.78, p = 0.024 KS = 0.80, p = 0.036 147.96 % (30.84 %, 234.60 %) - t = 2.12, p = 0.067 KS = 0.80, p = 0.036 94.04 % (2.73 %, 169.06 %) PPO t = 2.67, p = 0.029 KS = 0.80, p = 0.036 106.91 % (-37.62 %, 185.26 %) t = 0.80, p = 0.448 KS = 0.60, p = 0.209 27.79 % (-67.77 %, 79.56 %) t = â2.12, p = 0.067 KS = 0.80, p = 0.036 -48.46 % (-81.23 %, -32.05 %) Table 9: HalfCheetah Signiï¬cance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% conï¬dence bounds. | 1709.06560#110 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 111 | - DDPG ACKTR TRPO PPO DDPG - t = 1.41, p = 0.196 KS = 0.60, p = 0.209 56.05 % (-87.98 %, 123.15 %) t = 2.58, p = 0.033 KS = 0.80, p = 0.036 81.68 % (-67.76 %, 151.64 %) t = 2.09, p = 0.070 KS = 0.80, p = 0.036 66.39 % (-67.80 %, 130.16 %) ACKTR t = â1.41, p = 0.196 KS = 0.60, p = 0.209 -35.92 % (-85.62 %, -5.38 %) - t = 1.05, p = 0.326 KS = 0.60, p = 0.209 16.43 % (-27.92 %, 41.17 %) t = 0.42, p = 0.686 KS = 0.40, p = 0.697 6.63 % (-33.54 %, 29.59 %) TRPO t = â2.58, p = 0.033 KS = 0.80, p = 0.036 -44.96 % (-78.82 %, -20.29 %) t = â1.05, p = 0.326 KS = 0.60, p = 0.209 -14.11 % | 1709.06560#111 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 112 | % (-78.82 %, -20.29 %) t = â1.05, p = 0.326 KS = 0.60, p = 0.209 -14.11 % (-37.17 %, 9.11 %) - t = â2.57, p = 0.033 KS = 0.60, p = 0.209 -8.42 % (-14.08 %, -2.97 %) PPO t = â2.09, p = 0.070 KS = 0.80, p = 0.036 -39.90 % (-77.12 %, -12.95 %) t = â0.42, p = 0.686 KS = 0.40, p = 0.697 -6.22 % (-31.58 %, 18.98 %) t = 2.57, p = 0.033 KS = 0.60, p = 0.209 9.19 % (2.37 %, 15.58 %) Table 10: Hopper Signiï¬cance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov- Smirnov test, bootstrap A/B comparison % difference with 95% conï¬dence bounds. | 1709.06560#112 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 113 | - DDPG ACKTR TRPO PPO DDPG - t = 1.03, p = 0.334 KS = 0.40, p = 0.697 44.47 % (-80.62 %, 111.72 %) t = 4.04, p = 0.004 KS = 1.00, p = 0.004 94.24 % (-22.59 %, 152.61 %) t = 3.07, p = 0.015 KS = 0.80, p = 0.036 85.01 % (-31.02 %, 144.35 %) ACKTR t = â1.03, p = 0.334 KS = 0.40, p = 0.697 -30.78 % (-91.35 %, 1.06 %) - t = 1.35, p = 0.214 KS = 0.60, p = 0.209 34.46 % (-60.47 %, 77.32 %) t = 1.02, p = 0.338 KS = 0.60, p = 0.209 28.07 % (-65.67 %, 71.71 %) TRPO t = â4.04, p = 0.004 KS = 1.00, p = 0.004 -48.52 % (-70.33 %, -28.62 %) t = â1.35, p = 0.214 KS = 0.60, p = 0.209 -25.63 % | 1709.06560#113 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 114 | % (-70.33 %, -28.62 %) t = â1.35, p = 0.214 KS = 0.60, p = 0.209 -25.63 % (-61.28 %, 5.54 %) - t = â0.57, p = 0.582 KS = 0.40, p = 0.697 -4.75 % (-19.06 %, 10.02 %) PPO t = â3.07, p = 0.015 KS = 0.80, p = 0.036 -45.95 % (-70.85 %, -24.65 %) t = â1.02, p = 0.338 KS = 0.60, p = 0.209 -21.91 % (-61.53 %, 11.02 %) Table 11: Walker2d Signiï¬cance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov- Smirnov test, bootstrap A/B comparison % difference with 95% conï¬dence bounds. | 1709.06560#114 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
1709.06560 | 115 | - DDPG ACKTR TRPO PPO DDPG - t = 2.18, p = 0.061 KS = 0.80, p = 0.036 57.34 % (-80.96 %, 101.11 %) t = 4.06, p = 0.004 KS = 1.00, p = 0.004 572.61 % (-73.29 %, 869.24 %) t = 8.33, p = 0.000 KS = 1.00, p = 0.004 237.97 % (-59.74 %, 326.85 %) ACKTR t = -2.18, p = 0.061 KS = 0.80, p = 0.036 -36.44 % (-61.04 %, -6.94 %) - t = 3.69, p = 0.006 KS = 1.00, p = 0.004 327.48 % (165.47 %, 488.66 %) t = 8.85, p = 0.000 KS = 1.00, p = 0.004 114.80 % (81.85 %, 147.33 %) TRPO t = -4.06, p = 0.004 KS = 1.00, p = 0.004 -85.13 % (-97.17 %, -77.95 %) t = -3.69, p = 0.006 KS = 1.00, p = 0.004 | 1709.06560#115 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirthy-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | [
{
"id": "1611.02247"
},
{
"id": "1506.02438"
},
{
"id": "1707.06347"
},
{
"id": "1703.02660"
},
{
"id": "1705.10443"
},
{
"id": "1703.01703"
},
{
"id": "1509.02971"
},
{
"id": "1612.03780"
},
{
"id": "1606.01540"
},
{
"id": "1706.01905"
},
{
"id": "1706.00387"
},
{
"id": "1709.06009"
},
{
"id": "1505.00853"
},
{
"id": "1708.04782"
}
] |
1709.06560 | 116 | -85.13 % (-97.17 %, -77.95 %) t = -3.69, p = 0.006 KS = 1.00, p = 0.004 -76.61 % (-90.68 %, -70.06 %) - t = -2.39, p = 0.044 KS = 0.60, p = 0.209 -49.75 % (-78.58 %, -36.43 %) PPO t = -8.33, p = 0.000 KS = 1.00, p = 0.004 -70.41 % (-80.86 %, -56.52 %) t = -8.85, p = 0.000 KS = 1.00, p = 0.004 -53.45 % (-62.22 %, -47.30 %) t = 2.39, p = 0.044 KS = 0.60, p = 0.209 99.01 % (28.44 %, 171.85 %) Table 12: Swimmer Significance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% confidence bounds. | 1709.06560#116 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirthy-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | [
{
"id": "1611.02247"
},
{
"id": "1506.02438"
},
{
"id": "1707.06347"
},
{
"id": "1703.02660"
},
{
"id": "1705.10443"
},
{
"id": "1703.01703"
},
{
"id": "1509.02971"
},
{
"id": "1612.03780"
},
{
"id": "1606.01540"
},
{
"id": "1706.01905"
},
{
"id": "1706.00387"
},
{
"id": "1709.06009"
},
{
"id": "1505.00853"
},
{
"id": "1708.04782"
}
] |
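The per-pair entries in Tables 11 and 12 combine a 2-sample t-test, a Kolmogorov-Smirnov test, and a bootstrapped percent difference with 95% confidence bounds. The sketch below shows one plausible way to compute these three metrics for two sets of final returns; the returns, the choice of Welch's unequal-variance t-test, and the resampling count are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative only: final returns per random seed are made up, and the exact
# t-test variant / bootstrap settings used in the paper may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns_a = np.array([3072.0, 2957.0, 3183.0, 3010.0, 3120.0])  # algorithm A, 5 seeds
returns_b = np.array([2926.0, 2514.0, 3361.0, 2800.0, 3033.0])  # algorithm B, 5 seeds

t_stat, t_p = stats.ttest_ind(returns_a, returns_b, equal_var=False)  # Welch's t-test
ks_stat, ks_p = stats.ks_2samp(returns_a, returns_b)

# Bootstrap the relative difference of means, (A - B) / |B|, in percent.
diffs = []
for _ in range(10_000):
    a = rng.choice(returns_a, size=returns_a.size, replace=True)
    b = rng.choice(returns_b, size=returns_b.size, replace=True)
    diffs.append(100.0 * (a.mean() - b.mean()) / abs(b.mean()))
lo, hi = np.percentile(diffs, [2.5, 97.5])

print(f"t = {t_stat:.2f}, p = {t_p:.3f}; KS = {ks_stat:.2f}, p = {ks_p:.3f}")
print(f"bootstrap % difference: {np.mean(diffs):.2f} % ({lo:.2f} %, {hi:.2f} %)")
```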
1709.06560 | 117 | HalfCheetah-v1: DDPG 5037.26 (3664.11, 6574.01); ACKTR 3888.85 (2288.13, 5131.96); TRPO 1254.55 (999.52, 1464.86); PPO 3043.1 (1920.4, 4165.86)
Hopper-v1: DDPG 1632.13 (607.98, 2370.21); ACKTR 2546.89 (1875.79, 3217.98); TRPO 2965.33 (2854.66, 3076.00); PPO 2715.72 (2589.06, 2847.93)
Walker2d-v1: DDPG 1582.04 (901.66, 2174.66); ACKTR 2285.49 (1246.00, 3235.96); TRPO 3072.97 (2957.94, 3183.10); PPO 2926.92 (2514.83, 3361.43)
Swimmer-v1: DDPG 31.92 (21.68, 46.23); ACKTR 50.22 (42.47, 55.37); TRPO 214.69 (141.52, 287.92); PPO 107.88 (101.13, 118.56)
Table 13: Envs bootstrap mean and 95% confidence bounds | 1709.06560#117 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirthy-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | [
{
"id": "1611.02247"
},
{
"id": "1506.02438"
},
{
"id": "1707.06347"
},
{
"id": "1703.02660"
},
{
"id": "1705.10443"
},
{
"id": "1703.01703"
},
{
"id": "1509.02971"
},
{
"id": "1612.03780"
},
{
"id": "1606.01540"
},
{
"id": "1706.01905"
},
{
"id": "1706.00387"
},
{
"id": "1709.06009"
},
{
"id": "1505.00853"
},
{
"id": "1708.04782"
}
] |
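Table 13 reports a bootstrap estimate of the mean final return with 95% confidence bounds for each algorithm-environment pair. A minimal sketch of that computation, assuming a percentile bootstrap over made-up per-seed returns:

```python
# Percentile-bootstrap mean and 95% CI; the per-seed returns are placeholders.
import numpy as np

rng = np.random.default_rng(1)
final_returns = np.array([5037.0, 4100.0, 6200.0, 4800.0, 5300.0])  # one value per seed

boot_means = np.array([
    rng.choice(final_returns, size=final_returns.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"{boot_means.mean():.2f} ({lo:.2f}, {hi:.2f})")
```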
1709.06560 | 118 | Table 13: Envs bootstrap mean and 95% confidence bounds
HalfCheetah-v1: DDPG 100.00 % / 0.00 % / 0.00 %; ACKTR 79.03 % / 11.53 % / 9.43 %; TRPO 79.47 % / 20.53 % / 0.00 %; PPO 61.07 % / 10.50 % / 28.43 %
Hopper-v1: DDPG 60.90 % / 10.00 % / 29.10 %; ACKTR 79.60 % / 11.00 % / 9.40 %; TRPO 0.00 % / 100.00 % / 0.00 %; PPO 0.00 % / 100.00 % / 0.00 %
Walker2d-v1: DDPG 89.50 % / 0.00 % / 10.50 %; ACKTR 60.33 % / 9.73 % / 29.93 %; TRPO 0.00 % / 100.00 % / 0.00 %; PPO 59.80 % / 31.27 % / 8.93 %
Swimmer-v1: DDPG 89.97 % / 0.00 % / 10.03 %; ACKTR 59.90 % / 40.10 % / 0.00 %; TRPO 89.47 % / 0.00 % / 10.53 %; PPO 40.27 % / 59.73 % / 0.00 %
Table 14: Power Analysis for predicted significance of 25% lift. The three values per cell are: % insignificant simulations, % positive significant, % negative significant. | 1709.06560#118 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirthy-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | [
{
"id": "1611.02247"
},
{
"id": "1506.02438"
},
{
"id": "1707.06347"
},
{
"id": "1703.02660"
},
{
"id": "1705.10443"
},
{
"id": "1703.01703"
},
{
"id": "1509.02971"
},
{
"id": "1612.03780"
},
{
"id": "1606.01540"
},
{
"id": "1706.01905"
},
{
"id": "1706.00387"
},
{
"id": "1709.06009"
},
{
"id": "1505.00853"
},
{
"id": "1708.04782"
}
] |
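Table 14 summarizes a power analysis: how often a hypothetical 25% lift over a baseline would register as significant given the observed seed-to-seed variance. The resampling scheme and the use of Welch's t-test below are assumptions about how such a simulation could be run, not the authors' exact code.

```python
# Hedged sketch: simulate a 25% lift by scaling resampled baseline returns,
# test each simulated comparison, and tally the three outcome categories.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
baseline = np.array([1632.0, 900.0, 2370.0, 1500.0, 1800.0])  # illustrative per-seed returns

insig = pos = neg = 0
for _ in range(3000):
    a = rng.choice(baseline, size=baseline.size, replace=True)
    b = 1.25 * rng.choice(baseline, size=baseline.size, replace=True)  # simulated 25% lift
    t, p = stats.ttest_ind(b, a, equal_var=False)
    if p >= 0.05:
        insig += 1
    elif t > 0:
        pos += 1
    else:
        neg += 1
total = insig + pos + neg
print(f"{100 * insig / total:.2f} % insignificant, "
      f"{100 * pos / total:.2f} % positive significant, "
      f"{100 * neg / total:.2f} % negative significant")
```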
1709.04546 | 0 | arXiv:1709.04546v2 [cs.LG] 18 Sep 2018
# NORMALIZED DIRECTION-PRESERVING ADAM
# Zijun Zhang Department of Computer Science University of Calgary [email protected]
Lin Ma School of Computer Science Wuhan University [email protected]
# Zongpeng Li Department of Computer Science University of Calgary [email protected]
Chuan Wu Department of Computer Science The University of Hong Kong [email protected]
# ABSTRACT | 1709.04546#0 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 1 | Chuan Wu Department of Computer Science The University of Hong Kong [email protected]
# ABSTRACT
Adaptive optimization algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in some scenarios. However, recent studies show that they often lead to worse generalization performance than SGD, especially for training deep neural networks (DNNs). In this work, we identify the reasons that Adam generalizes worse than SGD, and develop a variant of Adam to eliminate the generalization gap. The proposed method, normalized direction-preserving Adam (ND-Adam), enables more precise control of the direction and step size for updating weight vectors, leading to significantly improved generalization performance. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also hope to shed light on why certain optimization algorithms generalize better than others.
# INTRODUCTION | 1709.04546#1 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 2 | # INTRODUCTION
In contrast with the growing complexity of neural network architectures (Szegedy et al., 2015; He et al., 2016; Hu et al., 2018), the training methods remain relatively simple. Most practical optimization methods for deep neural networks (DNNs) are based on the stochastic gradient descent (SGD) algorithm. However, the learning rate of SGD, as a hyperparameter, is often difficult to tune, since the magnitudes of different parameters vary widely, and adjustment is required throughout the training process. | 1709.04546#2 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 3 | To tackle this problem, several adaptive variants of SGD were developed, including Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015). These algorithms aim to adapt the learning rate to different parameters automatically, based on the statistics of the gradient. Although they usually simplify learning rate settings, and lead to faster convergence, it is observed that their generalization performance tends to be significantly worse than that of SGD in some scenarios (Wilson et al., 2017). This intriguing phenomenon may explain why SGD (possibly with momentum) is still prevalent in training state-of-the-art deep models, especially feedforward DNNs (Szegedy et al., 2015; He et al., 2016; Hu et al., 2018). Furthermore, recent work has shown that DNNs are capable of fitting noise data (Zhang et al., 2017), suggesting that their generalization capabilities are not the mere result of DNNs themselves, but are entwined with optimization (Arpit et al., 2017). | 1709.04546#3 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 4 | This work aims to bridge the gap between SGD and Adam in terms of the generalization performance. To this end, we identify two problems that may degrade the generalization performance of Adam, and show how these problems are (partially) avoided by using SGD with L2 weight decay. First, the updates of SGD lie in the span of historical gradients, whereas this is not the case for Adam. This difference has been discussed in rather recent literature (Wilson et al., 2017), where the authors show that adaptive methods can find drastically different but worse solutions than SGD.
Second, while the magnitudes of Adam parameter updates are invariant to rescaling of the gradient, the effect of the updates on the same overall network function still varies with the magnitudes of parameters. As a result, the effective learning rates of weight vectors tend to decrease during training, which leads to sharp local minima that do not generalize well (Hochreiter & Schmidhuber, 1997).
To address these two problems of Adam, we propose the normalized direction-preserving Adam (ND-Adam) algorithm, which controls the update direction and step size in a more precise way. We show that ND-Adam is able to achieve significantly better generalization performance than vanilla Adam, and matches that of SGD in image classification tasks. | 1709.04546#4 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 5 | We summarize our contributions as follows:
- We observe that the directions of Adam parameter updates are different from that of SGD, i.e., Adam does not preserve the directions of gradients as SGD does. We fix the problem by adapting the learning rate to each weight vector, instead of each individual weight, such that the direction of the gradient is preserved.
- For both Adam and SGD without L2 weight decay, we observe that the magnitude of each vector's direction change depends on its L2-norm. We show that using SGD with L2 weight decay implicitly normalizes the weight vectors, and thus removes the dependence in an approximate manner. We fix the problem for Adam by explicitly normalizing each weight vector, and by optimizing only its direction, such that the effective learning rate can be precisely controlled.
- We further demonstrate that, without proper regularization, the learning signal backpropagated from the softmax layer may vary with the overall magnitude of the logits in an undesirable way. Based on this observation, we apply batch normalization or L2-regularization to the logits, which further improves the generalization performance in classification tasks. | 1709.04546#5 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
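The second contribution listed above, normalizing each weight vector and optimizing only its direction, can be illustrated with a toy update that is not the authors' ND-Adam algorithm: project the gradient onto the tangent space of the unit sphere, take a plain gradient step, and renormalize.

```python
# Toy sketch (plain gradient step, not Adam): keep w on the unit sphere and
# change only its direction, so the effective learning rate is explicit.
import numpy as np

def direction_only_step(w, grad, lr=0.1):
    w = w / np.linalg.norm(w)                  # assume w is maintained unit-norm
    tangent_grad = grad - np.dot(grad, w) * w  # drop the radial component
    w_new = w - lr * tangent_grad
    return w_new / np.linalg.norm(w_new)       # project back onto the unit sphere

w = np.array([3.0, 4.0]) / 5.0
g = np.array([1.0, -2.0])
w = direction_only_step(w, g)
print(w, np.linalg.norm(w))  # direction changed, norm stays 1
```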
1709.04546 | 6 | In essence, our proposed methods, ND-Adam and regularized softmax, improve the generalization performance of Adam by enabling more precise control over the directions of parameter updates, the learning rates, and the learning signals.
The remainder of this paper is organized as follows. In Sec. 2, we identify two problems of Adam, and show how SGD with L2 weight decay partially avoids these problems. In Sec. 3, we further discuss and develop ND-Adam as a solution to the two problems. In Sec. 4, we propose regularized softmax to improve the learning signal backpropagated from the softmax layer. We provide empirical evidence for our analysis, and evaluate the performance of the proposed methods in Sec. 5.¹
# 2 BACKGROUND AND MOTIVATION
2.1 ADAPTIVE MOMENT ESTIMATION (ADAM)
Adaptive moment estimation (Adam) (Kingma & Ba, 2015) is a stochastic optimization method that applies individual adaptive learning rates to different parameters, based on the estimates of the first and second moments of the gradients. Specifically, for n trainable parameters, θ ∈ Rn, Adam maintains a running average of the first and second moments of the gradient w.r.t. each parameter as
mt = β1 mt−1 + (1 − β1) gt, (1a) | 1709.04546#6 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 7 | mt = β1 mt−1 + (1 − β1) gt, (1a)
and
vt = β2 vt−1 + (1 − β2) gt². (1b)
Here, t denotes the time step, mt, vt ∈ Rn denote respectively the first and second moments, and β1, β2 ∈ R are the corresponding decay factors. Kingma & Ba (2015) further notice that, since m0 and v0 are initialized to 0's, they are biased towards zero during the initial time steps, especially when the decay factors are large (i.e., close to 1). Thus, for computing the next update, they need to be corrected as
m̂t = mt / (1 − β1^t), v̂t = vt / (1 − β2^t), (2)
¹ Code is available at https://github.com/zj10/ND-Adam.
where β1^t, β2^t are the t-th powers of β1, β2 respectively. Then, we can update each parameter as
θt = θt−1 − αt · m̂t / (√v̂t + ε), (3)
where αt is the global learning rate, and ε is a small constant to avoid division by zero. Note the above computations between vectors are element-wise. | 1709.04546#7 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
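The update rule in Eqs. (1)-(3) can be written as a few lines of NumPy. The sketch below mirrors the running moments, bias correction, and element-wise update; the hyperparameter defaults are common choices and are assumptions here, not values taken from the text.

```python
# Minimal Adam step following Eqs. (1a), (1b), (2), and (3).
import numpy as np

def adam_step(theta, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g           # Eq. (1a): first moment
    v = beta2 * v + (1 - beta2) * g**2        # Eq. (1b): second moment
    m_hat = m / (1 - beta1**t)                # Eq. (2): bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)  # Eq. (3)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 4):
    g = np.array([0.1, -2.0, 30.0])           # gradient at step t (placeholder values)
    theta, m, v = adam_step(theta, g, m, v, t)
print(theta)
```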
1709.04546 | 8 | where αt is the global learning rate, and ε is a small constant to avoid division by zero. Note the above computations between vectors are element-wise.
A distinguishing merit of Adam is that the magnitudes of parameter updates are invariant to rescaling of the gradient, as shown by the adaptive learning rate term, αt / (√v̂t + ε). However, there are two potential problems when applying Adam to DNNs. | 1709.04546#8 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
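The rescaling-invariance property mentioned above can be checked numerically: multiplying the gradient by a constant leaves the Adam step essentially unchanged (up to the effect of ε). A small, self-contained check under assumed default hyperparameters:

```python
# Compare the first Adam step for a gradient g and for 1000 * g.
import numpy as np

def first_adam_step(g, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m_hat = ((1 - beta1) * g) / (1 - beta1)        # bias-corrected first moment at t = 1
    v_hat = ((1 - beta2) * g**2) / (1 - beta2)     # bias-corrected second moment at t = 1
    return alpha * m_hat / (np.sqrt(v_hat) + eps)

g = np.array([0.01, 1.0, 100.0])
print(first_adam_step(g))
print(first_adam_step(1000.0 * g))  # nearly identical despite the 1000x rescaling
```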
1709.04546 | 9 | First, in some scenarios, DNNs trained with Adam generalize worse than those trained with stochastic gradient descent (SGD) (Wilson et al., 2017). Zhang et al. (2017) demonstrate that over-parameterized DNNs are capable of memorizing the entire dataset, no matter if it is natural data or meaningless noise data, and thus suggest much of the generalization power of DNNs comes from the training algorithm, e.g., SGD and its variants. This coincides with another recent work (Wilson et al., 2017), which shows that simple SGD often yields better generalization performance than adaptive gradient methods, such as Adam. As pointed out by the latter, the difference in the generalization performance may result from the different directions of updates. Specifically, for each hidden unit, the SGD update of its input weight vector can only lie in the span of all possible input vectors, which, however, is not the case for Adam due to the individually adapted learning rates. We refer to this problem as the direction missing problem. | 1709.04546#9 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 10 | Second, while batch normalization (Ioffe & Szegedy, 2015) can significantly accelerate the convergence of DNNs, the input weights and the scaling factor of each hidden unit can be scaled in infinitely many (but consistent) ways, without changing the function implemented by the hidden unit. Thus, for different magnitudes of an input weight vector, the updates given by Adam can have different effects on the overall network function, which is undesirable. Furthermore, even when batch normalization is not used, a network using linear rectifiers (e.g., ReLU, leaky ReLU) as activation functions is still subject to ill-conditioning of the parameterization (Glorot et al., 2011), and hence the same problem. We refer to this problem as the ill-conditioning problem.
# 2.2 L2 WEIGHT DECAY | 1709.04546#10 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
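The ill-conditioning problem described in this chunk can be made concrete with a small numerical check: for a batch-normalized hidden unit, scaling its input weights by any positive constant leaves the unit's output unchanged, so the same function corresponds to infinitely many parameter magnitudes. The toy example below is an illustration under assumed shapes and values, not code from the paper.

```python
# A single batch-normalized ReLU unit: rescaling the input weights w by 10x
# changes the parameters but (up to the tiny eps term) not the unit's outputs.
import numpy as np

def bn_relu_unit(x, w, gamma=1.5, beta=0.2, eps=1e-5):
    z = x @ w                                           # pre-activations over a batch
    z_norm = (z - z.mean()) / np.sqrt(z.var() + eps)    # batch normalization
    return np.maximum(0.0, gamma * z_norm + beta)       # scale, shift, ReLU

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 4))
w = rng.normal(size=4)
print(np.max(np.abs(bn_relu_unit(x, w) - bn_relu_unit(x, 10.0 * w))))  # ~0
```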
1709.04546 | 11 | # 2.2 L2 WEIGHT DECAY
L2 weight decay is a regularization technique frequently used with SGD. It often has a significant effect on the generalization performance of DNNs. Despite its simplicity and crucial role in the training process, how L2 weight decay works in DNNs remains to be explained. A common justification is that L2 weight decay can be introduced by placing a Gaussian prior upon the weights, when the objective is to find the maximum a posteriori (MAP) weights (Blundell et al.). However, as discussed in Sec. 2.1, the magnitudes of input weight vectors are irrelevant in terms of the overall network function, in some common scenarios, rendering the variance of the Gaussian prior meaningless.
We propose to view L2 weight decay in neural networks as a form of weight normalization, which may better explain its effect on the generalization performance. Consider a neural network trained with the following loss function:
L̃(θ; D) = L(θ; D) + (λ/2) Σi∈N ‖wi‖₂², (4)
where L(θ; D) is the original loss function, N is the set of all hidden units, and wi denotes the input weights of hidden unit i, which is included in the trainable parameters, θ. For simplicity, we consider SGD updates without momentum. Therefore, the update of wi at each time step is | 1709.04546#11 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
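Eq. (4) adds (λ/2)·‖wi‖² per hidden unit to the task loss, so the plain SGD update that follows it shrinks each weight vector toward zero in addition to following the task gradient. A minimal sketch, with λ, the learning rate, and the shapes chosen only for illustration:

```python
# L2 weight decay as in Eq. (4): gradient of (lambda/2) * ||w||^2 is lambda * w.
import numpy as np

def regularized_loss(task_loss, weight_vectors, lam=1e-4):
    return task_loss + 0.5 * lam * sum(np.sum(w**2) for w in weight_vectors)

def sgd_step_with_weight_decay(w, grad_task, lr=0.1, lam=1e-4):
    return w - lr * (grad_task + lam * w)   # SGD without momentum, as assumed in the text

w = np.array([0.5, -1.0, 2.0])
g = np.array([0.1, 0.0, -0.3])
print(regularized_loss(1.23, [w]))
print(sgd_step_with_weight_decay(w, g))
```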