doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1707.06875 | 82 | [Flattened fragment of a correlation table: correlations ("*" = p < 0.05) between automatic metrics (TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM, RE) and textual features (cpw, len, wps, sps, spw, pol, ppw, msp, prs) against human ratings of informativeness, naturalness and quality for the TGEN, LOLS and RNNLG systems; the row/column alignment of the individual values is not recoverable from this chunk.] | 1707.06875#82 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
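The correlations reported in tables like the one above can be reproduced, in spirit, with a rank-correlation test between metric scores and human ratings. A minimal, hedged sketch on made-up scores (not the paper's data or code):

```python
# Sketch of the analysis behind metric/human correlation tables: Spearman's rho
# between an automatic metric and human ratings, with a '*' flag for p < 0.05.
# The toy arrays below are illustrative only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_quality = rng.integers(1, 7, size=200)                # e.g. 6-point ratings
bleu_scores = 0.1 * human_quality + rng.normal(0, 1, 200)   # weakly related metric

rho, p = spearmanr(bleu_scores, human_quality)
flag = "*" if p < 0.05 else ""
print(f"Spearman rho = {rho:.2f}{flag} (p = {p:.3g})")
```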
1707.06875 | 83 | [Continuation of the flattened correlation-table fragment from the previous chunk: correlation values ("*" = p < 0.05) between automatic metrics and human ratings; the row/column alignment is not recoverable from this chunk.] | 1707.06875#83 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 84 | [Further continuation of the flattened correlation-table fragment: correlation values ("*" = p < 0.05) between automatic metrics and human ratings; the row/column alignment is not recoverable from this chunk.] | 1707.06875#84 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 86 | [Rotated and reversed fragment of Table 12 (accuracy of metrics at predicting relative human ratings, "*" = p < 0.05); recoverable column labels: rand, TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM. The individual accuracy values are not reliably recoverable from this chunk.] | 1707.06875#86 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 87 | [Continuation of the rotated Table 12 fragment, including the "quant." row of accuracy values; the individual values are not reliably recoverable from this chunk.] | 1707.06875#87 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 88 | [Further continuation of the rotated Table 12 fragment (accuracy values); the individual values are not reliably recoverable from this chunk.] | 1707.06875#88 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 89 | [End of the rotated Table 12 fragment.] Table 12: Accuracy of metrics predicting relative human ratings, with "*" denoting statistical significance (p < 0.05). | 1707.06875#89 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
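The recovered caption above refers to the accuracy of metrics at predicting relative human ratings, i.e., how often a metric orders a pair of outputs the same way the human ratings do. A hedged sketch of such a pairwise-agreement computation on synthetic scores:

```python
# Pairwise "relative rating" accuracy: for every pair of outputs, does the metric
# order them the same way as the human rating? Toy data, not paper data.
import itertools
import numpy as np

rng = np.random.default_rng(1)
human = rng.normal(size=100)                        # human ratings per output
metric = human + rng.normal(scale=2.0, size=100)    # noisy automatic metric

agree = total = 0
for i, j in itertools.combinations(range(len(human)), 2):
    if human[i] == human[j]:
        continue  # skip ties in the human ranking
    total += 1
    agree += (human[i] > human[j]) == (metric[i] > metric[j])

print(f"pairwise accuracy: {agree / total:.1%} over {total} pairs")
```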
1707.06875 | 91 | [Flattened table fragment: correlations ("*" = p < 0.05) between automatic metrics (TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM) and human ratings, split into "Bad" versus "Good and average" output groups for informativeness and naturalness; the row/column alignment is not recoverable from this chunk.] | 1707.06875#91 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 93 | [Flattened table fragment: correlations ("*" = p < 0.05) between automatic metrics (TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM) plus textual features (cpw, len, wps, sps, spw, pol, ppw, msp, prs) and human ratings of informativeness, naturalness and quality, split into "Informative" versus "Not informative" output groups; the row/column alignment is not recoverable from this chunk.] | 1707.06875#93 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 94 | [Continuation of the flattened correlation-table fragment from the previous chunk ("*" = p < 0.05); the row/column alignment is not recoverable from this chunk.] | 1707.06875#94 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06347 | 0 | arXiv:1707.06347v2 [cs.LG] 28 Aug 2017
# Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
OpenAI
{joschu, filip, prafulla, alec, oleg}@openai.com
# Abstract
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
# 1 Introduction | 1707.06347#0 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 0 | arXiv:1707.06658v4 [cs.LG] 29 Nov 2017
# RAIL: Risk-Averse Imitation Learning
# Anirban Santara* IIT Kharagpur [email protected]
# Abhishek Naik* Balaraman Ravindran IIT Madras {anaik,ravi}@cse.iitm.ac.in
# Dipankar Das Dheevatsa Mudigere Sasikanth Avancha Bharat Kaul
# Parallel Computing Lab - Intel Labs, India {dipankar.das,dheevatsa.mudigere,sasikanth.avancha,bharat.kaul}@intel.com
# Abstract | 1707.06658#0 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 1 | We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and | 1707.06342#1 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
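The ThiNet abstract above states the key idea: decide which filters to prune using statistics computed from the next layer rather than the current one. Below is a simplified, hedged sketch of that idea, greedy channel selection that minimizes the next layer's reconstruction error on sampled responses; it is an illustration only, not the paper's exact algorithm:

```python
# Simplified sketch of next-layer-driven filter selection. Each column of X holds one
# input channel's contribution to a sampled next-layer response; y is the full response.
# Channels whose removal hurts the reconstruction least are pruned first.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 16
scale = np.linspace(0.1, 2.0, n_channels)               # some channels matter more
X = rng.normal(size=(n_samples, n_channels)) * scale    # per-channel contributions
y = X.sum(axis=1)                                       # full next-layer response

def greedy_prune(X, y, n_remove):
    removed = []
    for _ in range(n_remove):
        best_c, best_err = None, np.inf
        for c in range(X.shape[1]):
            if c in removed:
                continue
            keep = [k for k in range(X.shape[1]) if k not in removed + [c]]
            err = np.mean((y - X[:, keep].sum(axis=1)) ** 2)  # reconstruction error
            if err < best_err:
                best_c, best_err = c, err
        removed.append(best_c)
    return removed

print("channels to prune first:", greedy_prune(X, y, n_remove=8))
```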
1707.06347 | 1 | # 1 Introduction
In recent years, several different approaches have been proposed for reinforcement learning with neural network function approximators. The leading contenders are deep Q-learning [Mni+15], "vanilla" policy gradient methods [Mni+16], and trust region / natural policy gradient methods [Sch+15b]. However, there is room for improvement in developing a method that is scalable (to large models and parallel implementations), data efficient, and robust (i.e., successful on a variety of problems without hyperparameter tuning). Q-learning (with function approximation) fails on many simple problems¹ and is poorly understood, vanilla policy gradient methods have poor data efficiency and robustness; and trust region policy optimization (TRPO) is relatively complicated, and is not compatible with architectures that include noise (such as dropout) or parameter sharing (between the policy and value function, or with auxiliary tasks).
This paper seeks to improve the current state of affairs by introducing an algorithm that attains the data efficiency and reliable performance of TRPO, while using only first-order optimization. We propose a novel objective with clipped probability ratios, which forms a pessimistic estimate (i.e., lower bound) of the performance of the policy. To optimize policies, we alternate between sampling data from the policy and performing several epochs of optimization on the sampled data. | 1707.06347#1 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 1 | # Abstract
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
# Introduction | 1707.06658#1 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
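The RAIL abstract above quantifies tail risk with the Conditional-Value-at-Risk (CVaR) of trajectory costs. A minimal sketch of the empirical VaR/CVaR computation, assuming a sample of per-trajectory costs (the cost array below is synthetic):

```python
# Empirical Value-at-Risk and Conditional-Value-at-Risk of trajectory costs, the
# tail-risk quantities the RAIL abstract refers to. Toy costs, not benchmark data.
import numpy as np

def var_cvar(costs, alpha=0.9):
    """VaR_alpha = alpha-quantile of cost; CVaR_alpha = mean cost beyond that quantile."""
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)
    return var, costs[costs >= var].mean()

rng = np.random.default_rng(0)
costs = rng.gumbel(loc=-3000.0, scale=400.0, size=250)   # right-skewed, heavy-tailed
var, cvar = var_cvar(costs, alpha=0.9)
print(f"VaR_0.9 = {var:.1f}, CVaR_0.9 = {cvar:.1f}")
```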
1707.06347 | 2 | Our experiments compare the performance of various different versions of the surrogate objective, and find that the version with the clipped probability ratios performs best. We also compare PPO to several previous algorithms from the literature. On continuous control tasks, it performs better than the algorithms we compare against. On Atari, it performs significantly better (in terms of sample complexity) than A2C and similarly to ACER though it is much simpler.
¹While DQN works well on game environments like the Arcade Learning Environment [Bel+15] with discrete action spaces, it has not been demonstrated to perform well on continuous control benchmarks such as those in OpenAI Gym [Bro+16] and described by Duan et al. [Dua+16].
# 2 Background: Policy Optimization
# 2.1 Policy Gradient Methods
Policy gradient methods work by computing an estimator of the policy gradient and plugging it into a stochastic gradient ascent algorithm. The most commonly used gradient estimator has the form
$\hat{g} = \hat{\mathbb{E}}_t\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right]$   (1) | 1707.06347#2 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
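Equation (1) above is the standard Monte-Carlo policy-gradient estimator: average the advantage-weighted score function over a sampled batch. A runnable sketch for a linear-softmax policy on toy data (not the paper's implementation):

```python
# Monte-Carlo policy-gradient estimate, Eq. (1): mean of grad log pi(a|s) * advantage.
# Linear-softmax policy over discrete actions; toy batch only.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 256, 4, 3                        # batch size, state dim, number of actions
theta = rng.normal(size=(d, k))            # policy parameters
states = rng.normal(size=(n, d))
actions = rng.integers(0, k, size=n)
advantages = rng.normal(size=n)            # stand-in for any advantage estimator

logits = states @ theta
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# grad_theta log pi(a|s) for a softmax policy: outer product s * (one_hot(a) - pi(.|s))
one_hot = np.eye(k)[actions]
grad_log_pi = states[:, :, None] * (one_hot - probs)[:, None, :]   # shape (n, d, k)

g_hat = (grad_log_pi * advantages[:, None, None]).mean(axis=0)     # Eq. (1)
print("policy-gradient estimate shape:", g_hat.shape)
```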
1707.06658 | 2 | Reinforcement learning (RL) [Sutton and Barto, 1998] is used to learn an effective policy of choosing actions in order to achieve a specified goal in an environment. The goal is communicated to the agent through a scalar cost and the agent learns a policy that minimizes the expected total cost incurred over a trajectory. RL algorithms, along with efficient function approximators like deep neural networks, have achieved human-level or beyond human-level performance at many challenging planning tasks like continuous-control [Lillicrap et al., 2015, Schulman et al., 2015] and game-playing [Silver et al., 2016, Mnih et al., 2015]. In classical RL, the cost function is handcrafted based on heuristic assumptions about the goal and the environment. This is challenging in most real-world applications and also prone to subjectivity induced bias. Imitation learning or Learning from Demonstration (LfD) [Argall et al., 2009, Schaal, 1997, Atkeson and Schaal, 1997, Abbeel and Ng, 2011, 2004, Ng et al., 2000] addresses this challenge by providing methods of learning policies through imitation of an expert's behavior | 1707.06658#2 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 3 | # 1. Introduction
In the past few years, we have witnessed a rapid development of deep neural networks in the field of computer vision, from basic image classification tasks such as the ImageNet recognition challenge [18, 28, 11], to some more advanced applications, e.g., object detection [7], semantic segmentation [24], image captioning [16] and many others. Deep neural networks have achieved state-of-the-art performance in these fields compared with traditional methods based on manually designed visual features.
nario means a computing task must be accomplished with limited resource supply, such as computing time, storage space, battery power, etc. One of the main issues of deep neural networks is their huge computational cost and storage overhead, which constitute a serious challenge for a mobile device. For instance, the VGG-16 model [28] has 138.34 million parameters, taking up more than 500MB storage space,¹ and needs 30.94 billion floating-point operations (FLOPs) to classify a single image. Such a cumbersome model can easily exceed the computing limit of small devices. Thus, network compression has drawn a significant amount of interest from both academia and industry. | 1707.06342#3 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
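The VGG-16 figures quoted above (138.34 million parameters, more than 500MB of storage) are consistent with 32-bit weights under the footnote's convention that 1 MB = 2^20 bytes; a quick arithmetic check:

```python
# Consistency check of the VGG-16 storage figure, assuming 4-byte (32-bit) weights
# and the footnote's convention 1 MB = 2**20 bytes.
params = 138.34e6                    # reported parameter count
size_mb = params * 4 / 2**20         # bytes -> MB
print(f"~{size_mb:.0f} MB")          # ~528 MB, i.e. "more than 500MB"
```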
1707.06347 | 3 | $\hat{g} = \hat{\mathbb{E}}_t\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right]$   (1)
where $\pi_\theta$ is a stochastic policy and $\hat{A}_t$ is an estimator of the advantage function at timestep $t$. Here, the expectation $\hat{\mathbb{E}}_t[\ldots]$ indicates the empirical average over a finite batch of samples, in an algorithm that alternates between sampling and optimization. Implementations that use automatic differentiation software work by constructing an objective function whose gradient is the policy gradient estimator; the estimator $\hat{g}$ is obtained by differentiating the objective
$L^{PG}(\theta) = \hat{\mathbb{E}}_t\left[\log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right].$   (2)
While it is appealing to perform multiple steps of optimization on this loss $L^{PG}$ using the same trajectory, doing so is not well-justified, and empirically it often leads to destructively large policy updates (see Section 6.1; results are not shown but were similar or worse than the "no clipping or penalty" setting).
# 2.2 Trust Region Methods
In TRPO [Sch+15b], an objective function (the "surrogate" objective) is maximized subject to a constraint on the size of the policy update. Specifically,
$\underset{\theta}{\text{maximize}} \quad \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t\right]$   (3)
$\text{subject to} \quad \hat{\mathbb{E}}_t\left[\mathrm{KL}\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right] \le \delta.$   (4) | 1707.06347#3 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
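Equations (3)-(4) above maximize the probability-ratio surrogate subject to a bound on the expected KL divergence from the old policy. The sketch below only evaluates those two quantities for categorical action distributions on toy data; TRPO's actual constrained optimizer (conjugate gradient with linear/quadratic approximations) is not shown:

```python
# Evaluate the TRPO-style quantities of Eqs. (3)-(4) for categorical policies:
# the ratio surrogate E[pi/pi_old * A] and the mean KL[pi_old || pi]. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
n, k = 128, 4
pi_old = rng.dirichlet(np.ones(k), size=n)      # old action distributions per state
pi_new = rng.dirichlet(np.ones(k), size=n)      # candidate new distributions
actions = np.array([rng.choice(k, p=p) for p in pi_old])
adv = rng.normal(size=n)

ratio = pi_new[np.arange(n), actions] / pi_old[np.arange(n), actions]
surrogate = np.mean(ratio * adv)                                   # Eq. (3) objective
kl = np.mean(np.sum(pi_old * np.log(pi_old / pi_new), axis=1))     # Eq. (4) constraint
delta = 0.01
print(f"surrogate = {surrogate:.3f}, mean KL = {kl:.3f}, feasible: {kl <= delta}")
```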
1707.06658 | 3 | Ng, 2011, 2004, Ng et al., 2000] addresses this challenge by providing methods of learning policies through imitation of an expert's behavior without the need of a handcrafted cost function. In this paper we study the reliability of existing imitation learning algorithms when it comes to learning solely from a fixed set of trajectories demonstrated by an expert with no interaction between the agent and the expert during training. | 1707.06658#3 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 4 | Pruning is one of the most popular methods to reduce network complexity, which has been widely studied in the model compression community. In the 1990s, LeCun et al. [20] observed that several unimportant weights can be removed from a trained network with negligible loss in accuracy. A similar strategy was also explored in [2]. This process resembles the biological phenomena in mammalian brain, where the number of neuron synapses reaches its peak in early childhood, followed by gradual pruning during its development. However, these methods are mainly based on the second derivative, thus are not applicable for today's deep models due to expensive memory and computation costs. Recently, Han et al. [10] introduced a simple pruning strategy: all connections with weights below a threshold are removed, followed by fine-tuning to recover accuracy. This iterative procedure is performed several times, generating a very sparse model. However, such a non-structured sparse model cannot be supported by off-the-shelf libraries, thus specialized hardware and software are needed for efficient inference, which is difficult and expensive in | 1707.06342#4 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
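The paragraph above describes the magnitude-based pruning strategy of Han et al.: remove all connections whose weights fall below a threshold, then fine-tune. A minimal sketch of the thresholding step (fine-tuning omitted; the layer shape is arbitrary):

```python
# Magnitude-based weight pruning as described above: zero out all connections whose
# absolute weight falls below a threshold. The fine-tuning step is omitted.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 128))

threshold = np.quantile(np.abs(weights), 0.9)   # here: keep only the largest 10%
mask = np.abs(weights) >= threshold
pruned = weights * mask
print(f"sparsity: {1 - mask.mean():.1%}")
```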
1707.06347 | 4 | $\text{subject to} \quad \hat{\mathbb{E}}_t\left[\mathrm{KL}\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right] \le \delta.$   (4)
Here, $\theta_{\text{old}}$ is the vector of policy parameters before the update. This problem can efficiently be approximately solved using the conjugate gradient algorithm, after making a linear approximation to the objective and a quadratic approximation to the constraint.
The theory justifying TRPO actually suggests using a penalty instead of a constraint, i.e., solving the unconstrained optimization problem
$\underset{\theta}{\text{maximize}} \quad \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t - \beta\, \mathrm{KL}\left[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right]$   (5) | 1707.06347#4 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
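Equation (5) above replaces TRPO's hard KL constraint with a penalty term subtracted from the surrogate. The sketch below merely evaluates that penalized objective for toy categorical policies with a fixed β; choosing β well is exactly the difficulty discussed in a later chunk:

```python
# KL-penalized surrogate of Eq. (5): ratio * advantage minus beta * KL[pi_old || pi],
# evaluated for toy categorical policies; not an optimizer, just the objective value.
import numpy as np

rng = np.random.default_rng(1)
n, k, beta = 128, 4, 1.0
pi_old = rng.dirichlet(np.ones(k), size=n)
pi_new = rng.dirichlet(np.ones(k), size=n)
actions = np.array([rng.choice(k, p=p) for p in pi_old])
adv = rng.normal(size=n)

ratio = pi_new[np.arange(n), actions] / pi_old[np.arange(n), actions]
kl = np.sum(pi_old * np.log(pi_old / pi_new), axis=1)
penalized_objective = np.mean(ratio * adv - beta * kl)     # Eq. (5)
print(f"penalized objective: {penalized_objective:.3f}")
```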
1707.06658 | 4 | *Authors contributed equally as a part of their internship at Parallel Computing Lab - Intel Labs, India.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 (plot residue removed): histograms of trajectory costs for the expert and the GAIL agent on Hopper-v1 and Humanoid-v1, with inset views of the tails; the full caption appears in a later chunk of this paper.] | 1707.06658#4 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 5 | libraries, thus specialized hardware and software are needed for efficient inference, which is difficult and expensive in real-world applications. On the other hand, the non-structured random connectivity ignores cache and memory access issues. As indicated in [32], due to the poor cache locality and jumping memory access caused by random connectivity, the practical acceleration is very limited (sometimes it even slows down), even though the actual sparsity is relatively high. | 1707.06342#5 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 5 | for some coefficient $\beta$. This follows from the fact that a certain surrogate objective (which computes the max KL over states instead of the mean) forms a lower bound (i.e., a pessimistic bound) on the performance of the policy $\pi$. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of $\beta$ that performs well across different problems, or even within a single problem, where the characteristics change over the course of learning. Hence, to achieve our goal of a first-order algorithm that emulates the monotonic improvement of TRPO, experiments show that it is not sufficient to simply choose a fixed penalty coefficient $\beta$ and optimize the penalized objective Equation (5) with SGD; additional modifications are required.
# 3 Clipped Surrogate Objective
Let $r_t(\theta)$ denote the probability ratio $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$, so $r_t(\theta_{\text{old}}) = 1$. TRPO maximizes a "surrogate" objective
$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t\right] = \hat{\mathbb{E}}_t\left[r_t(\theta)\, \hat{A}_t\right].$   (6) | 1707.06347#5 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
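Equation (6) above defines the unclipped surrogate $L^{CPI} = \hat{\mathbb{E}}_t[r_t(\theta)\hat{A}_t]$; the section it opens goes on to clip the ratio, giving PPO's well-known clipped objective $\hat{\mathbb{E}}_t[\min(r_t \hat{A}_t, \mathrm{clip}(r_t, 1-\epsilon, 1+\epsilon)\hat{A}_t)]$. A small sketch comparing the two on toy ratios and advantages:

```python
# The unclipped surrogate L^CPI of Eq. (6) versus PPO's clipped surrogate, which
# takes min(r*A, clip(r, 1-eps, 1+eps)*A) per sample. Toy ratios and advantages.
import numpy as np

rng = np.random.default_rng(2)
eps = 0.2
r = np.exp(rng.normal(scale=0.5, size=1000))    # probability ratios r_t(theta)
adv = rng.normal(size=1000)                     # advantage estimates

l_cpi = np.mean(r * adv)                                           # Eq. (6)
l_clip = np.mean(np.minimum(r * adv, np.clip(r, 1 - eps, 1 + eps) * adv))
print(f"L_CPI = {l_cpi:.3f}, L_CLIP = {l_clip:.3f}")                # L_CLIP <= L_CPI
```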
1707.06658 | 5 | Figure 1: Histograms of the costs of 250 trajectories generated by the expert and GAIL agents at high-dimensional continuous control tasks, Hopper-v1 and Humanoid-v1, from OpenAI Gym. The inset diagrams show zoomed-in views of the tails of these distributions (the region beyond 2σ of the mean). We observe that the GAIL agents produce heavier tails than the expert, indicating that GAIL is more prone to generating high-cost trajectories. | 1707.06658#5 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 6 | In spite of its great success, a typical deep model is hard to deploy on resource constrained devices, e.g., mobile phones or embedded gadgets. (Footnote: 1 MB = 2^20 ≈ 1.048 million bytes, and 1 million is 10^6.)
To avoid the limitations of non-structured pruning mentioned above, we suggest that filter level pruning would be a better choice. The benefits of removing a whole unimportant filter are considerable: 1) The pruned model has no difference in network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. 2) Memory footprint would be reduced dramatically. Such memory reduction comes not only from the model parameters themselves, but also from the intermediate activations, which are rarely considered in previous studies. 3) Since the pruned network structure has not been damaged, it can be further compressed and accelerated by other compression methods, e.g., the parameter quantization approach [33]. 4) More vision tasks, such as object detection or semantic segmentation, can be accelerated greatly using the pruned model. | 1707.06342#6 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 6 | $L^{CPI}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\hat{A}_t\right] = \hat{\mathbb{E}}_t\left[r_t(\theta)\hat{A}_t\right]$   (6)
The superscript CPI refers to conservative policy iteration [KL02], where this objective was proposed. Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we now consider how to modify the objective, to penalize changes to the policy that move $r_t(\theta)$ away from 1.
The main objective we propose is the following:
$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\big)\right]$   (7) | 1707.06347#6 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
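A minimal PyTorch sketch of the clipped surrogate objective in Eq. (7) of record 1707.06347#6 above, negated so it can be minimized with a standard optimizer. The function name and tensor arguments are illustrative assumptions, not code from the paper.

```python
import torch

def clipped_surrogate_loss(new_logprob, old_logprob, advantage, eps=0.2):
    """Negative of L^CLIP in Eq. (7); old_logprob is treated as a constant
    (it should be computed under torch.no_grad() or detached)."""
    ratio = torch.exp(new_logprob - old_logprob)              # r_t(theta)
    unclipped = ratio * advantage                              # r_t(theta) * A_t
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```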
1707.06658 | 6 | Imitation learning algorithms fall into two broad categories. The first category, known as Behavioral Cloning [Pomerleau, 1989, Bojarski et al., 2016, 2017], uses supervised learning to fit a policy function to the state-action pairs from expert-demonstrated trajectories. Despite its simplicity, Behavioral Cloning fails to work well when only a limited amount of data is available. These algorithms assume that observations are i.i.d. and learn to fit single time-step decisions. In contrast, in sequential decision making problems where predicted actions affect the future observations (e.g. driving), the i.i.d. assumption is violated. As a result, these algorithms suffer from the problem of compounding error due to covariate shift [Ross and Bagnell, 2010, Ross et al., 2011]. Approaches to ameliorate the issue of compounding error like SMILe [Ross and Bagnell, 2010], SEARN [Daumé et al., 2009], CPI [Kakade and Langford, 2002] suffer from instability in practical applications [Ross et al., 2011] while DAGGER [Ross et al., 2011] and | 1707.06658#6 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
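A short sketch of the supervised-learning view of Behavioral Cloning described in record 1707.06658#6 above: fit a policy network to expert state-action pairs. The network sizes (11 observation dimensions, 3 action dimensions, roughly Hopper-v1) and the MSE objective are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical continuous-control setup.
policy = nn.Sequential(nn.Linear(11, 64), nn.Tanh(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def behavioral_cloning_step(states, expert_actions):
    """One supervised step fitting the policy to expert state-action pairs."""
    pred_actions = policy(states)
    loss = nn.functional.mse_loss(pred_actions, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```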
1707.06342 | 7 | In this paper, we propose a unified framework, namely ThiNet (stands for "Thin Net"), to prune the unimportant filters to simultaneously accelerate and compress CNN models in both training and test stages with minor performance degradation. With our pruned network, some important transfer tasks such as object detection or fine-grained recognition can run much faster (both training and inference), especially in small devices. Our main insight is that we establish a well-defined optimization problem, which shows that whether a filter can be pruned depends on the outputs of its next layer, not its own layer. This novel finding differentiates ThiNet from existing methods which prune filters using statistics calculated from their own layer. | 1707.06342#7 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 7 | $L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\big)\right]$   (7)
where epsilon is a hyperparameter, say, $\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{CPI}$. The second term, $\mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t$, modifies the surrogate objective by clipping the probability ratio, which removes the incentive for moving $r_t$ outside of the interval $[1-\epsilon, 1+\epsilon]$. Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. Note that $L^{CLIP}(\theta) = L^{CPI}(\theta)$ to first order around $\theta_{old}$ (i.e., where $r = 1$), however, they become different as $\theta$ moves away from $\theta_{old}$. Figure 1 plots a single term (i.e., a single $t$) in $L^{CLIP}$; note that the probability ratio $r$ is clipped at $1-\epsilon$ or $1+\epsilon$ depending on whether the advantage is positive or negative. | 1707.06347#7 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
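A tiny NumPy check of the clipping behavior described in record 1707.06347#7 above (one term of Eq. (7), with $\epsilon = 0.2$): for a positive advantage the term is capped once the ratio exceeds $1+\epsilon$, while for a negative advantage the pessimistic min keeps the worse, unclipped value when the ratio grows and caps the term when the ratio drops below $1-\epsilon$. The helper name is ours, not the paper's.

```python
import numpy as np

def clip_term(ratio, advantage, eps=0.2):
    """Single-timestep term of L^CLIP: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage)

print(clip_term(1.5, advantage=+1.0))   # 1.2, not 1.5: capped at (1+eps)*A
print(clip_term(1.5, advantage=-1.0))   # -1.5: the worse, unclipped value is kept
print(clip_term(0.5, advantage=-1.0))   # -0.8: capped at (1-eps)*A
```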
1707.06658 | 7 | and Langford, 2002] suffer from instability in practical applications [Ross et al., 2011] while DAGGER [Ross et al., 2011] and AGGREVATE [Ross and Bagnell, 2014] require the agent to query the expert during training which is not allowed in our setting of learning from a fixed set of expert demonstrations. Another drawback of Behavioral Cloning is that it does not allow the agent to explore alternate policies for achieving the same objective that might be efficient in some sense other than what the expert cared for. | 1707.06658#7 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 8 | We then compare the proposed method with other state-of-the-art criteria. Experimental results show that our approach is significantly better than existing methods, especially when the compression rate is relatively high. We evaluate ThiNet on the large-scale ImageNet classification task. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on the VGG-16 model [28], with only 0.52% top-5 accuracy drop. The ResNet-50 model [11] has less redundancy compared with classic CNN models. ThiNet can still reduce 2.26× FLOPs and 2.06× parameters with roughly 1% top-5 accuracy drop. To explore the limits of ThiNet, we show that the original VGG-16 model can even be pruned into 5.05MB, but still preserving AlexNet level accuracy.
In addition, we also explore the performance of ThiNet in a more practical task, i.e., transfer learning on small-scale datasets. Experimental results demonstrate the excellent effectiveness of ThiNet, which achieves the best trade-off between model size and accuracy.
The key advantages and major contributions of this paper can be summarized as follows. | 1707.06342#8 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 8 | [Figure 1 panels: a single term of $L^{CLIP}$ as a function of the probability ratio $r$, for $\hat{A} > 0$ (left) and $\hat{A} < 0$ (right); see the caption below.]
Figure 1: Plots showing one term (i.e., a single timestep) of the surrogate function $L^{CLIP}$ as a function of the probability ratio $r$, for positive advantages (left) and negative advantages (right). The red circle on each plot shows the starting point for the optimization, i.e., $r = 1$. Note that $L^{CLIP}$ sums many of these terms.
Figure 2 provides another source of intuition about the surrogate objective $L^{CLIP}$. It shows how several objectives vary as we interpolate along the policy update direction, obtained by proximal policy optimization (the algorithm we will introduce shortly) on a continuous control problem. We can see that $L^{CLIP}$ is a lower bound on $L^{CPI}$, with a penalty for having too large of a policy update.
[Figure 2 legend and axes: curves for $\hat{\mathbb{E}}_t[\mathrm{KL}]$, $L^{CPI} = \hat{\mathbb{E}}_t[r_t\hat{A}_t]$, $\hat{\mathbb{E}}_t[\mathrm{clip}(r_t, 1-\epsilon, 1+\epsilon)\hat{A}_t]$, and $L^{CLIP} = \hat{\mathbb{E}}_t[\min(r_t\hat{A}_t,\ \mathrm{clip}(r_t, 1-\epsilon, 1+\epsilon)\hat{A}_t)]$, plotted against the linear interpolation factor.] | 1707.06347#8 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 8 | The second category of algorithms is known as Inverse Reinforcement Learning (IRL) (Russell [1998], Ng et al. [2000], Abbeel and Ng [2011]). It attempts to uncover the underlying reward function that the expert is trying to maximize from a set of expert-demonstrated trajectories. This reward function succinctly encodes the expert's behavior and can be used by an agent to learn a policy through an RL algorithm. The method of learning policies through RL after IRL is known as Apprenticeship Learning (Abbeel and Ng [2004]). IRL algorithms find reward functions that prioritize entire trajectories over others. Unlike behavioral cloning, they do not fit single time-step decisions, and hence they do not suffer from the issue of compounding error. However, IRL algorithms are indirect because they learn a reward function that explains expert behavior but do not tell the learner how to act directly (Ho and Ermon [2016]). The job of learning an actionable policy is left to RL algorithms. Moreover, IRL algorithms are computationally expensive and have scalability issues in large environments (Finn et al. [2016], Levine and Koltun [2012]).
| 1707.06658#8 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 9 | The key advantages and major contributions of this paper can be summarized as follows.
• We propose a simple yet effective framework, namely ThiNet, to simultaneously accelerate and compress CNN models. ThiNet shows significant improvements over existing methods on numerous tasks.
• We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters using statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods.
• In experiments, the VGG-16 model can be pruned into 5.05MB, showing promising generalization ability on transfer learning. Higher accuracy could be preserved with a more accurate model using ThiNet.
# 2. Related work
Many researchers have found that deep models suffer from heavy over-parameterization. For example, Denil et al. [4] demonstrated that a network can be efficiently reconstructed with only a small subset of its original parameters. However, this redundancy seems necessary during model training, since the highly non-convex optimization is hard to solve with current techniques [5, 13]. Hence, there is a great need to reduce model size after its training. | 1707.06342#9 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 9 | Figure 2: Surrogate objectives, as we interpolate between the initial policy parameter $\theta_{old}$, and the updated policy parameter, which we compute after one iteration of PPO. The updated policy has a KL divergence of about 0.02 from the initial policy, and this is the point at which $L^{CLIP}$ is maximal. This plot corresponds to the first policy update on the Hopper-v1 problem, using hyperparameters provided in Section 6.1.
# 4 Adaptive KL Penalty Coefficient
Another approach, which can be used as an alternative to the clipped surrogate objective, or in addition to it, is to use a penalty on KL divergence, and to adapt the penalty coefficient so that we achieve some target value of the KL divergence $d_{targ}$ each policy update. In our experiments, we found that the KL penalty performed worse than the clipped surrogate objective, however, we've included it here because it's an important baseline.
In the simplest instantiation of this algorithm, we perform the following steps in each policy update:
• Using several epochs of minibatch SGD, optimize the KL-penalized objective
$L^{KLPEN}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\hat{A}_t - \beta\,\mathrm{KL}\big[\pi_{\theta_{old}}(\cdot \mid s_t),\ \pi_\theta(\cdot \mid s_t)\big]\right]$   (8) | 1707.06347#9 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
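A hedged PyTorch sketch of the KL-penalized objective in Eq. (8) of record 1707.06347#9 above (negated for minimization). It assumes `new_dist` and `old_dist` are torch.distributions objects whose `log_prob` and KL are already reduced over action dimensions (e.g. `Independent(Normal(...), 1)`); the function name and arguments are ours.

```python
import torch
import torch.distributions as D

def kl_penalized_loss(new_dist, old_dist, actions, advantages, beta):
    """Negative of L^KLPEN in Eq. (8); old_dist's parameters are held fixed."""
    ratio = torch.exp(new_dist.log_prob(actions) - old_dist.log_prob(actions))
    kl = D.kl_divergence(old_dist, new_dist)      # KL[pi_old(.|s_t), pi_theta(.|s_t)]
    return -(ratio * advantages - beta * kl).mean()
```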
1707.06658 | 9 | The recently proposed Generative Adversarial Imitation Learning (GAIL) algorithm [Ho and Ermon, 2016] presents a novel mathematical framework in which the agent learns to act by directly extracting a policy from expert-demonstrated trajectories, as if it were obtained by RL following IRL. The authors show that unlike Behavioral Cloning, this method is not prone to the issue of compounding error and it is also scalable to large environments. Currently, GAIL provides state-of-the-art performance at several benchmark control tasks, including those in Table 1. | 1707.06658#9 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 10 | Some methods have been proposed to pursue a balance between model size and accuracy. Han et al. [10] proposed an iterative pruning method to remove the redundancy in deep models. Their main insight is that small-weight connectivity below a threshold should be discarded. In practice, this can be aided by applying $\ell_1$ or $\ell_2$ regularization to push connectivity values to become smaller. The major weakness of this strategy is the loss of universality and flexibility, and it thus seems to be less practical in real applications.
In order to avoid these weaknesses, some attention has been focused on group-wise sparsity. Lebedev and Lempitsky [19] explored group-sparse convolution by introducing the group-sparsity regularization to the loss function, so that some entire groups of weights shrink to zero and can thus be removed. Similarly, Wen et al. [32] proposed the Structured Sparsity Learning (SSL) method to regularize filter, channel, filter shape and depth structures. In spite of their success, the original network structure has been destroyed. As a result, some dedicated libraries are needed for an efficient inference speed-up. | 1707.06342#10 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
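A small sketch of the group-sparsity idea reviewed in record 1707.06342#10 above (a group-lasso style penalty added to the training loss so that whole filters are driven toward zero). This illustrates the regularizers of [19, 32] in spirit only; the grouping choice and the coefficient `lam` are illustrative assumptions.

```python
import torch

def group_sparsity_penalty(conv_weight, lam=1e-4):
    """Group-lasso penalty over output filters of a conv layer.
    conv_weight has shape (out_channels, in_channels, k, k); each output
    filter is treated as one group."""
    group_norms = conv_weight.pow(2).sum(dim=(1, 2, 3)).sqrt()
    return lam * group_norms.sum()
```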
1707.06347 | 10 | • Compute $d = \hat{\mathbb{E}}_t\big[\mathrm{KL}\big[\pi_{\theta_{old}}(\cdot \mid s_t),\ \pi_\theta(\cdot \mid s_t)\big]\big]$
- If $d < d_{targ}/1.5$, $\beta \leftarrow \beta/2$
- If $d > d_{targ} \times 1.5$, $\beta \leftarrow \beta \times 2$
The updated $\beta$ is used for the next policy update. With this scheme, we occasionally see policy updates where the KL divergence is significantly different from $d_{targ}$, however, these are rare, and $\beta$ quickly adjusts. The parameters 1.5 and 2 above are chosen heuristically, but the algorithm is not very sensitive to them. The initial value of $\beta$ is another hyperparameter but is not important in practice because the algorithm quickly adjusts it.
# 5 Algorithm
The surrogate losses from the previous sections can be computed and differentiated with a minor change to a typical policy gradient implementation. For implementations that use automatic differentiation, one simply constructs the loss $L^{CLIP}$ or $L^{KLPEN}$ instead of $L^{PG}$, and one performs multiple steps of stochastic gradient ascent on this objective.
Most techniques for computing variance-reduced advantage-function estimators make use of a learned state-value function V(s); for example, generalized advantage estimation [Sch+15a], or the | 1707.06347#10 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
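A minimal sketch of the adaptive penalty-coefficient rule stated in record 1707.06347#10 above: halve or double $\beta$ when the measured KL leaves the band around the target. The function name is ours; the 1.5 and 2 constants are the heuristic values from the text.

```python
def update_kl_penalty(beta, kl, kl_target):
    """Adapt beta after a policy update so the KL tracks kl_target."""
    if kl < kl_target / 1.5:
        beta = beta / 2.0
    elif kl > kl_target * 1.5:
        beta = beta * 2.0
    return beta
```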
1707.06658 | 10 | Risk sensitivity is integral to human learning [Nagengast et al., 2010, Niv et al., 2012], and risk-sensitive decision-making problems, in the context of MDPs, have been investigated in various fields, e.g., in finance [Ruszczyński, 2010], operations research [Howard and Matheson, 1972, Borkar, 2002], machine learning [Heger, 1994, Mihatsch and Neuneier, 2002] and robotics [Shalev-Shwartz et al., 2016, 2017, Abbeel et al., 2007, Rajeswaran et al., 2016]. [García and Fernández, 2015] give a comprehensive overview of different risk-sensitive RL algorithms. They fall in two broad categories. The first category includes methods that constrain the agent to safe states during exploration while the second modifies the optimality criterion of the agent to embed a term for minimizing risk. Studies on risk-minimization are rather scarce in the imitation learning literature. [Majumdar et al., 2017] take inspiration from studies like [Glimcher and Fehr, 2013, Shen et al., 2014, Hsu et | 1707.06658#10 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 11 | In line with our work, some filter level pruning strategies have been explored too. The core is to evaluate neuron importance, which has been widely studied in the community [34, 27, 21, 14, 23]. The simplest possible method is based on the magnitude of weights. Li et al. [21] measured the importance of each filter by calculating its absolute weight sum. Another practical criterion is to measure the sparsity of activations after the ReLU function. Hu et al. [14] believed that if most outputs of some neurons are zero, these activations should be expected to be redundant. They compute the Average Percentage of Zeros (APoZ) of each filter as its importance score. These two criteria are simple and straightforward, but not directly related to the final loss. Inspired by this observation, Molchanov et al. [23] adopted Taylor expansion to approximate the influence on the loss function induced by removing each filter.
[Figure 1 schematic labels: input of layer i, filters of layer i, input of layer i+1, filters of layer i+1, input of layer i+2; Original Model, prune weak filters, Pruned Model, fine-tune, Fine-tuned Model.] | 1707.06342#11 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
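A NumPy sketch of the two existing importance criteria reviewed in record 1707.06342#11 above (absolute weight sum per filter, and APoZ of post-ReLU activations per channel). These illustrate the criteria of Li et al. [21] and Hu et al. [14] that the paragraph discusses, not ThiNet's own selection rule; array shapes are assumptions.

```python
import numpy as np

def weight_sum_importance(conv_weights):
    """|W| sum per filter; conv_weights has shape (out_c, in_c, k, k)."""
    return np.abs(conv_weights).sum(axis=(1, 2, 3))

def apoz(activations):
    """Average Percentage of Zeros per channel; activations (post-ReLU)
    has shape (n_samples, channels, h, w). Higher APoZ suggests a more
    redundant channel."""
    return (activations == 0).mean(axis=(0, 2, 3))
```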
1707.06347 | 11 | Most techniques for computing variance-reduced advantage-function estimators make use of a learned state-value function V(s); for example, generalized advantage estimation [Sch+15a], or the
finite-horizon estimators in [Mni+16]. If using a neural network architecture that shares parameters between the policy and value function, we must use a loss function that combines the policy surrogate and a value function error term. This objective can further be augmented by adding an entropy bonus to ensure sufficient exploration, as suggested in past work [Wil92; Mni+16]. Combining these terms, we obtain the following objective, which is (approximately) maximized each iteration:
$L_t^{CLIP+VF+S}(\theta) = \hat{\mathbb{E}}_t\left[L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2 S[\pi_\theta](s_t)\right]$   (9)
where $c_1$, $c_2$ are coefficients, $S$ denotes an entropy bonus, and $L_t^{VF}$ is a squared-error loss $(V_\theta(s_t) - V_t^{targ})^2$.
One style of policy gradient implementation, popularized in [Mni+16] and well-suited for use with recurrent neural networks, runs the policy for T timesteps (where T is much less than the episode length), and uses the collected samples for an update. This style requires an advantage estimator that does not look beyond timestep T. The estimator used by [Mni+16] is | 1707.06347#11 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
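A hedged PyTorch sketch of the combined objective in Eq. (9) of record 1707.06347#11 above, negated for minimization. The coefficient values `c1=0.5`, `c2=0.01` and the argument names are illustrative assumptions; `entropy` is assumed to be a per-sample entropy tensor of the current policy.

```python
import torch

def ppo_combined_loss(new_logprob, old_logprob, advantage,
                      value_pred, value_target, entropy,
                      eps=0.2, c1=0.5, c2=0.01):
    """Negative of L^{CLIP+VF+S} in Eq. (9)."""
    ratio = torch.exp(new_logprob - old_logprob)
    clip_obj = torch.min(ratio * advantage,
                         torch.clamp(ratio, 1 - eps, 1 + eps) * advantage)
    value_loss = (value_pred - value_target).pow(2)     # L^VF term
    objective = clip_obj - c1 * value_loss + c2 * entropy
    return -objective.mean()
```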
1707.06658 | 11 | literature. [Majumdar et al., 2017] take inspiration from studies like [Glimcher and Fehr, 2013, Shen et al., 2014, Hsu et al., 2005] on modeling risk in human decision-making and conservatively approximate the expert's risk preferences by finding an outer approximation of the risk envelope. Much of the literature on imitation learning has been developed with average-case performance at the center, overlooking tail-end events. In this work, we aim to take an inclusive and direct approach to minimizing tail risk of GAIL-learned policies at test time irrespective of the expert's risk preferences. | 1707.06658#11 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 12 | Figure 1. Illustration of ThiNet. First, we focus on the dotted box part to determine several weak channels and their corresponding filters (highlighted in yellow in the first row). These channels (and their associated filters) have little contribution to the overall performance, thus can be discarded, leading to a pruned model. Finally, the network is fine-tuned to recover its accuracy. (This figure is best viewed in color.)
Beyond pruning, there are also other strategies to obtain small CNN models. One popular approach is parameter quantization [8, 3, 33, 9]. Low-rank approximation is also widely studied [5, 29]. Note that these methods are complementary to filter pruning and can be combined with ThiNet for further improvement.
# 3. ThiNet
In this section, we will give a comprehensive introduction to our filter level pruning approach: ThiNet. First, the overall framework will be presented. Next, a more detailed description of our selection algorithm will be presented. Finally, we will show our pruning strategy, which takes both efficiency and effectiveness into consideration.
# 3.1. Framework of ThiNet | 1707.06342#12 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 12 | $\hat{A}_t = -V(s_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{T-t+1} r_{T-1} + \gamma^{T-t} V(s_T)$   (10)
where $t$ specifies the time index in $[0, T]$, within a given length-$T$ trajectory segment. Generalizing this choice, we can use a truncated version of generalized advantage estimation, which reduces to Equation (10) when $\lambda = 1$:
$\hat{A}_t = \delta_t + (\gamma\lambda)\delta_{t+1} + \cdots + (\gamma\lambda)^{T-t+1}\delta_{T-1}$   (11)
where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$   (12)
A proximal policy optimization (PPO) algorithm that uses fixed-length trajectory segments is shown below. In each iteration, each of the N (parallel) actors collects T timesteps of data. Then we construct the surrogate loss on these NT timesteps of data, and optimize it with minibatch SGD (or usually for better performance, Adam [KB14]), for K epochs.
Algorithm 1 PPO, Actor-Critic Style
for iteration = 1, 2, ... do
    for actor = 1, 2, ..., N do
        Run policy $\pi_{\theta_{old}}$ in environment for T timesteps
        Compute advantage estimates $\hat{A}_1, \dots, \hat{A}_T$
    end for
    Optimize surrogate $L$ wrt $\theta$, with K epochs and minibatch size $M \le NT$
    $\theta_{old} \leftarrow \theta$
end for
# 6 Experiments
# 6.1 Comparison of Surrogate Objectives | 1707.06347#12 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
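A NumPy sketch of the truncated advantage estimator in Eqs. (11)-(12) of record 1707.06347#12 above, written as the usual backward recursion. It ignores episode-termination masking within the segment, and the function name and default $\gamma$, $\lambda$ values are illustrative assumptions.

```python
import numpy as np

def truncated_gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Advantage estimates over a length-T segment.
    `values` holds V(s_0..s_{T-1}); `last_value` is V(s_T)."""
    T = len(rewards)
    values = np.append(values, last_value)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # Eq. (12)
        gae = delta + gamma * lam * gae                          # Eq. (11), accumulated backward
        advantages[t] = gae
    return advantages
```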
1707.06658 | 12 | In order to evaluate the worst-case risk of deploying GAIL-learned policies, we studied the distributions (see Figure 1) of trajectory-costs (according to the expert's cost function) for the GAIL agents and experts at different control tasks (see Table 1). We observed that the distributions for GAIL are more heavy-tailed than the expert, where the tail corresponds to occurrences of high trajectory-costs. In order to quantify tail risk, we use Conditional-Value-at-Risk (CVaR) [Rockafellar and Uryasev, 2000]. CVaR is defined as the expected cost above a given level of confidence and is a popular and coherent tail risk measure. The heavier the tail, the higher the value of CVaR. We observe that the value of CVaR is much higher for GAIL than the experts at most of the tasks (see Table 1) which again suggests that the GAIL agents encounter high-cost trajectories more often than the experts. Since high trajectory-costs may correspond to events of catastrophic failure, GAIL agents are not reliable in risk-sensitive applications. In this work, we aim to | 1707.06658#12 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
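A simple empirical estimator of the CVaR notion used in record 1707.06658#12 above: the mean trajectory cost in the worst $(1-\alpha)$ tail of the cost distribution. This is a generic sketch of the definition in the text, not the exact estimator or $\alpha$ used by the paper.

```python
import numpy as np

def empirical_cvar(trajectory_costs, alpha=0.9):
    """Mean cost above the alpha-quantile (VaR) of the cost distribution."""
    costs = np.asarray(trajectory_costs, dtype=float)
    var = np.quantile(costs, alpha)
    tail = costs[costs >= var]
    return tail.mean()
```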
1707.06342 | 13 | # 3.1. Framework of ThiNet
Pruning is a classic method used for reducing model complexity. Although vast differences exist (such as different criteria in selecting what should be pruned), the overall framework is similar in pruning filters inside a deep neural network. It can be summarized in one sentence: evaluate the importance of each neuron, remove those unimportant ones, and fine-tune the whole network.
This framework is illustrated in Figure 1. In the next subsection, we will focus on the dotted box part to introduce our data-driven channel selection method, which determines the channels (and their associated filters) that are to be pruned away.
Given a pre-trained model, it would be pruned layer by layer with a predefined compression rate. We summarize our framework as follows:
1. Filter selection. Unlike existing methods that use layer i's statistics to guide the pruning of layer i's filters, we use layer i + 1 to guide the pruning in layer i. The key idea is: if we can use a subset of channels in layer
3 | 1707.06342#13 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
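The layer-by-layer prune-and-fine-tune loop described in this chunk can be pictured with a minimal NumPy sketch. All names here are ours, the L1-norm criterion below is only a placeholder (ThiNet's actual selection is data-driven and uses the next layer's statistics, as later sections describe), and fine_tune is a stub for brief retraining, so this is not the authors' implementation.

```python
import numpy as np

def select_channels(layer_weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Placeholder criterion: keep the filters with the largest L1 norm.
    ThiNet's real criterion is data-driven and uses the next layer's inputs."""
    scores = np.abs(layer_weights).sum(axis=(1, 2, 3))
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    return np.sort(np.argsort(scores)[::-1][:n_keep])

def prune_layer(weights, next_weights, keep_idx):
    """Drop filters of layer i and the matching input channels of layer i + 1."""
    return weights[keep_idx], next_weights[:, keep_idx]

def fine_tune(model, epochs=1):
    """Stub standing in for one or two recovery epochs after pruning a layer."""
    return model

# Toy "network": a list of conv weights shaped (out_channels, in_channels, k, k).
model = [np.random.randn(8, 3, 3, 3),
         np.random.randn(16, 8, 3, 3),
         np.random.randn(10, 16, 3, 3)]
keep_ratio = 0.5
for i in range(len(model) - 1):          # prune layer by layer
    keep = select_channels(model[i], keep_ratio)
    model[i], model[i + 1] = prune_layer(model[i], model[i + 1], keep)
    model = fine_tune(model, epochs=1)   # short recovery after each layer
model = fine_tune(model, epochs=10)      # longer fine-tuning once all layers are pruned
print([w.shape for w in model])
```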
1707.06347 | 13 | # for
# 6 Experiments
# 6.1 Comparison of Surrogate Objectives
First, we compare several different surrogate objectives under different hyperparameters. Here, we compare the surrogate objective L^CLIP to several natural variations and ablated versions.
No clipping or penalty: L_t(θ) = r_t(θ) Â_t
Clipping: L_t(θ) = min(r_t(θ) Â_t, clip(r_t(θ), 1 − ε, 1 + ε) Â_t)
KL penalty (fixed or adaptive): L_t(θ) = r_t(θ) Â_t − β KL[π_θold, π_θ]
For the KL penalty, one can either use a fixed penalty coefficient β or an adaptive coefficient as described in Section 4 using target KL value d_targ. Note that we also tried clipping in log space, but found the performance to be no better.
Because we are searching over hyperparameters for each algorithm variant, we chose a computationally cheap benchmark to test the algorithms on. Namely, we used 7 simulated robotics tasks implemented in OpenAI Gym [Bro+16], which use the MuJoCo [TET12] physics engine. We do one million timesteps of training on each one. Besides the hyperparameters used for clipping (ε) and the KL penalty (β, d_targ), which we search over, the other hyperparameters are provided in Table 3. | 1707.06347#13 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
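A minimal NumPy sketch of the three surrogate objectives listed above, evaluated on a batch of probability ratios r_t = π_θ(a|s)/π_θold(a|s) and advantage estimates Â_t. Function and variable names are ours, and the per-sample KL values are assumed to be supplied by the caller; this is not the authors' code.

```python
import numpy as np

def surrogate_no_clip(ratio, adv):
    # L_t = r_t * A_t
    return np.mean(ratio * adv)

def surrogate_clipped(ratio, adv, epsilon=0.2):
    # L_t = min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t), a pessimistic bound
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * adv
    return np.mean(np.minimum(unclipped, clipped))

def surrogate_kl_penalty(ratio, adv, kl, beta=1.0):
    # L_t = r_t * A_t - beta * KL[pi_old, pi_theta], with per-sample KL estimates
    return np.mean(ratio * adv - beta * kl)

ratio = np.array([0.8, 1.0, 1.3, 1.6])   # pi_theta / pi_theta_old
adv = np.array([1.0, -0.5, 2.0, -1.0])   # advantage estimates
kl = np.array([0.02, 0.0, 0.05, 0.12])   # per-sample KL estimates
print(surrogate_no_clip(ratio, adv),
      surrogate_clipped(ratio, adv),
      surrogate_kl_penalty(ratio, adv, kl))
```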
1707.06658 | 13 | trajectory-costs may correspond to events of catastrophic failure, GAIL agents are not reliable in risk-sensitive applications. In this work, we aim to explicitly minimize expected worst-case risk for a given confidence bound (quantified by CVaR) along with the GAIL objective, such that the learned policies are more reliable than GAIL, when deployed, while still preserving the average performance of GAIL. [Chow and Ghavamzadeh, 2014] developed policy gradient and actor-critic algorithms for mean-CVaR optimization for learning policies in the classic RL setting. However, these algorithms are not directly applicable in our setting of learning a policy from a set of expert-demonstrated trajectories. We take inspiration from this work and make the following contributions: | 1707.06658#13 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 14 | 3
(i + 1)'s input to approximate the output in layer i + 1, the other channels can be safely removed from the input of layer i + 1. Note that one channel in layer (i + 1)'s input is produced by one filter in layer i, hence we can safely prune the corresponding filter in layer i.
2. Pruning. Weak channels in layer (i + 1)'s input and their corresponding filters in layer i would be pruned away, leading to a much smaller model. Note that the pruned network has exactly the same structure but with fewer filters and channels. In other words, the original wide network is becoming much thinner. That is why we call our method "ThiNet".
3. Fine-tuning. Fine-tuning is a necessary step to recover the generalization ability damaged by filter pruning. But it will take very long for large datasets and complex models. For time-saving considerations, we fine-tune one or two epochs after the pruning of one layer. In order to get an accurate model, more additional epochs would be carried out when all layers have been pruned.
# 4. Iterate to step 1 to prune the next layer.
# 3.2. Data-driven channel selection | 1707.06342#14 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 14 | To represent the policy, we used a fully-connected MLP with two hidden layers of 64 units and tanh nonlinearities, outputting the mean of a Gaussian distribution, with variable standard deviations, following [Sch+15b; Dua+16]. We don't share parameters between the policy and value function (so coefficient c_1 is irrelevant), and we don't use an entropy bonus.
Each algorithm was run on all 7 environments, with 3 random seeds on each. We scored each run of the algorithm by computing the average total reward of the last 100 episodes. We shifted and scaled the scores for each environment so that the random policy gave a score of 0 and the best result was set to 1, and averaged over 21 runs to produce a single scalar for each algorithm setting.
The results are shown in Table 1. Note that the score is negative for the setting without clipping or penalties, because for one environment (half cheetah) it leads to a very negative score, which is worse than the initial random policy. | 1707.06347#14 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
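A minimal NumPy sketch of the policy parameterization described above: a fully-connected MLP with two tanh hidden layers of 64 units, outputting the mean of a Gaussian whose log standard deviation is a state-independent learnable vector. Initialization, dimensions, and names are our assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(obs_dim, act_dim, hidden=64):
    sizes = [obs_dim, hidden, hidden, act_dim]
    params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    log_std = np.zeros(act_dim)  # variable standard deviation, shared across states
    return params, log_std

def policy_sample(params, log_std, obs):
    x = obs
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)          # two tanh hidden layers
    W, b = params[-1]
    mean = x @ W + b                    # Gaussian mean
    std = np.exp(log_std)
    return mean + std * rng.normal(size=mean.shape)  # a ~ N(mean, std^2)

params, log_std = init_mlp(obs_dim=17, act_dim=6)
action = policy_sample(params, log_std, rng.normal(size=17))
print(action.shape)  # (6,)
```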
1707.06658 | 14 | 1. We formulate the Risk-Averse Imitation Learning (RAIL) algorithm which optimizes CVaR in addition to the original GAIL objective.
2. We evaluate RAIL at a number of benchmark control tasks and demonstrate that it obtains policies with lesser tail risk at test time than GAIL.
The rest of the paper is organized as follows. Section 2 builds the mathematical foundation of the paper by introducing essential concepts of imitation learning. Section 3 defines relevant risk-measures and describes the proposed Risk-Averse Imitation Learning algorithm. Section 4 specifies our experimental setup and Section 5 outlines the evaluation metrics. Finally, Section 6 presents the results of our experiments comparing RAIL with GAIL followed by a discussion of the same and Section 7 concludes the paper with scope of future work.
# 2 Mathematical Background | 1707.06658#14 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 15 | # 4. Iterate to step 1 to prune the next layer.
# 3.2. Data-driven channel selection
We use a triplet ⟨I_i, W_i, ∗⟩ to denote the convolution process in layer i, where I_i ∈ R^{C×H×W} is the input tensor, which has C channels, H rows and W columns. And W_i ∈ R^{D×C×K×K} is a set of filters with K × K kernel size, which generates a new tensor with D channels.
Our goal is to remove some unimportant filters in W_i. Note that, if a filter in W_i is removed, its corresponding channel in I_{i+1} and W_{i+1} would also be discarded. However, since the filter number in layer i + 1 has not been changed, the size of its output tensor, i.e., I_{i+2}, would be kept exactly the same. Inspired by this observation, we believe that if we can remove several filters that have little influence on I_{i+2} (which is also the output of layer i + 1), it would have little influence on the overall performance too. In other words, minimizing the reconstruction error of I_{i+2} is closely related to the network's classification performance. | 1707.06342#15 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 15 | algorithm: avg. normalized score
No clipping or penalty: -0.39
Clipping, ε = 0.1: 0.76
Clipping, ε = 0.2: 0.82
Clipping, ε = 0.3: 0.70
Adaptive KL, d_targ = 0.003: 0.68
Adaptive KL, d_targ = 0.01: 0.74
Adaptive KL, d_targ = 0.03: 0.71
Fixed KL, β = 0.3: 0.62
Fixed KL, β = 1.0: 0.71
Fixed KL, β = 3.0: 0.72
Fixed KL, β = 10.0: 0.69
Table 1: Results from continuous control benchmark. Average normalized scores (over 21 runs of the algorithm, on 7 environments) for each algorithm / hyperparameter setting. β was initialized at 1.
# 6.2 Comparison to Other Algorithms in the Continuous Domain
Next, we compare PPO (with the "clipped" surrogate objective from Section 3) to several other methods from the literature, which are considered to be effective for continuous problems. We compared against tuned implementations of the following algorithms: trust region policy optimization [Sch+15b], cross-entropy method (CEM) [SL06], vanilla policy gradient with adaptive stepsize, | 1707.06347#15 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 15 | # 2 Mathematical Background
Let us consider a Markov Decision Process (MDP), M = (S, A, T, c, ρ_0, γ), where S denotes the set of all possible states, A denotes the set of all possible actions that the agent can take, T : S × A × S → [0, 1] is the state transition function such that T(s′|s, a) is a probability distribution over next states s′ ∈ S given current state s ∈ S and action a ∈ A, c : S × A → R is the cost function which generates a real number as feedback for every state-action pair, ρ_0 : S → [0, 1] gives the initial state distribution, and γ is a temporal discount factor.
A policy π : S × A → [0, 1] is a function such that π(a|s) gives a probability distribution over actions a ∈ A in a given state s ∈ S. Let ξ = (s_0, a_0, s_1, . . . , s_{L_ξ}) denote a trajectory of length L_ξ, obtained by following a policy π. We define the expectation of a function f(·, ·) defined on S × A with respect to a policy π as follows: | 1707.06658#15 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
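The expectation just introduced, E_π[f(s, a)], is a discounted sum averaged over trajectories sampled from π. A small Monte-Carlo sketch of that estimate follows; the trajectories and the quadratic cost below are synthetic placeholders, not data from the paper.

```python
import numpy as np

def discounted_expectation(trajectories, f, gamma=0.99):
    """Average of sum_t gamma^t * f(s_t, a_t) over sampled trajectories."""
    totals = []
    for traj in trajectories:
        totals.append(sum(gamma ** t * f(s, a) for t, (s, a) in enumerate(traj)))
    return float(np.mean(totals))

rng = np.random.default_rng(0)
trajs = [[(rng.normal(size=3), rng.normal(size=1)) for _ in range(50)]
         for _ in range(10)]                       # 10 synthetic trajectories of length 50
cost = lambda s, a: float(np.sum(s ** 2) + np.sum(a ** 2))
print(discounted_expectation(trajs, cost))
```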
1707.06342 | 16 | # 3.2.1 Collecting training examples
In order to determine which channel can be removed safely, a training set used for importance evaluation would be collected. As illustrated in Figure 2, an element, denoted by y, is randomly sampled from the tensor I_{i+2} (before ReLU). A corresponding filter Ŵ ∈ R^{C×K×K} and sliding window x ∈ R^{C×K×K} (after ReLU) can also be determined according to its location. Here, some index notations are omitted for a clearer presentation. Normally, the convolution operation can be computed with a corresponding bias b as follows:
$$y = \sum_{c=1}^{C} \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} \widehat{W}_{c,k_1,k_2} \times x_{c,k_1,k_2} + b. \quad (1)$$
Figure 2. Illustration of data sampling and variables' relationship: y is a randomly sampled element of the input of layer i+2, x is the corresponding sliding window in the input of layer i+1, and Ŵ is the corresponding filter of layer i+1.
Now, if we further define:
$$\hat{x}_c = \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} \widehat{W}_{c,k_1,k_2} \times x_{c,k_1,k_2}, \quad (2)$$
Eq. 1 can be simplified as:
$$\hat{y} = \sum_{c=1}^{C} \hat{x}_c, \quad (3)$$ | 1707.06342#16 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
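A short NumPy sketch of how one training example (x̂, ŷ) is formed from a sampled filter and sliding window, following Eqs. 2-3: each x̂_c collapses the spatial dimensions of one channel, and ŷ is their sum. Shapes and the random data are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 8, 3
W_hat = rng.normal(size=(C, K, K))               # one filter of layer i+1
x = np.maximum(rng.normal(size=(C, K, K)), 0.0)  # sliding window of layer (i+1)'s input, after ReLU

x_hat = (W_hat * x).sum(axis=(1, 2))  # Eq. 2: one entry per input channel
y_hat = x_hat.sum()                   # Eq. 3: y_hat = y - b
assert np.isclose(y_hat, (W_hat * x).sum())
print(x_hat.shape, y_hat)
```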
1707.06347 | 16 | ²HalfCheetah, Hopper, InvertedDoublePendulum, InvertedPendulum, Reacher, Swimmer, and Walker2d, all "-v1". ³After each batch of data, the Adam stepsize is adjusted based on the KL divergence of the original and updated policy, using a rule similar to the one shown in Section 4. An implementation is available at https://github.com/berkeleydeeprlcourse/homework/tree/master/hw4.
A2C [Mni+16], A2C with trust region [Wan+16]. A2C stands for advantage actor critic, and is a synchronous version of A3C, which we found to have the same or better performance than the asynchronous version. For PPO, we used the hyperparameters from the previous section, with ε = 0.2. We see that PPO outperforms the previous methods on almost all the continuous control environments. | 1707.06347#16 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 16 | $$E_\pi[f(s, a)] = E_\xi\left[\sum_{t=0}^{L_\xi - 1} \gamma^t f(s_t, a_t)\right] \quad (1)$$
# 2.1 Generative Adversarial Imitation Learning
Apprenticeship learning or Apprenticeship Learning via Inverse Reinforcement Learning algorithms [Abbeel and Ng, 2004] first estimate the expert's reward function using IRL and then find the optimal policy for the recovered reward function using RL. Mathematically, this problem can be described as:
$$RL \circ IRL(\pi_E) = \operatorname*{argmin}_{\pi \in \Pi} \max_{c \in C} \; E_\pi[c(s, a)] - E_{\pi_E}[c(s, a)] - H(\pi) \quad (2)$$
where π_E denotes the expert policy, c(·, ·) denotes the cost function, Π and C denote the hypothesis classes for policy and cost functions, and H(π) denotes the entropy of policy π. The term −H(π) provides causal-entropy regularization [Ziebart, 2010, Ziebart et al., 2008] which helps in making the policy optimization algorithm unbiased to factors other than the expected reward. | 1707.06658#16 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
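A small numerical sketch of the cost-difference term inside Eq. (2), E_π[c(s, a)] − E_{π_E}[c(s, a)], evaluated by Monte-Carlo on state-action batches. The causal-entropy term H(π) is omitted, and the quadratic cost and synthetic batches are stand-ins of our own choosing.

```python
import numpy as np

def cost_gap(c, agent_sa, expert_sa):
    """E_pi[c(s,a)] - E_piE[c(s,a)] estimated from sampled (s, a) pairs."""
    return float(np.mean([c(s, a) for s, a in agent_sa]) -
                 np.mean([c(s, a) for s, a in expert_sa]))

rng = np.random.default_rng(0)
agent_sa = [(rng.normal(size=3), rng.normal(size=1)) for _ in range(256)]
expert_sa = [(0.1 * rng.normal(size=3), 0.1 * rng.normal(size=1)) for _ in range(256)]
c = lambda s, a: float(np.sum(s ** 2) + np.sum(a ** 2))
print(cost_gap(c, agent_sa, expert_sa))  # positive: the expert looks cheaper under c
```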
1707.06342 | 17 | Eq. 1 can be simpliï¬ed as:
ll (3)
in which y = y â b. It is worthwhile to keep in mind that ¢ and Â¥ are random variables whose instantiations require fixed spatial locations indexed by c, k, and ky. A key observation is that channels in X = (#1, & ., 4c) is independent: &, with ry..,ife #¢.
In other words, if we can ï¬nd a subset S â {1, 2, . . . , C} and the equality
Ëy = Ëxc câS (4)
always holds, then we do not need any Ëxc if c /â S and these variables can be safely removed without changing the CNN modelâs result.
Of course, Eq. 4 cannot always be true for all instances of the random variables Ëx and Ëy. However, we can manually extract instances of them to ï¬nd a subset S such that Eq. 4 is approximately correct. | 1707.06342#17 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 17 | 2000 1500 1000 500 500 100 HalfCheetah-vt 2500 2000 1500 1000 500 1000000 Reacher-v1 120 100 80 60 40 20 o Hopper-v1 âSwimmer-v1 In parr genres rarer 8000 6000 4000 2000 1000000 3000 2000 1000 InvertedDoublePendulum-v1 Walker2d-v1 1000000 1000 800 600 400 200 0 InvertedPendulum-v1 1000000 A2Cc A2C + Trust Region cEM PPO (Clip) Vanilla PG, Adaptive TRPO 120 0 1000000 0 1000000 0 1000000
Figure 3: Comparison of several algorithms on several MuJoCo environments, training for one million timesteps.
# 6.3. Showcase in the Continuous Domain: Humanoid Running and Steering | 1707.06347#17 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 17 | [Ho and Ermon, 2016] proposed Generative Adversarial Imitation Learning (GAIL) which packs the two-step process of RL ∘ IRL_ψ(π_E) into a single optimization problem with special considerations for scalability in large environments. The name is due to the fact that this objective function can be optimized using the Generative Adversarial Network (GAN) [Goodfellow et al., 2014] framework. The following is the objective function of GAIL:
$$\operatorname*{argmin}_{\pi \in \Pi} \max_{D \in (0,1)^{S \times A}} \; E_\pi[\log(D(s, a))] + E_{\pi_E}[\log(1 - D(s, a))] - H(\pi) \quad (3)$$ | 1707.06658#17 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
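A numerical sketch of the inner maximization of the GAIL objective in Eq. (3): given discriminator outputs D(s, a) ∈ (0, 1) on agent and expert batches, the quantity below is what the discriminator step increases. The uniform random outputs stand in for a real classifier; this is not the authors' implementation.

```python
import numpy as np

def gail_discriminator_objective(d_agent, d_expert):
    # E_pi[log D(s,a)] + E_piE[log(1 - D(s,a))], maximized over D, minimized over pi
    return float(np.mean(np.log(d_agent)) + np.mean(np.log(1.0 - d_expert)))

rng = np.random.default_rng(0)
d_agent = rng.uniform(0.05, 0.95, size=64)   # D(s, a) on generator samples
d_expert = rng.uniform(0.05, 0.95, size=64)  # D(s, a) on expert samples
print(gail_discriminator_objective(d_agent, d_expert))
```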
1707.06342 | 18 | Given an input image, we first apply the CNN model in the forward run to find the input and output of layer i + 1. Then for any feasible (c, k1, k2) triplet, we can obtain a C-dimensional vector variable x̂ = {x̂_1, x̂_2, . . . , x̂_C} and a scalar value ŷ using Eq. 1 to Eq. 3. Since x̂ and ŷ can be viewed as random variables, more instances can be sampled by choosing different input images, different channels, and different spatial locations.
# 3.2.2 A greedy algorithm for channel selection
Now, given a set of m (the product of number of images and number of locations) training examples {(x̂_i, ŷ_i)}, the original channel selection problem becomes the following
4 | 1707.06342#18 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 18 | Figure 3: Comparison of several algorithms on several MuJoCo environments, training for one million timesteps.
# 6.3. Showcase in the Continuous Domain: Humanoid Running and Steering
To showcase the performance of PPO on high-dimensional continuous control problems, we train on a set of problems involving a 3D humanoid, where the robot must run, steer, and get up off the ground, possibly while being pelted by cubes. The three tasks we test on are (1) RoboschoolHumanoid: forward locomotion only, (2) RoboschoolHumanoidFlagrun: position of target is randomly varied every 200 timesteps or whenever the goal is reached, (3) RoboschoolHumanoidFlagrunHarder, where the robot is pelted by cubes and needs to get up off the ground. See Figure 5 for still frames of a learned policy, and Figure 4 for learning curves on the three tasks. Hyperparameters are provided in Table 4. In concurrent work, Heess et al. [Hee+17] used the adaptive KL variant of PPO (Section 4) to learn locomotion policies for 3D robots.
[Figure 4 plots, omitted: learning curves (reward vs. timestep, up to 100M timesteps) for RoboschoolHumanoid-v0, RoboschoolHumanoidFlagrun-v0, and RoboschoolHumanoidFlagrunHarder-v0.] | 1707.06347#18 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 18 | Here, the agent's policy, π, acts as a generator of state-action pairs. D is a discriminative binary classifier of the form D : S × A → (0, 1), known as the discriminator, which, given a state-action pair (s, a), predicts the likelihood of it being generated by the generator. A two-player adversarial game is started, wherein the generator tries to generate (s, a) pairs that closely match the expert, while the discriminator tries to correctly classify the (s, a) pairs of the expert and the agent. At convergence, the agent's actions resemble those of the expert in any given state.
The generator and the discriminator are assigned parameterized models π_θ and D_w respectively. The training algorithm alternates between a gradient ascent step with respect to the discriminator parameters, w, and a policy-gradient descent step with respect to the generator parameters, θ. Following the example of [Ho and Ermon, 2016] we use multi-layer perceptrons (neural networks with fully-connected layers) [Haykin, 1998] to model both the generator and the discriminator.
# 3 Risk-Averse Imitation Learning | 1707.06658#18 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
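A schematic of the alternating updates described above: one gradient ascent step on the discriminator parameters w, then one policy-gradient descent step on the generator parameters θ. The sampling and gradient routines below are stubs standing in for environment rollouts and the actual policy-gradient machinery, so this only illustrates the control flow, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)       # discriminator parameters
theta = rng.normal(size=32)   # policy (generator) parameters

def discriminator_grad(w, agent_batch, expert_batch):
    return rng.normal(size=w.shape)      # stub: gradient of Eq. (3) w.r.t. w

def policy_grad(theta, agent_batch, w):
    return rng.normal(size=theta.shape)  # stub: policy gradient with cost log D_w(s, a)

def sample_trajectories(theta, n=10):
    return [rng.normal(size=(50, 4)) for _ in range(n)]  # stub rollouts of the current policy

expert_batch = [rng.normal(size=(50, 4)) for _ in range(10)]  # fixed expert demonstrations
for _ in range(100):
    agent_batch = sample_trajectories(theta)
    w += 1e-3 * discriminator_grad(w, agent_batch, expert_batch)  # ascent on the discriminator
    theta -= 1e-3 * policy_grad(theta, agent_batch, w)            # descent on the generator
```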
1707.06342 | 19 | 4
Algorithm 1 A greedy algorithm for minimizing Eq. 6
Input: Training set {(x̂_i, ŷ_i)}, and compression rate r
Output: The subset of removed channels: T
1: T ← ∅; I ← {1, 2, . . . , C};
2: while |T| < C × (1 − r) do
3:   min_value ← +∞;
4:   for each item i ∈ I do
5:     tmpT ← T ∪ {i};
6:     compute value from Eq. 6 using tmpT;
7:     if value < min_value then
8:       min_value ← value; min_i ← i;
9:     end if
10:  end for
11:  move min_i from I into T;
12: end while
optimization problem:
$$\operatorname*{argmin}_{S} \sum_{i=1}^{m} \Big( \hat{y}_i - \sum_{j \in S} \hat{x}_{i,j} \Big)^2 \quad (5) \qquad \text{s.t. } |S| = C \times r, \; S \subset \{1, 2, \ldots, C\}.$$
Here, |S| is the number of elements in a subset S, and r is a pre-defined compression rate (i.e., how many channels are preserved). Equivalently, let T be the subset of removed channels (i.e., S ∪ T = {1, 2, . . . , C} and S ∩ T = ∅
), we can minimize the following alternative objective: | 1707.06342#19 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
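A direct NumPy transcription of the greedy selection in Algorithm 1, using Eq. (6) as the objective: repeatedly move into T the channel whose addition gives the smallest value of sum_i (sum_{j in T} x̂_{i,j})², until |T| = C × (1 − r). The running partial sum keeps each candidate evaluation cheap; function names and the synthetic x̂ matrix are ours.

```python
import numpy as np

def greedy_channel_selection(x_hat: np.ndarray, compression_rate: float):
    """x_hat: (m, C) matrix of training examples; returns the removed-channel set T."""
    m, C = x_hat.shape
    n_remove = int(C * (1.0 - compression_rate))
    T, I = [], list(range(C))
    partial = np.zeros(m)                    # running sum over channels already in T
    for _ in range(n_remove):
        best_i, best_val = None, np.inf
        for i in I:                          # evaluate Eq. (6) with channel i added to T
            val = np.sum((partial + x_hat[:, i]) ** 2)
            if val < best_val:
                best_i, best_val = i, val
        T.append(best_i)
        I.remove(best_i)
        partial += x_hat[:, best_i]
    return sorted(T)

rng = np.random.default_rng(0)
x_hat = rng.normal(size=(1000, 16))
print(greedy_channel_selection(x_hat, compression_rate=0.5))
```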
1707.06658 | 19 | # 3 Risk-Averse Imitation Learning
In this section, we develop the mathematical formulation of the proposed Risk-Averse Imitation Learning (RAIL) algorithm. We introduce CV aR [Rockafellar and Uryasev, 2000] as a measure of tail risk, and apply it in the GAIL-framework to minimize the tail risk of learned policies.
# 3.1 Conditional-Value-at-Risk
In the portfolio-risk optimization literature, tail risk is a form of portfolio risk that arises when the possibility that an investment moving more than three standard deviations away from the mean is greater than what is shown by a normal distribution [Investopedia, 2017]. Tail risk corresponds to events that have a small probability of occurring. When the distribution of market returns is heavy-tailed, tail risk is high because there is a probability, which may be small, that an investment will move beyond three standard deviations.
Conditional-Value-at-Risk (CV aR) [Rockafellar and Uryasev, 2000] is the most conservative mea- sure of tail risk [Dalleh, 2011]. Unlike other measures like Variance and Value at Risk (V aR), it can
4 | 1707.06658#19 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 20 | $$\operatorname*{argmin}_{T} \sum_{i=1}^{m} \Big( \sum_{j \in T} \hat{x}_{i,j} \Big)^2 \quad (6)$$
$$\text{s.t. } |T| = C \times (1 - r), \; T \subset \{1, 2, \ldots, C\}.$$
Eq. 6 is equivalent to Eq. 5, but has faster speed because |T| is usually smaller than |S|. Solving Eq. 6 is still NP hard, thus we use a greedy strategy (illustrated in Algorithm 1). We add one element to T at a time, and choose the channel leading to the smallest objective value in the current iteration. Obviously, this greedy solution is sub-optimal. But the gap can be compensated by fine-tuning. We have also tried some other sophisticated algorithms, such as sparse coding (specifically, the homotopy method [6]). However, our simple greedy approach has better performance and faster speed according to our experiments.
# 3.2.3 Minimize the reconstruction error
So far, we have obtained the subset T such that the n-th channel in each filter of layer i + 1 can be safely removed if n ∈ T. Hence, the corresponding filters in the previous layer i can be pruned too.
Now we will further minimize the reconstruction error (c.f. Eq. 5) by weighing the channels, which can be defined as: | 1707.06342#20 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 20 | Figure 5: Still frames of the policy learned from RoboschoolHumanoidFlagrun. In the first six frames, the robot runs towards a target. Then the position is randomly changed, and the robot turns and runs toward the new target.
# 6.4 Comparison to Other Algorithms on the Atari Domain
We also ran PPO on the Arcade Learning Environment [Bel+15] benchmark and compared against well-tuned implementations of A2C [Mni+16] and ACER [Wan+16]. For all three algorithms, we used the same policy network architecture as used in [Mni+16]. The hyperparameters for PPO are provided in Table 5. For the other two algorithms, we used hyperparameters that were tuned to maximize performance on this benchmark. A table of results and learning curves for all 49 games is provided in Appendix B. We consider the following two scoring metrics: (1) average reward per episode over entire training period (which favors fast learning), and (2) average reward per episode over last 100 episodes of training (which favors final performance). Table 2 shows the number of games "won" by each algorithm, where we compute the victor by averaging the scoring metric across three trials. | 1707.06347#20 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 20 | 4
be applied when the distribution of returns is not normal. Mathematically, let Z be a random variable. Let α ∈ [0, 1] denote a probability value. The Value-at-Risk of Z with respect to confidence level α, denoted by VaR_α(Z), is defined as the minimum value z ∈ R such that with probability α, Z will not exceed z.
$$VaR_\alpha(Z) = \min\left(z \mid P(Z \le z) \ge \alpha\right) \quad (4)$$
CVaR_α(Z) is defined as the conditional expectation of losses above VaR_α(Z):
$$CVaR_\alpha(Z) = E\left[Z \mid Z \ge VaR_\alpha(Z)\right] = \min_{\nu \in \mathbb{R}} H_\alpha(Z, \nu) \quad (5)$$
where H_α(Z, ν) is given by:
$$H_\alpha(Z, \nu) = \left\{\nu + \frac{1}{1 - \alpha} E\left[(Z - \nu)^+\right]\right\}; \quad (x)^+ = \max(x, 0) \quad (6)$$
# 3.2 RAIL Framework
We use CVaR to quantify the tail risk of the trajectory-cost variable R^π(ξ|c(D)), defined in the context of GAIL as:
$$R^\pi(\xi|c(D)) = \sum_{t=0}^{L_\xi - 1} \gamma^t c(D(s_t, a_t)) \quad (7)$$ | 1707.06658#20 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 21 | Now we will further minimize the reconstruction error (cf. Eq. 5) by weighting the channels, which can be defined as:
ŵ = argmin_w Σ_{i=1}^{m} (ŷ_i − w^T x̂*_i)^2   (7)
where x̂*_i indicates the training examples after channel selection. Eq. 7 is a classic linear regression problem, which has a unique closed-form solution using the ordinary least squares approach: ŵ = (X^T X)^{-1} X^T y.
Each element in ŵ can be regarded as a scaling factor of the corresponding filter channel such that W_{:,i,:,:} = ŵ_i W_{:,i,:,:}. From another point of view, this scaling operation provides a better initialization for fine-tuning, hence the network is more likely to reach higher accuracy.
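For illustration only (a minimal sketch under assumed shapes, not ThiNet's released code), the least-squares scaling step above could look as follows; the function name, array layout, and the use of numpy.linalg.lstsq in place of the explicit normal-equations formula are assumptions.

```python
# Hedged sketch: solve w_hat = (X^T X)^{-1} X^T y and fold each w_hat_i into the
# corresponding retained input channel of the next layer's filters.
import numpy as np

def channel_scaling(X, y, W_kept):
    """X: (m, C') inputs after channel selection; y: (m,) reconstruction targets;
    W_kept: (F, C', k, k) pruned convolution weights of the next layer."""
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    W_scaled = W_kept * w_hat[None, :, None, None]  # scale each kept input channel
    return w_hat, W_scaled
```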
# 3.3. Pruning strategy | 1707.06342#21 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 21 | 2 shows the number of games "won" by each algorithm, where we compute the victor by averaging the scoring metric across three trials.
A2C / ACER / PPO / Tie
(1) avg. episode reward over all of training: 1 / 18 / 30 / 0
(2) avg. episode reward over last 100 episodes: 1 / 28 / 19 / 1
Table 2: Number of games "won" by each algorithm, where the scoring metric is averaged across three trials.
# 7 Conclusion
We have introduced proximal policy optimization, a family of policy optimization methods that use multiple epochs of stochastic gradient ascent to perform each policy update. These methods have the stability and reliability of trust-region methods but are much simpler to implement, requiring only a few lines of code change to a vanilla policy gradient implementation, applicable in more general settings (for example, when using a joint architecture for the policy and value function), and have better overall performance.
# 8 Acknowledgements
Thanks to Rocky Duan, Peter Chen, and others at OpenAI for insightful comments. | 1707.06347#21 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 21 | Rπ(ξ|c(D)) = Σ_{t=0}^{Lξ−1} γ^t c(D(s_t, a_t))   (7)
where c(·) is order-preserving. Next, we formulate the optimization problem to optimize the CVaR of Rπ(ξ|c(D)) as:
min_π max_c CVaRα(Rπ(ξ|c(D))) = min_{π,ν} max_c Hα(Rπ(ξ|c(D)), ν)   (8)
Integrating this with the GAIL objective of equation 3, we have the following:
min_{π,ν} max_D { −H(π) + Eπ[log(D(s, a))] + EπE[log(1 − D(s, a))] + λCVaR Hα(Rπ(ξ|c(D)), ν) }   (9) | 1707.06658#21 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 22 | # 3.3. Pruning strategy
There are mainly two types of different network architectures: the traditional convolutional/fully-connected architecture, and recent structural variants. The former is represented by AlexNet [18] or VGGNet [28], while the latter mainly includes some recent networks like GoogLeNet [30] and ResNet [11]. The main difference between these two types is that more recent networks usually replace the FC (fully-connected) layers with a global average pooling layer [22, 34], and adopt some novel network structures like Inception in GoogLeNet or residual blocks in ResNet.
We use different strategies to prune these two types of networks. For VGG-16, we notice that more than 90% of the FLOPs exist in the first 10 layers (conv1-1 to conv4-3), while the FC layers contribute nearly 86.41% of the parameters. Hence, we prune the first 10 layers for acceleration consideration, but replace the FC layers with a global average pooling layer. Although the proposed method is also valid for FC layers, we believe removing them is simpler and more efficient. | 1707.06342#22 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 22 | A2C / ACER / PPO / Tie
(1) avg. episode reward over all of training: 1 / 18 / 30 / 0
(2) avg. episode reward over last 100 episodes: 1 / 28 / 19 / 1
# References
[Bel+15] M. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. "The arcade learning environment: An evaluation platform for general agents". In: Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015.
[Bro+16] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. "OpenAI Gym". In: arXiv preprint arXiv:1606.01540 (2016).
[Dua+16] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". In: arXiv preprint arXiv:1604.06778 (2016). | 1707.06347#22 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 22 | Note that as c(·) is order-preserving, the maximization with respect to c in equation 8 is equivalent to maximization with respect to D in equation 9. λCVaR is a constant that controls the amount of weightage given to CVaR optimization relative to the original GAIL objective. Equation 9 comprises the objective function of the proposed Risk-Averse Imitation Learning (RAIL) algorithm. Algorithm 1 gives the pseudo-code. Appendix A derives the expressions of the gradients of the CVaR term Hα(Rπ(ξ|c(D)), ν) with respect to π, D, and ν. When α → 0, namely the risk-neutral case, CVaR is equal to the mean of all trajectory costs and hence RAIL → GAIL. We use the Adam algorithm [Diederik Kingma, 2015] for gradient ascent in the discriminator and Trust Region Policy Optimization (TRPO) [Schulman et al., 2015] for policy gradient descent in the generator. The CVaR term ν is trained by batch gradient descent [Haykin, 1998].
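Purely as an illustration (a hedged sketch, not the authors' implementation), the batch gradient-descent update of the CVaR variable ν mentioned above can be written as follows; the function name and learning rate are assumptions.

```python
# Hedged sketch: one batch gradient-descent step on nu for H_alpha(R, nu).
import numpy as np

def nu_update(traj_costs, nu, alpha, lr=1e-2):
    """traj_costs: sampled values of R^pi(xi|c(D)) for a batch of trajectories."""
    costs = np.asarray(traj_costs, dtype=float)
    # d/dnu [ nu + E[(R - nu)^+] / (1 - alpha) ] = 1 - P(R >= nu) / (1 - alpha)
    grad = 1.0 - np.mean(costs >= nu) / (1.0 - alpha)
    return nu - lr * grad
```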
# 4 Experimental Setup | 1707.06658#22 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 23 | For ResNet, there exist some restrictions due to its special structure. For example, the channel number of each block in the same group needs to be consistent in order to finish the sum operation (see [11] for more details). Thus it is hard to prune the last convolutional layer of each residual block directly. Since most parameters are located in the first two layers, pruning the first two layers is a good choice, which is illustrated in Figure 3.
# 4. Experiments
We empirically study the performance of ThiNet in this section. First, a comparison among several different filter selection criteria is presented. Experimental results show that our method is significantly better than others. Then, we report the performance on ILSVRC-12 [26]. Two widely used networks are pruned: VGG-16 [28] and ResNet-50 [11]. Finally, we focus on a more practical scenario to show the advantages of ThiNet. All the experiments
[Figure 3: ResNet pruning illustration: a 256-d residual block with 1×1, 3×3, and 1×1 convolutions; the first two layers are pruned by 50%.] | 1707.06342#23 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 23 | [Hee+17] N. Heess, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, A. Eslami, M. Riedmiller, et al. "Emergence of Locomotion Behaviours in Rich Environments". In: arXiv preprint arXiv:1707.02286 (2017).
[KL02] S. Kakade and J. Langford. "Approximately optimal approximate reinforcement learning". In: ICML. Vol. 2. 2002, pp. 267-274.
[KB14] D. Kingma and J. Ba. "Adam: A method for stochastic optimization". In: arXiv preprint arXiv:1412.6980 (2014).
[Mni+15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. "Human-level control through deep reinforcement learning". In: Nature 518.7540 (2015), pp. 529-533. | 1707.06347#23 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 23 | # 4 Experimental Setup
We compare the tail risk of policies learned by GAIL and RAIL for five continuous control tasks listed in Table 1. All these environments were simulated using the MuJoCo Physics Simulator [Todorov et al., 2012]. Each of these environments comes packed with a "true" reward function in OpenAI Gym [Brockman et al., 2016]. [Ho and Ermon, 2016] trained neural network policies using Trust Region Policy Optimization (TRPO) [Schulman et al., 2015] on these reward functions to achieve state-of-the-art performance and have made the pre-trained models publicly available for all these environments as a part of their repository [OpenAI-GAIL, 2017]. They used these policies to generate the expert trajectories in their work on GAIL [Ho and Ermon, 2016]. For a fair comparison, we use the same policies to generate expert trajectories in our experiments. Table 1 gives the number of expert trajectories sampled for each environment. These numbers correspond to the best results reported in [Ho and Ermon, 2016].
# Algorithm 1 Risk-Averse Imitation learning (RAIL) | 1707.06658#23 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 24 | Figure 3. Illustration of the ResNet pruning strategy. For each residual block, we only prune the first two convolutional layers, keeping the block output dimension unchanged.
are conducted within Caffe [17].
# 4.1. Different filter selection criteria
There exist some heuristic criteria to evaluate the importance of each filter in the literature. We compare our selection method with two recently proposed criteria to demonstrate the effectiveness of our evaluation criterion. These criteria are briefly summarized as follows:
• Weight sum [21]. Filters with smaller kernel weights tend to produce weaker activations. Thus, in this strategy the absolute sum of each filter is calculated as its importance score: s_i = Σ |W(i, :, :, :)|.
• APoZ (Average Percentage of Zeros) [14]. This criterion calculates the sparsity of each channel in output activations as its importance score: s_i = (1/|Z(i, :, :)|) Σ I(Z(i, :, :) == 0), where |Z(i, :, :)| is the number of elements in the i-th channel of tensor Z (after ReLU), and I(·) denotes the indicator function. | 1707.06342#24 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 24 | [Mni+16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. "Asynchronous methods for deep reinforcement learning". In: arXiv preprint arXiv:1602.01783 (2016).
[Sch+15a] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. "High-dimensional continuous control using generalized advantage estimation". In: arXiv preprint arXiv:1506.02488 (2015).
[Sch+15b] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. "Trust region policy optimization". In: CoRR, abs/1502.05477 (2015).
[SL06] I. Szita and A. Lorincz. "Learning Tetris using the noisy cross-entropy method". In: Neural computation 18.12 (2006), pp. 2936-2941. | 1707.06347#24 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 24 | Input: Expert trajectories ξE ∼ πE, hyper-parameters α, β, λCVaR
Output: Optimized learner's policy π
1: Initialization: θ ← θ0, w ← w0, ν ← ν0, λ ← λCVaR
2: repeat
3: Sample trajectories ξi ∼ πθi
4: Estimate Ĥα(Rπ(ξ|c(D)), ν) = ν + (1/(1 − α)) Êξi[(Rπ(ξ|c(D)) − ν)+]
5: Gradient ascent on discriminator parameters using: ∇wi J = Êξi[∇wi log(D(s, a))] + ÊξE[∇wi log(1 − D(s, a))] + λCVaR ∇wi Hα(Rπ(ξ|c(D)), ν)
6: KL-constrained natural gradient descent step (TRPO) on policy parameters using: ∇θi J = E(s,a)∼ξi[∇θi log πθ(a|s) Q(s, a)] − ∇θi H(πθ) + λCV | 1707.06658#24 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 25 | To compare these different selection methods, we evaluate their performance on the widely used fine-grained dataset: CUB-200 [31], which contains 11,788 images of 200 different bird species (5994/5794 images for training/test, respectively). Except for labels, no additional supervised information (e.g., bounding box) is used.
Following the pruning strategy in Section 3.3, all the FC layers in VGG-16 are removed, and replaced with a global average pooling layer, and fine-tuned on new datasets. Starting from this fine-tuned model, we then prune the network layer by layer with different compression rates. Each pruning is followed by one epoch of fine-tuning, and 12 epochs are performed in the final layer to improve accuracy. This procedure is repeated several times with different channel selection strategies. Due to the random nature of ThiNet, we repeated our method 4 times and report the averaged result. For a fair comparison, all the settings are kept the same, except the selection method.
Figure 4 shows the pruning results on the CUB bird dataset. We also evaluated the performance of random selection with the same pruning strategy. In addition, another | 1707.06342#25 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 25 | [TET12] E. Todorov, T. Erez, and Y. Tassa. "MuJoCo: A physics engine for model-based control". In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE. 2012, pp. 5026-5033.
[Wan+16] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. "Sample Efficient Actor-Critic with Experience Replay". In: arXiv preprint arXiv:1611.01224 (2016).
[Wil92] R. J. Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". In: Machine learning 8.3-4 (1992), pp. 229-256.
# A Hyperparameters
Hyperparameter: Value
Horizon (T): 2048
Adam stepsize: 3 × 10^−4
Num. epochs: 10
Minibatch size: 64
Discount (γ): 0.99
GAE parameter (λ): 0.95
Table 3: PPO hyperparameters used for the Mujoco 1 million timestep benchmark. | 1707.06347#25 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 26 | Figure 4 shows the pruning results on the CUB bird dataset. We also evaluated the performance of random selection with the same pruning strategy. In addition, another
[Figure 4 plot: Top-1 Accuracy vs. FLOPs Reduction (80%, 60%, 40%, 20%, 0%) for Random, Weight sum, APoZ, ThiNet w/o ŵ, and ThiNet.]
Figure 4. Performance comparison of different channel selection methods: the VGG-16-GAP model pruned on CUB-200 with different compression rates. (This figure is best viewed in color and zoomed in.)
version of ThiNet without least squares (denoted by "ThiNet w/o ŵ") is also evaluated to demonstrate the effectiveness of least squares in our method. Obviously, ThiNet achieves consistently and significantly higher accuracy compared with other selection methods. | 1707.06342#26 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 26 | Table 3: PPO hyperparameters used for the Mujoco 1 million timestep benchmark.
Hyperparameter: Value
Horizon (T): 512
Adam stepsize: *
Num. epochs: 15
Minibatch size: 4096
Discount (γ): 0.99
GAE parameter (λ): 0.95
Number of actors: 32 (locomotion), 128 (flagrun)
Log stdev. of action distribution: LinearAnneal(−0.7, −1.6)
Table 4: PPO hyperparameters used for the Roboschool experiments. Adam stepsize was adjusted based on the target value of the KL divergence.
Hyperparameter: Value
Horizon (T): 128
Adam stepsize: 2.5 × 10^−4 × α
Num. epochs: 3
Minibatch size: 32 × 8
Discount (γ): 0.99
GAE parameter (λ): 0.95
Number of actors: 8
Clipping parameter ε: 0.1 × α
VF coeff. c1 (9): 1
Entropy coeff. c2 (9): 0.01
Table 5: PPO hyperparameters used in Atari experiments. α is linearly annealed from 1 to 0 over the course of learning.
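As a hedged illustration (not the released implementation) of how the Table 5 coefficients enter the clipped objective of Eq. (9), with array inputs and names assumed:

```python
# Hedged sketch: clipped PPO objective L = E[L^CLIP - c1*L^VF + c2*S] from Eq. (9).
import numpy as np

def ppo_objective(ratio, adv, v_pred, v_targ, entropy, eps=0.1, c1=1.0, c2=0.01):
    """ratio: pi_theta(a|s) / pi_theta_old(a|s); adv: advantage estimates."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    l_clip = np.mean(np.minimum(ratio * adv, clipped))  # clipped surrogate
    l_vf = np.mean((v_pred - v_targ) ** 2)              # value-function loss
    s = np.mean(entropy)                                # entropy bonus
    return l_clip - c1 * l_vf + c2 * s                  # to be maximized
```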
# B Performance on More Atari Games
Here we include a comparison of PPO against A2C on a larger collection of 49 Atari games. Figure 6 shows the learning curves of each of three random seeds, while Table 6 shows the mean performance. | 1707.06347#26 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 26 | Again, following [Ho and Ermon, 2016], we model the generator (policy), discriminator and value function (used for advantage estimation [Sutton and Barto, 1998] for the generator) with multi-layer perceptrons of the following architecture: observationDim - fc_100 - tanh - fc_100 - tanh - outDim, where fc_100 means fully connected layer with 100 nodes, tanh represents the hyperbolic-tangent activation function of the hidden layers, observationDim stands for the dimensionality of the observed feature space, outDim is equal to 1 for the discriminator and value function networks and equal to twice the dimensionality of the action space (for mean and standard deviation of the Gaussian from which the action should be sampled) for the policy network. For example, in case of Humanoid-v1, observationDim = 376 and outDim = 34 in the policy network. The value of the CVaR coefficient λCVaR is set as given by Table 1 after a coarse hyperparameter search. All other hyperparameters corresponding to the GAIL component of the algorithm are set identical to those used in [Ho and Ermon, 2016] and their repository [OpenAI-GAIL, 2017] | 1707.06658#26 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 27 | One interesting result is: random selection shows pretty good performance, even better than heuristic criteria in some cases. In fact, according to the property of distributed representations (i.e., each concept is represented by many neurons; and, each neuron participates in the representation of many concepts [12, 1]), randomly selected channels may be quite powerful in theory. However, this criterion is not robust. As shown in Figure 4, it can lead to very bad results and the accuracy is very low after all layers are compressed. Thus, random selection is not applicable in practice. | 1707.06342#27 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 27 | [Figure 6 plots: per-game learning curves (episode reward vs. frames) for A2C, ACER, and PPO across the 49 Atari games, one panel per game from Alien to Zaxxon.]
Figure 6: Comparison of PPO and A2C on all 49 ATARI games included in OpenAI Gym at the time of publication. | 1707.06347#27 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 28 | Weight sum has pretty poor accuracy on CUB-200. This result is reasonable, since it only takes the magnitude of kernel weights into consideration, which is not directly related to the final classification accuracy. In fact, small weights could still have a large impact on the loss function. When we discard a large number of small filters at the same time, the final accuracy can be damaged greatly. For example, if we removed 60% of the filters in conv1-1 using the small-weight criterion, the top-1 accuracy is only 40.99% (before fine-tuning), while the random criterion gives 51.26%. By contrast, our method (ThiNet w/o ŵ) can reach 68.24%, and even 70.75% with least squares (ThiNet). The accuracy loss of weight sum is so large that fine-tuning cannot completely recover it from the drop.
In contrast, our method shows much higher and more robust results. The least squares approach does indeed help to get a better weight initialization for fine-tuning, especially when the compression rate is relatively high.
# 4.2. VGG-16 on ImageNet | 1707.06342#28 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 28 | A2C ACER PPO Alien 1141.7 1655.4 1850.3 Amidar 380.8 827.6 674.6 Assault 1562.9 4653.8 4971.9 Asterix 3176.3 6801.2 4532.5 Asteroids 1653.3 2389.3 2097.5 Atlantis 729265.3 1841376.0 2311815.0 BankHeist 1095.3 1177.5 1280.6 BattleZone 3080.0 8983.3 17366.7 BeamRider 3031.7 3863.3 1590.0 Bowling 30.1 33.3 40.1 Boxing 17.7 98.9 94.6 Breakout 303.0 456.4 274.8 Centipede 3496.5 8904.8 4386.4 ChopperCommand 1171.7 5287.7 3516.3 CrazyClimber 107770.0 132461.0 110202.0 DemonAttack 6639.1 38808.3 11378.4 DoubleDunk -16.2 -13.2 -14.9 Enduro 0.0 0.0 758.3 FishingDerby 20.6 34.7 17.8 Freeway 0.0 0.0 32.5 Frostbite 261.8 285.6 314.2 Gopher | 1707.06347#28 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
1707.06658 | 28 | # 5 Evaluation Metrics
In this section we define the metrics we use to evaluate the efficacy of RAIL at reducing the tail risk of GAIL-learned policies. Given an agent A's policy πA, we roll out N trajectories T = {ξi}_{i=1}^{N} from it and estimate VaRα and CVaRα as defined in Section 3.1. VaRα denotes the value under
Table 1: Hyperparameters for the RAIL experiments on various continuous control tasks from OpenAI Gym. For a fair comparison, the number of training iterations and expert trajectories are same as those used by [Ho and Ermon, 2016].
Task: #training iterations / #expert trajectories / λCVaR
Reacher-v1: 200 / 18 / 0.25
HalfCheetah-v1: 500 / 25 / 0.5
Hopper-v1: 500 / 25 / 0.5
Walker-v1: 500 / 25 / 0.25
Humanoid-v1: 1500 / 240 / 0.75 | 1707.06658#28 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 29 | # 4.2. VGG-16 on ImageNet
We now evaluate the performance of the proposed ThiNet method on the large-scale ImageNet classification task. The ILSVRC-12 dataset [26] consists of over one million training images drawn from 1000 categories. We randomly select 10 images from each category in the training set to comprise our evaluation set (i.e., collected training examples for channel selection). And for each input image, 10 instances are randomly sampled with different channels and different spatial locations as described in Section 3.2.1. Hence, there are in total 100,000 training samples used for finding the optimal channel subset via Algorithm 1. We compared several different choices of image and location number, and found that the current choice (10 images per class and 10 locations per image) is enough for neuron importance evaluation. Finally, top-1 and top-5 classification performance are reported on the 50k standard validation set, using the single-view testing approach (central patch only). | 1707.06342#29 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
1707.06347 | 29 | FishingDerby 20.6 34.7 17.8 Freeway 0.0 0.0 32.5 Frostbite 261.8 285.6 314.2 Gopher 1500.9 37802.3 2932.9 Gravitar 194.0 225.3 737.2 ceHockey -6.4 -5.9 -4.2 Jamesbond 52.3 261.8 560.7 Kangaroo 45.3 50.0 9928.7 Krull 8367.4 7268.4 7942.3 KungFuMaster 24900.3 27599.3 23310.3 MontezumaRevenge 0.0 0.3 42.0 MsPacman 1626.9 2718.5 2096.5 NameThisGame 5961.2 8488.0 6254.9 Pitfall -55.0 -16.9 -32.9 Pong 19.7 20.7 20.7 PrivateEye 91.3 182.0 69.5 Qbert 10065.7 â15316.6 14293.3 Riverraid 7653.5 9125.1 8393.6 RoadRunner 32810.0 35466.0 25076.0 Robotank 2.2 2.5 5.5 Seaquest 1714.3 1739.5 1204.5 Spacelnvaders 744.5 1213.9 | 1707.06347#29 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | [
{
"id": "1602.01783"
},
{
"id": "1707.02286"
},
{
"id": "1604.06778"
},
{
"id": "1506.02488"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
}
] |
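The summary in the record above describes PPO's surrogate objective optimized over multiple minibatch epochs. As a reference point, here is a minimal NumPy sketch of the clipped surrogate objective from the PPO paper, assuming per-sample advantages and log-probabilities have already been computed; it is an illustration, not the authors' implementation.

```python
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate: E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)],
    where r_t = pi_new(a_t|s_t) / pi_old(a_t|s_t). Maximize this (or minimize
    its negative) for several epochs over each batch of collected data."""
    ratio = np.exp(np.asarray(new_logp) - np.asarray(old_logp))  # r_t(theta)
    adv = np.asarray(advantages)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return float(np.minimum(unclipped, clipped).mean())
```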
1707.06658 | 29 | 6
[Figure 2 plots: mean trajectory-cost (y-axis) vs. training iterations (x-axis) for Reacher-v1, HalfCheetah-v1, Hopper-v1, Walker-v1, and Humanoid-v1, with curves for Expert, GAIL, and RAIL.]
Figure 2: Convergence of mean trajectory-cost during training. The faded curves correspond to the original values of mean trajectory-cost, which vary highly between successive iterations. The data is smoothed with a moving-average filter of window size 21 to demonstrate the prevalent behavior and plotted with solid curves. RAIL converges almost as fast as GAIL at all five continuous-control tasks, and at times even faster.
which the trajectory-cost remains with probability α and $CVaR_\alpha$ gives the expected value of cost above $VaR_\alpha$. Intuitively, $CVaR_\alpha$ gives the average value of cost of the worst cases that have a total probability no more than $(1 - \alpha)$. The lower the value of both these metrics, the lower is the tail risk. | 1707.06658#29 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
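The chunk above (1707.06658 #29) uses VaR_α, the cost level that a trajectory's cost stays below with probability α, and CVaR_α, the expected cost above VaR_α. A minimal NumPy sketch of empirical estimates of both from a batch of trajectory costs follows; the simulated costs in the example are placeholders, not data from the paper.

```python
import numpy as np

def empirical_var_cvar(trajectory_costs, alpha=0.9):
    """Empirical VaR_alpha and CVaR_alpha of sampled trajectory costs.

    VaR_alpha is the alpha-quantile of cost; CVaR_alpha is the mean cost of the
    trajectories whose cost is at or above VaR_alpha (the worst ~(1-alpha) tail).
    """
    costs = np.asarray(trajectory_costs, dtype=float)
    var = np.quantile(costs, alpha)
    tail = costs[costs >= var]
    cvar = tail.mean() if tail.size else var
    return var, cvar

# Placeholder example: 1000 simulated trajectory costs.
rng = np.random.default_rng(0)
print(empirical_var_cvar(rng.normal(loc=-3000.0, scale=200.0, size=1000)))
```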
1707.06342 | 30 | During fine-tuning, images are resized to 256 × 256, and 224 × 224 random crops are taken to feed the data into the network. Horizontal flipping is also used for data augmentation. At the inference stage, we center-crop the resized images to 224 × 224. No other tricks are used here. The whole network is pruned layer by layer and fine-tuned for one epoch with a learning rate of 10^-3. Since the last layer of each group (i.e., conv1-2, conv2-2, conv3-3) is more important (pruning these layers would lead to a large accuracy drop), we fine-tune these layers for one additional epoch with a learning rate of 10^-4 to prevent the accuracy from dropping too much. When pruning the last layer, more epochs (12) are used to obtain an accurate result, with the learning rate varying from 10^-3 to 10^-5. We use SGD with a mini-batch size of 128, and other parameters are kept the same as in the original VGG paper [28]. | 1707.06342#30 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
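The ThiNet chunk above (1707.06342 #30) fine-tunes with 256 × 256 resizing, 224 × 224 random crops, horizontal flips, center crops at inference, and SGD with mini-batch size 128 and learning rates stepping from 10^-3 down to 10^-5. A PyTorch-style sketch of such a pipeline is shown below; the momentum and weight-decay values are assumed standard VGG settings, not numbers stated in the chunk.

```python
import torch
from torchvision import transforms

# Training-time augmentation: resize to 256, take a random 224 crop, flip horizontally.
train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Inference-time preprocessing: resize, then center-crop to 224.
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def finetune_optimizer(model, lr=1e-3):
    # Mini-batch size 128 is used during fine-tuning; momentum/weight decay
    # below are assumed VGG-style defaults rather than values from the text.
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=5e-4)
```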
1707.06658 | 30 | In order to compare the tail risk of an agent A with respect to the expert, E, we define percentage relative-$VaR_\alpha$ as follows:
$VaR_\alpha(A|E) = 100 \times \frac{VaR_\alpha(E) - VaR_\alpha(A)}{|VaR_\alpha(E)|}\ \%$ (10)
Similarly, we define percentage relative-$CVaR_\alpha$ as:
$CVaR_\alpha(A|E) = 100 \times \frac{CVaR_\alpha(E) - CVaR_\alpha(A)}{|CVaR_\alpha(E)|}\ \%$ (11)
The higher these numbers, the lower the tail risk of agent A. We define Gain in Reliability (GR) as the difference in percentage relative tail risk between the RAIL and GAIL agents.
$GR\text{-}VaR = VaR_\alpha(RAIL|E) - VaR_\alpha(GAIL|E)$ (12)
$GR\text{-}CVaR = CVaR_\alpha(RAIL|E) - CVaR_\alpha(GAIL|E)$ (13) | 1707.06658#30 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
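Equations (10)–(13) in the chunk above define percentage relative tail risk and the Gain in Reliability (GR). The small sketch below computes these quantities from already-estimated VaR or CVaR values; the numbers in the example are placeholders, not results from the paper.

```python
def relative_tail_risk(expert_value, agent_value):
    """Percentage relative VaR/CVaR, Eqs. (10)-(11):
    100 * (metric(E) - metric(A)) / |metric(E)|."""
    return 100.0 * (expert_value - agent_value) / abs(expert_value)

def gain_in_reliability(expert, gail, rail):
    """Gain in Reliability, Eqs. (12)-(13): relative tail risk of the RAIL agent
    minus that of the GAIL agent, both measured against the same expert value."""
    return relative_tail_risk(expert, rail) - relative_tail_risk(expert, gail)

# Placeholder illustration: a positive GR means RAIL has lower tail risk than GAIL.
print(gain_in_reliability(expert=-3000.0, gail=-2000.0, rail=-2800.0))
```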
1707.06342 | 31 | We summarize the performance of the ThiNet approach in Table 1. Here, "ThiNet-Conv" refers to the model in which only the first 10 convolutional layers are pruned with compression rate 0.5 (i.e., half of the filters are removed in each layer up to conv4-3), as stated above. Because some useless filters are discarded, the pruned model can even outperform the original VGG-16 model. However, if we train this model from scratch, the top-1/top-5 accuracies are only 67.00%/87.45% respectively, which is much worse than our pruned network. Then the FC layers are removed, replaced with a GAP (global average pooling) layer, and fine-tuned for 12 epochs with the same hyper-parameters; this model is denoted "ThiNet-GAP". The classification accuracy of the GAP model is slightly lower than that of the original model, since the model size has been reduced dramatically. Further reduction can be obtained with a higher compression rate (denoted "ThiNet-Tiny"), which will be discussed later. The actual speed-up of | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
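The ThiNet chunk above (1707.06342 #31) obtains "ThiNet-GAP" by removing VGG-16's FC layers and fine-tuning with a global average pooling (GAP) layer in their place. The sketch below shows one way such a head could look in PyTorch; the final linear layer and the 512-channel width are assumptions for the unpruned backbone, since the chunk does not specify how the 1000 class scores are produced after pooling.

```python
import torch
from torch import nn
from torchvision.models import vgg16

def with_gap_head(model, in_channels=512, num_classes=1000):
    """Illustrative replacement of the FC classifier with GAP + a small head.

    in_channels refers to the unpruned VGG-16 backbone; a pruned model would
    pass its own final channel width instead.
    """
    model.avgpool = nn.AdaptiveAvgPool2d(1)                  # global average pooling
    model.classifier = nn.Linear(in_channels, num_classes)   # tiny head instead of the FC-4096 layers
    return model

model = with_gap_head(vgg16())  # assumes the torchvision VGG-16 definition
```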
1707.06658 | 32 | Environment (Observation dim, Action dim); VaR (Expert, GAIL, Ours); CVaR (Expert, GAIL, Ours): Reacher-v1 (11, 2): VaR 5.88, 9.55, 7.28; CVaR 6.34, 13.25, 9.41. Hopper-v1 (11, 3): VaR -3754.71, -1758.19, -3745.90; CVaR -2674.65, -1347.60, -3727.94. HalfCheetah-v1 (17, 6): VaR -3431.59, -2688.34, -3150.31; CVaR -3356.67, -2220.64, -2945.76. Walker-v1 (17, 6): VaR -5402.52, -5314.05, -5404.00; CVaR -2310.54, -3359.29, -3939.99. Humanoid-v1 (376, 17): VaR -9839.79, -2641.14, -9252.29; CVaR -4591.43, -1298.80, -4640.42
Table 3: Values of percentage relative tail risk measures and gains in reliability on using RAIL over GAIL for different continuous control tasks. | 1707.06658#32 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | [
{
"id": "1703.01703"
},
{
"id": "1704.07911"
},
{
"id": "1708.06374"
},
{
"id": "1604.07316"
},
{
"id": "1610.03295"
},
{
"id": "1606.01540"
}
] |
1707.06342 | 33 | Table 1. Pruning results of VGG-16 on ImageNet using ThiNet. Here, M/B means million/billion (10^6/10^9), respectively; f./b. denotes the forward/backward timing in milliseconds tested on one M40 GPU with batch size 32. Rows give Model, Top-1, Top-5, #Param., FLOPs^1, f./b.: Original^2: 68.34%, 88.44%, 138.34M, 30.94B, 189.92/407.56; ThiNet-Conv: 69.80%, 89.53%, 131.44M, 9.58B, 76.71/152.05; Train from scratch: 67.00%, 87.45%, 131.44M, 9.58B, 76.71/152.05; ThiNet-GAP: 67.34%, 87.92%, 8.32M, 9.34B, 71.73/145.51; ThiNet-Tiny: 59.34%, 81.97%, 1.32M, 2.01B, 29.51/55.83; SqueezeNet[15]: 57.67%, 80.39%, 1.24M, 1.72B, 37.30/68.62. ^1 In this paper, we only consider the FLOPs of convolution operations, which is commonly used for computation complexity comparison. ^2 For a fair comparison, the accuracy of the original VGG-16 model is evaluated on resized center-cropped | 1707.06342#33 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 | [
{
"id": "1602.07360"
},
{
"id": "1610.02391"
},
{
"id": "1607.03250"
}
] |
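Table 1 in the chunk above lists the parameter counts and FLOPs of each variant. The quick check below derives the compression and FLOPs-reduction ratios implied by those numbers, which line up with the roughly 16.6× compression and 3.3× FLOPs reduction quoted in the abstract for ThiNet-GAP.

```python
# Parameter counts (millions) and FLOPs (billions) copied from Table 1 above.
table1 = {
    "Original":    {"params_M": 138.34, "flops_B": 30.94},
    "ThiNet-Conv": {"params_M": 131.44, "flops_B": 9.58},
    "ThiNet-GAP":  {"params_M": 8.32,   "flops_B": 9.34},
    "ThiNet-Tiny": {"params_M": 1.32,   "flops_B": 2.01},
}

base = table1["Original"]
for name, row in table1.items():
    if name == "Original":
        continue
    compression = base["params_M"] / row["params_M"]
    flops_reduction = base["flops_B"] / row["flops_B"]
    print(f"{name}: {compression:.2f}x fewer parameters, {flops_reduction:.2f}x fewer FLOPs")
```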