Dataset columns (name: type, observed min-max value or string length):
doi: string (10-10)
chunk-id: int64 (0-936)
chunk: string (401-2.02k)
id: string (12-14)
title: string (8-162)
summary: string (228-1.92k)
source: string (31-31)
authors: string (7-6.97k)
categories: string (5-107)
comment: string (4-398)
journal_ref: string (8-194)
primary_category: string (5-17)
published: string (8-8)
updated: string (8-8)
references: list
1606.07947
41
[Daumé III et al.2009] Hal Daumé III, John Langford, and Daniel Marcu. 2009. Search-based Structured Prediction. Machine Learning. [Denil et al.2013] Misha Denil, Babak Shakibi, Laurent Dinh, Marc’Aurelio Ranzato, and Nando de Freitas. 2013. Predicting Parameters in Deep Learning. In Proceedings of NIPS. [Denton et al.2014] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting Linear Structure within Convolutional Neural Networks for Efficient Evaluation. In Proceedings of NIPS. [Geras et al.2016] Krzysztof J. Geras, Abdel-rahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richardson, and Charles Sutton. 2016. Blending LSTMs into CNNs. In Proceedings of ICLR Workshop. [Gillick et al.2016] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual Language Processing from Bytes. In Proceedings of NAACL.
1606.07947#41
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
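The abstract repeated in the rows above describes standard word-level knowledge distillation for NMT: the student is trained to match the teacher's per-token output distribution in addition to the gold references. The sketch below is an illustrative Python/PyTorch rendering of such a loss, not the authors' code; the interpolation weight `alpha`, the `temperature`, and the tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def word_level_distillation_loss(student_logits, teacher_logits, gold_ids,
                                 pad_id=0, alpha=0.5, temperature=1.0):
    """Interpolate cross-entropy on the gold words with a KL term that pulls
    the student's per-token distribution toward the teacher's.
    alpha, temperature and the shapes below are illustrative assumptions,
    not values taken from the paper."""
    # student_logits, teacher_logits: (batch, time, vocab); gold_ids: (batch, time)
    vocab = student_logits.size(-1)

    # Ordinary negative log-likelihood on the reference translation.
    nll = F.cross_entropy(student_logits.reshape(-1, vocab),
                          gold_ids.reshape(-1), ignore_index=pad_id)

    # Word-level knowledge distillation: match the teacher's soft targets.
    # (Padding positions are not masked here, to keep the sketch short.)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean")

    return alpha * kd + (1.0 - alpha) * nll
```

Sequence-level distillation, as the abstract notes, would instead train the student with ordinary cross-entropy on the teacher's beam-search outputs used in place of (or alongside) the references.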
1606.07947
42
[Han et al.2016] Song Han, Huizi Mao, and William J. Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In Proceedings of ICLR. [He et al.2014] Tianxing He, Yuchen Fan, Yanmin Qian, Tian Tan, and Kai Yu. 2014. Reshaping Deep Neural Network for Fast Decoding by Node-Pruning. In Proceedings of ICASSP. [Hinton et al.2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the Knowledge in a Neural Network. arXiv:1503.02531. [Jaderberg et al.2014] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up Convolutional Neural Networks with Low Rank Expansions. In BMVC. [Jozefowicz et al.2016] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410.
1606.07947#42
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
43
[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of EMNLP. [Kim et al.2016] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-Aware Neural Language Models. In Proceedings of AAAI. [Kuncoro et al.2016] Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser. In Proceedings of EMNLP. [LeCun et al.1990] Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal Brain Damage. In Proceedings of NIPS. [Li et al.2014] Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. 2014. Learning Small-Size DNN with Output-Distribution-Based Criteria. In Proceedings of INTERSPEECH.
1606.07947#43
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
44
[Li et al.2016] Jiwei Li, Michael Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversational Models. In Proceedings of NAACL 2016. [Liang et al.2006] Percy Liang, Alexandre Bouchard-Côté, Dan Klein, and Ben Taskar. 2006. An End-to-End Discriminative Approach to Machine Translation. In Proceedings of COLING-ACL. [Lin et al.2016] Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. 2016. Neural Networks with Few Multiplications. In Proceedings of ICLR. [Ling et al.2015a] Wang Ling, Tiago Luís, Luis Marujo, Ramon Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of EMNLP.
1606.07947#44
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
45
Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based Neural Machine Translation. arXiv:1511.04586. [Lu et al.2016] Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016. Learning Compact Recurrent Neural Networks. In Proceedings of ICASSP. [Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP. [Mariet and Sra2016] Zelda Mariet and Suvrit Sra. 2016. Diversity Networks. In Proceedings of ICLR. [Mou et al.2015] Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Distilling Word Embeddings: An Encoding Approach. arXiv:1506.04488. [Murray and Chiang2015] Kenton Murray and David Chiang. 2015. Auto-sizing Neural Networks: With Applications to N-Gram Language Models. In Proceedings of EMNLP. [Och2003] Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL.
1606.07947#45
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
46
[Och2003] Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL. [Prabhavalkar et al.2016] Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the Compression of Recurrent Neural Networks with an Application to LVCSR Acoustic Modeling for Embedded Speech Recognition. In Proceedings of ICASSP. [Romero et al.2015] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. FitNets: Hints for Thin Deep Nets. In Proceedings of ICLR. [Ross et al.2011] Stephane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. In Proceedings of AISTATS.
1606.07947#46
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
47
[Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of EMNLP. [See et al.2016] Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of Neural Machine Translation via Pruning. In Proceedings of CoNLL. [Serban et al.2016] Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of AAAI. [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In Proceedings of ACL. [Srinivas and Babu2015] Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free Parameter Pruning for Deep Neural Networks. BMVC.
1606.07947#47
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
48
[Srivastava et al.2015] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. 2015. Unsupervised Learning of Video Representations using LSTMs. In Proceedings of ICML. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of NIPS. [Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. A Neural Conversational Model. In Proceedings of ICML Deep Learning Workshop. [Vinyals et al.2015a] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Grammar as a Foreign Language. In Proceedings of NIPS. [Vinyals et al.2015b] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and Tell: A Neural Image Caption Generator. In Proceedings of CVPR.
1606.07947#48
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.07947
49
Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of ICML. [Zhou et al.2016] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. In Proceedings of TACL.
1606.07947#49
Sequence-Level Knowledge Distillation
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
http://arxiv.org/pdf/1606.07947
Yoon Kim, Alexander M. Rush
cs.CL, cs.LG, cs.NE
EMNLP 2016
null
cs.CL
20160625
20160922
[ { "id": "1506.04488" }, { "id": "1504.01483" }, { "id": "1508.01211" }, { "id": "1602.02410" }, { "id": "1602.02830" }, { "id": "1603.00810" }, { "id": "1511.04586" } ]
1606.06737
0
arXiv:1606.06737v3 [cond-mat.dis-nn] 23 Aug 2017. Criticality in Formal Languages and Statistical Physics∗. Henry W. Lin and Max Tegmark, Dept. of Physics, Harvard University, Cambridge, MA 02138 and Dept. of Physics & MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139. (Dated: June 23, 2017.) We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks. INTRODUCTION
1606.06737#0
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
1
Chris Olah∗ (Google Brain), Jacob Steinhardt (Stanford University), Paul Christiano (UC Berkeley), John Schulman (OpenAI), Dan Mané (Google Brain). Abstract: Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI. 1 Introduction
1606.06565#1
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
1
INTRODUCTION. Critical behavior, where long-range correlations decay as a power law with distance, has many important physics applications ranging from phase transitions in condensed matter experiments to turbulence and inflationary fluctuations in our early Universe. It has important applications beyond the traditional purview of physics as well [1–5] including applications to music [4, 6], genomics [7, 8] and human languages [9–12]. In Figure 1, we plot a statistic that can be applied to all of the above examples: the mutual information between two symbols as a function of the number of symbols in between the two symbols [9]. As discussed in previous works [9, 11, 13], the plot shows that the number of bits of information provided by a symbol about another drops roughly as a power law¹ with distance in sequences (defined as the number of symbols between the two symbols of interest) as diverse as the human genome, music by Bach, and text in English and French. Why is this, when so many other correlations in nature instead drop exponentially [17]?
1606.06737#1
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
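The chunk above plots the mutual information between two symbols as a function of the number of symbols between them. As a rough, self-contained illustration of that statistic (a naive plug-in estimator, not the bias-corrected estimator the paper's Appendix D describes), one might compute it from a symbol sequence like this:

```python
from collections import Counter
import math

def mutual_information_at_distance(sequence, d):
    """Naive plug-in estimate of I(X, Y) in bits for symbol pairs separated
    by d positions. This is not the estimator described in the paper's
    Appendix D; it only illustrates the quantity being plotted."""
    pairs = [(sequence[i], sequence[i + d]) for i in range(len(sequence) - d)]
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        # p(x,y) / (p(x) p(y)) simplifies to count * n / (count_x * count_y)
        mi += (count / n) * math.log2(count * n / (px[x] * py[y]))
    return mi

# Illustrative usage on a character sequence at separation 100:
# print(mutual_information_at_distance(list(open("corpus.txt").read()), 100))
```

On critical sequences such estimates fall off roughly as a power law in d, whereas for a Markov chain they fall off roughly exponentially (modulo the estimator's bias at large separations).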
1606.06565
2
1 Introduction. The last few years have seen rapid progress on long-standing, difficult problems in machine learning and artificial intelligence (AI), in areas as diverse as computer vision [82], video game playing [102], autonomous vehicles [86], and Go [140]. These advances have brought excitement about the positive potential for AI to transform medicine [126], science [59], and transportation [86], along with concerns about the privacy [76], security [115], fairness [3], economic [32], and military [16] implications of autonomous systems, as well as concerns about the longer-term implications of powerful AI [27, 167]. The authors believe that AI technologies are likely to be overwhelmingly beneficial for humanity, but we also believe that it is worth giving serious thought to potential challenges and risks. We strongly support work on privacy, security, fairness, economics, and policy, but in this document we discuss another class of problem which we believe is also relevant to the societal impacts of AI: the problem of accidents in machine learning systems. We define accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors. (∗These authors contributed equally.)
1606.06565#2
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
2
Better understanding the statistical properties of natural languages is interesting not only for geneticists, musicologists and linguists, but also for the machine learn- [footnote ∗: Published in Entropy, 19, 299 (2017): http://www.mdpi.com/1099-4300/19/7/299] [footnote 1: The power law discussed here should not be confused with another famous power law that occurs in natural languages: Zipf’s law [14]. Zipf’s law implies power law behavior in one-point statistics (in the histogram of word frequencies), whereas we are interested in two-point statistics. In the former case, the power law is in the frequency of words; in the latter case, the power law is in the separation between characters. One can easily cook up sequences which obey Zipf’s law but are not critical and do not exhibit a power law in the mutual information. However, there are models of certain physical systems where Zipf’s law follows from criticality [15, 16].]
1606.06737#2
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
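The footnote in the chunk above contrasts one-point statistics (Zipf's law in symbol frequencies) with the two-point statistics (mutual information versus separation) that define criticality. A minimal sketch of the "cooked up" counterexample it mentions follows; the Zipf exponent 2.0 and the sequence length are arbitrary choices for the illustration, not values from the paper.

```python
import numpy as np

# An i.i.d. Zipf-distributed sequence has power-law one-point statistics
# (Zipf's law in the frequency histogram), yet successive symbols are drawn
# independently, so the two-point mutual information I(X, Y) is zero at every
# separation; the sequence obeys Zipf's law without being critical.
rng = np.random.default_rng(0)
sequence = rng.zipf(2.0, size=100_000)

values, counts = np.unique(sequence, return_counts=True)
print(np.sort(counts)[::-1][:10])  # counts fall off roughly as a power law in rank
```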
1606.06565
3
[footnote ∗: These authors contributed equally.] not careful about the learning process, or commit other machine learning-related implementation errors. There is a large and diverse literature in the machine learning community on issues related to accidents, including robustness, risk-sensitivity, and safe exploration; we review these in detail below. However, as machine learning systems are deployed in increasingly large-scale, autonomous, open-domain situations, it is worth reflecting on the scalability of such approaches and understanding what challenges remain to reducing accident risk in modern machine learning systems. Overall, we believe there are many concrete open technical problems relating to accident prevention in machine learning systems.
1606.06565#3
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
3
ing community. Any tasks that involve natural language processing (e.g., data compression, speech-to-text conversion, auto-correction) exploit statistical properties of language, and can all be further improved if we can better understand these properties, even in the context of a toy model of these data sequences. Indeed, the difficulty of automatic natural language processing has been known at least as far back as Turing, whose eponymous test [22] relies on this fact. A tempting explanation is that natural language is something uniquely human. But this is far from a satisfactory one, especially given the recent successes of machines at performing tasks as complex and as “human” as playing Jeopardy! [23], chess [24], Atari games [25] and Go [26]. We will show that computer descriptions of language suffer from a much simpler problem that involves no talk about meaning or being non-human: they tend to get the basic statistical properties wrong.
1606.06737#3
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
4
There has been a great deal of public discussion around accidents. To date much of this discussion has highlighted extreme scenarios such as the risk of misspecified objective functions in superintelligent agents [27]. However, in our opinion one need not invoke these extreme scenarios to productively discuss accidents, and in fact doing so can lead to unnecessarily speculative discussions that lack precision, as noted by some critics [38, 85]. We believe it is usually most productive to frame accident risk in terms of practical (though often quite general) issues with modern ML techniques. As AI capabilities advance and as AI systems take on increasingly important societal functions, we expect the fundamental challenges discussed in this paper to become increasingly important. The more successfully the AI and machine learning communities are able to anticipate and understand these fundamental technical challenges, the more successful we will ultimately be in developing increasingly useful, relevant, and important AI systems.
1606.06565#4
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
4
To illustrate this point, consider Markov models of natural language. From a linguistics point of view, it has been known for decades that such models are fundamentally unsuitable for modeling human language [27]. However, linguistic arguments typically do not produce an observable that can be used to quantitatively falsify any Markovian model of language. Instead, these arguments rely on highly specific knowledge about the data — in this case, an understanding of the language’s grammar. This knowledge is non-trivial for a human speaker to acquire, much less an artificial neural network. In contrast, the mutual information is comparatively trivial to observe, requiring no specific knowledge about the data, and it immediately indicates that natural languages would be poorly approximated by a Markov/hidden Markov model as we will demonstrate. Furthermore, the mutual information decay may offer a partial explanation of the impressive progress that has been made by using deep neural networks for natural language processing (see, e.g., [28–32]). (For recent reviews of deep neural networks, see [33, 34].) We will see that a key reason that currently popular recurrent neural [Figure 1: mutual information I(X, Y) in bits versus distance between symbols d(X, Y).]
1606.06737#4
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
5
Our goal in this document is to highlight a few concrete safety problems that are ready for experimentation today and relevant to the cutting edge of AI systems, as well as reviewing existing literature on these problems. In Section 2, we frame mitigating accident risk (often referred to as “AI safety” in public discussions) in terms of classic methods in machine learning, such as supervised classification and reinforcement learning. We explain why we feel that recent directions in machine learning, such as the trend toward deep reinforcement learning and agents acting in broader environments, suggest an increasing relevance for research around accidents. In Sections 3-7, we explore five concrete problems in AI safety. Each section is accompanied by proposals for relevant experiments. Section 8 discusses related efforts, and Section 9 concludes. 2 Overview of Research Problems. Very broadly, an accident can be described as a situation where a human designer had in mind a certain (perhaps informally specified) objective or task, but the system that was designed and deployed for that task produced harmful and unexpected results. This issue arises in almost any engineering discipline, but may be particularly important to address when building AI systems [146]. We can categorize safety problems according to where in the process things went wrong.
1606.06565#5
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
5
FIG. 1: Decay of mutual information with separation. Here the mutual information in bits per symbol is shown as a function of separation d(X, Y) = |i − j|, where the symbols X and Y are located at positions i and j in the sequence in question, and shaded bands correspond to 1σ error bars. The statistics were computed using a sliding window using an estimator for the mutual information detailed in Appendix D. All measured curves are seen to decay roughly as power laws, explaining why they cannot be accurately modeled as Markov processes — for which the mutual information instead plummets exponentially (the example shown has I ∝ e^{−d/6}). The measured curves are seen to be qualitatively similar to that of a famous critical system in physics: a 1D slice through a critical 2D Ising model, where the slope is −1/2. The human genome data consists of 177,696,512 base pairs {A, C, T, G} from chromosome 5 from the National Center for Biotechnology Information [18], with unknown base pairs omitted. The Bach data consists of 5727 notes from Partita No. 2 [19], with all notes mapped into a 12-symbol alphabet consisting
1606.06737#5
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
6
First, the designer may have specified the wrong formal objective function, such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data. Negative side effects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions. In “negative side effects”, the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change. In “reward hacking”, the objective function that the designer writes down admits of some clever “easy” solution that formally maximizes it but perverts the spirit of the designer’s intent (i.e. the objective function can be “gamed”), a generalization of the wireheading problem.
1606.06565#6
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
6
base pairs omitted. The Bach data consists of 5727 notes from Partita No. 2 [19], with all notes mapped into a 12-symbol alphabet consisting of the 12 half-tones {C, C#, D, D#, E, F, F#, G, G#, A, A#, B}, with all timing, volume and octave information discarded. The three text corpuses are 100 MB from Wikipedia [20] (206 symbols), the first 114 MB of a French corpus [21] (185 symbols) and 27 MB of English articles from slate.com (143 symbols). The large long range information appears to be dominated by poems in the French sample and by html-like syntax in the Wikipedia sample.
1606.06737#6
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
7
Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples. “Scalable oversight” (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function. Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model. “Safe exploration” (Section 6) discusses how to ensure that exploratory actions in RL agents don’t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration. “Robustness to distributional shift” (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different than what was seen during training.
1606.06565#7
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
7
networks with long-short-term memory (LSTM) [35] do much better is that they can replicate critical behavior, but that even they can be further improved, since they can under-predict long-range mutual information. While motivated by questions about natural languages and other data sequences, we will explore the information-theoretic properties of formal languages. For simplicity, we focus on probabilistic regular grammars and probabilistic context-free grammars (PCFGs). Of course, real-world data sources like English are likely more complex than a context-free grammar [36], just as a real-world magnet is more complex than the Ising model. However, these formal languages serve as toy models that capture some aspects of the real data source, and the theoretical techniques we develop for studying these toy models might be adapted to more complex formal languages. Of course, independent of their connection to natural languages, formal languages are also theoretically interesting in their own right and have connections to, e.g., group theory [37]. This paper is organized as follows. In Section II, we show how Markov processes exhibit exponential decay in mutual information with scale; we give a rigorous proof of this and other results in a series of appendices. To enable such proofs, we introduce a convenient quantity that we term rational mutual information, which bounds
1606.06737#7
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
8
For concreteness, we will illustrate many of the accident risks with reference to a fictional robot whose job is to clean up messes in an office using common cleaning tools. We return to the example of the cleaning robot throughout the document, but here we begin by illustrating how it could behave undesirably if its designers fall prey to each of the possible failure modes: • Avoiding Negative Side Effects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb? • Avoiding Reward Hacking: How can we ensure that the cleaning robot won’t game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won’t find any messes, or cover over messes with materials it can’t see through, or simply hide when humans are around so they can’t tell it about new types of messes.
1606.06565#8
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
8
the mutual information and converges to it in the near-independence limit. In Section III, we define a subclass of generative grammars and show that they exhibit critical behavior with power law decays. We then generalize our discussion using Bayesian nets and relate our findings to theorems in statistical physics. In Section IV, we discuss our results and explain how LSTM RNNs can reproduce critical behavior by emulating our generative grammar model. # II. MARKOV IMPLIES EXPONENTIAL DECAY For two discrete random variables X and Y, the following definitions of mutual information are all equivalent: $I(X,Y) = S(X) + S(Y) - S(X,Y) = D\left(P(X,Y)\,\|\,P(X)P(Y)\right) = \sum_{a,b} P(a,b)\,\log_B \frac{P(a,b)}{P(a)P(b)}$, (1)
1606.06737#8
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
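The chunk above lists equivalent definitions of the mutual information; a small self-contained check of the identity I(X,Y) = S(X) + S(Y) − S(X,Y) on an explicit joint distribution may help make it concrete. The toy 2×2 distribution is invented for illustration.

```python
import numpy as np

def entropy(p, base=2.0):
    """Shannon entropy of a probability vector (0 log 0 treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

def mutual_information(joint, base=2.0):
    """I(X,Y) = S(X) + S(Y) - S(X,Y) for a joint probability matrix (rows: X, cols: Y)."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px, base) + entropy(py, base) - entropy(joint.ravel(), base)

joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])                      # correlated toy distribution
print(mutual_information(joint))                    # positive, since X and Y share information
print(mutual_information(np.outer(joint.sum(1), joint.sum(0))))  # ~0 for the independent version
```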
1606.06565
9
• Scalable Oversight: How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent—can the robot find a way to do the right thing despite limited information? • Safe Exploration: How do we ensure that the cleaning robot doesn’t make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea. • Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment different from its training environment? For example, strategies it learned for cleaning an office might be dangerous on a factory workfloor.
1606.06565#9
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
9
where $S = \langle -\log_B P \rangle$ is the Shannon entropy [38] and $D\left(P(X,Y)\,\|\,P(X)P(Y)\right)$ is the Kullback-Leibler divergence between the joint probability distribution and the product of the individual marginals. If the base of the logarithm is taken to be B = 2, then I(X, Y) is measured in bits. The mutual information can be interpreted as how much one variable knows about the other: I(X, Y) is the reduction in the number of bits needed to specify X once Y is specified. Equivalently, it is the number of encoding bits saved by using the true joint probability P(X, Y) instead of approximating X and Y as independent. It is thus a measure of statistical dependencies between X and Y. Although it is more conventional to measure quantities such as the correlation coefficient ρ in statistics and statistical physics, the mutual information is more suitable for generic data, since it does not require that the variables X and Y are numbers or have any algebraic structure, whereas ρ requires that we are able to multiply X · Y and average. Whereas it makes sense to multiply numbers, it is meaningless to multiply or average two characters such as “!” and
1606.06737#9
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
10
There are several trends which we believe point towards an increasing need to address these (and other) safety problems. First is the increasing promise of reinforcement learning (RL), which allows agents to have a highly intertwined interaction with their environment. Some of our research problems only make sense in the context of RL, and others (like distributional shift and scalable oversight) gain added complexity in an RL setting. Second is the trend toward more complex agents and environments. “Side effects” are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future. Third is the general trend towards increasing autonomy in AI systems. Systems that simply output a recommendation to human users, such as speech systems, typically have relatively limited potential to cause harm. By contrast, systems that exert direct control over the world, such as machines controlling industrial processes, can cause harms in a way that humans cannot necessarily correct or oversee. While safety problems can exist without any of these three trends, we consider each trend to be a possible amplifier on such challenges. Together, we believe these trends suggest an increasing role for research on accidents.
1606.06565#10
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
10
The rest of this paper is largely a study of the mutual information between two random variables that are realizations of a discrete stochastic process, with some separation τ in time. More concretely, we can think of sequences {X1, X2, X3, · · · } of random variables, where each one might take values from some finite alphabet. For example, if we model English as a discrete stochastic process and take τ = 2, X could represent the first character (“F”) in this sentence, whereas Y could represent the third character (“r”) in this sentence. In particular, we start by studying the mutual information function of a Markov process, which is analytically tractable. Let us briefly recapitulate some basic facts about Markov processes (see, e.g., for a pedagogical review). A Markov process is defined by a matrix M of conditional probabilities $M_{ab} = P(X_{t+1} = a \,|\, X_t = b)$. Such Markov matrices (also known as stochastic matrices) thus have the properties $M_{ab} \ge 0$ and $\sum_a M_{ab} = 1$. They fully specify the dynamics of the model: $p_{t+1} = M p_t$, (2)
1606.06737#10
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
11
When discussing the problems in the remainder of this document, we will focus for concreteness on either RL agents or supervised learning systems. These are not the only possible paradigms for AI or ML systems, but we believe they are sufficient to illustrate the issues we have in mind, and that similar issues are likely to arise for other kinds of AI systems. Finally, the focus of our discussion will differ somewhat from section to section. When discussing the problems that arise as part of the learning process (distributional shift and safe exploration), where there is a sizable body of prior work, we devote substantial attention to reviewing this prior work, although we also suggest open problems with a particular focus on emerging ML systems. When discussing the problems that arise from having the wrong objective function (reward hacking and side effects, and to a lesser extent scalable supervision), where less prior work exists, our aim is more exploratory—we seek to more clearly define the problem and suggest possible broad avenues of attack, with the understanding that these avenues are preliminary ideas that have not been fully fleshed out. Of course, we still review prior work in these areas, and we draw attention to relevant adjacent areas of research whenever possible. # 3 Avoiding Negative Side Effects
1606.06565#11
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
11
$p_{t+1} = M p_t$, (2) where $p_t$ is a vector with components P(Xt = a) that specifies the probability distribution at time t. Let λi denote the eigenvalues of M, sorted by decreasing magnitude: |λ1| ≥ |λ2| ≥ |λ3|... All Markov matrices have |λi| ≤ 1, which is why blowup is avoided when equation (2) is iterated, and λ1 = 1, with the corresponding eigenvector giving a stationary probability distribution µ satisfying Mµ = µ. In addition, two mild conditions are usually imposed on Markov matrices: first, M is irreducible, meaning that every state is accessible from every other state (otherwise, we could decompose the Markov process into separate Markov processes). Second, to avoid processes like 1 → 2 → 1 → 2 · · · that will never converge, we take the Markov process to be aperiodic. It is easy to show using the Perron-Frobenius theorem that being irreducible and aperiodic implies |λ2| < 1, and therefore that µ is unique.
1606.06737#11
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
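As a sketch of the Markov facts recapitulated in the chunk above (λ1 = 1, |λ2| < 1 for an irreducible aperiodic chain, and a unique stationary µ with Mµ = µ), the snippet below uses the paper's column-stochastic convention p_{t+1} = M p_t; the specific 3-state matrix is an invented example, not data from the paper.

```python
import numpy as np

# Column-stochastic Markov matrix: M[a, b] = P(X_{t+1} = a | X_t = b).
M = np.array([[0.8, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.1, 0.1, 0.5]])

eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-np.abs(eigvals))       # sort by decreasing magnitude
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(np.abs(eigvals))                     # |lambda_1| = 1 > |lambda_2| >= |lambda_3|

# Stationary distribution: the eigenvector of lambda_1 = 1, normalized to sum to 1.
mu = np.real(eigvecs[:, 0])
mu = mu / mu.sum()
print(mu, np.allclose(M @ mu, mu))         # M mu = mu
```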
1606.06565
12
# 3 Avoiding Negative Side Effects Suppose a designer wants an RL agent (for example our cleaning robot) to achieve some goal, like moving a box from one side of a room to the other. Sometimes the most effective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given reward only for moving the box, it will probably knock over the vase. If we’re worried in advance about the vase, we can always give the agent negative reward for knocking it over. But what if there are many different kinds of “vase”—many disruptive things the agent could do to the environment, like shorting out an electrical socket or damaging the walls of the room? It may not be feasible to identify and penalize every possible disruption.
1606.06565#12
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
13
More broadly, for an agent operating in a large, multifaceted environment, an objective function that focuses on only one aspect of the environment may implicitly express indifference over other aspects of the environment¹. An agent optimizing this objective function might thus engage in major disruptions of the broader environment if doing so provides even a tiny advantage for the task at hand. Put differently, objective functions that formalize “perform task X” may frequently give undesired results, because what the designer really should have formalized is closer to “perform task X subject to common-sense constraints on the environment,” or perhaps “perform task X but avoid side effects to the extent possible.” Furthermore, there is reason to expect side effects to be negative on average, since they tend to disrupt the wider environment away from a status quo state that may reflect human preferences. A version of this problem has been discussed informally by [13] under the heading of “low impact agents.” ¹Intuitively, this seems related to the frame problem, an obstacle in efficient specification for knowledge representation raised by [95].
1606.06565#13
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
13
Theorem 1: Let M be a Markov matrix that generates a Markov process. If M is irreducible and aperiodic, then the asymptotic behavior of the mutual information I(t1, t2) is exponential decay toward zero for |t2 − t1| ≫ 1, with a decay timescale set by $\log|\lambda_2|^{-1}$, where λ2 is the second largest eigenvalue of M. If M is reducible or periodic, I can instead decay to a constant; no Markov process whatsoever can produce power-law decay. Suppose M is irreducible and aperiodic, so that $p_t \to \mu$ as $t \to \infty$ as mentioned above. This convergence of one-point statistics, e.g., $p_t$, has been well-studied [40]. However, one can also study higher order statistics such as the joint probability distribution for two points in time. For succinctness, let us write $P(a,b) = P(X = a, Y = b)$, where $X = X_{t_1}$, $Y = X_{t_2}$, and $\tau = |t_2 - t_1|$. We are interested in the asymptotic situation where the Markov process has converged to its steady state, so the marginal distribution $P(a) = \sum_b P(a,b) = \mu_a$, independently of time.
1606.06737#13
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
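The reducible/periodic caveat in Theorem 1 above is easy to see numerically: for the deterministic period-2 chain 1 → 2 → 1 → 2 · · ·, knowing X_t pins down X_{t+τ} exactly, so the mutual information never decays. The two-state example below is a hypothetical illustration of that caveat, not code from the paper.

```python
import numpy as np

M = np.array([[0.0, 1.0],          # periodic chain: state 1 -> 2 -> 1 -> 2 ...
              [1.0, 0.0]])
p0 = np.array([0.5, 0.5])          # uniform over the two phases

def mutual_information_at_lag(tau):
    """I(X_0, X_tau) in bits for the chain started from p0."""
    Mtau = np.linalg.matrix_power(M, tau)
    joint = Mtau * p0[np.newaxis, :]               # joint[b, a] = P(X_0 = a, X_tau = b)
    pa, pb = joint.sum(axis=0), joint.sum(axis=1)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(pb, pa)[mask]))

print([round(mutual_information_at_lag(t), 3) for t in (1, 5, 50)])   # stays at 1.0 bit
```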
1606.06565
14
¹Intuitively, this seems related to the frame problem, an obstacle in efficient specification for knowledge representation raised by [95]. As with the other sources of mis-specified objective functions discussed later in this paper, we could choose to view side effects as idiosyncratic to each individual task—as the responsibility of each individual designer to capture as part of designing the correct objective function. However, side effects can be conceptually quite similar even across highly diverse tasks (knocking over furniture is probably bad for a wide variety of tasks), so it seems worth trying to attack the problem in generality. A successful approach might be transferable across tasks, and thus help to counteract one of the general mechanisms that produces wrong objective functions. We now discuss a few broad approaches to attacking this problem: • Define an Impact Regularizer: If we don’t want side effects, it seems natural to penalize “change to the environment.” The idea would not be to stop the agent from ever having an impact, but to give it a preference for ways to achieve its goals with minimal side effects, or to give the agent a limited “budget” of impact. The challenge is that we need to formalize “change to the environment.”
1606.06565#14
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
14
If the joint probability distribution approximately factorizes as $P(a,b) \approx \mu_a \mu_b$ for sufficiently large and well-separated times t1 and t2 (as we will soon prove), the mutual information will be small. We can therefore Taylor expand the logarithm from equation (1) around the point $P(a,b) = P(a)P(b)$, giving $I(X,Y) \approx \frac{1}{\ln B} \sum_{a,b} P(a,b)\left[\frac{P(a,b)}{P(a)P(b)} - 1\right] = \frac{I_R(X,Y)}{\ln B}$, where we have defined the rational mutual information $I_R(X,Y) \equiv \sum_{a,b} \frac{P(a,b)^2}{P(a)P(b)} - 1$. For comparing the rational mutual information with the usual mutual information, it will be convenient to take e as the base B of the logarithm. We derive useful properties of the rational mutual information in Appendix A. To mention just one, we note that the rational mutual information is not just asymptotically equal to the mutual information in the limit of near-independence, but it also provides a strict upper bound on it: 0 ≤ I ≤ IR.
1606.06737#14
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
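To make the rational mutual information concrete, here is a small numerical check of the bound 0 ≤ I ≤ I_R stated above, using natural logarithms (B = e) as the chunk suggests. The explicit formula for I_R follows the reconstruction above and should be treated as an assumption; the toy joint distribution is invented, and the bound itself just reflects the elementary inequality ln x ≤ x − 1.

```python
import numpy as np

def mutual_information(joint):
    """I(X,Y) in nats from a joint probability matrix."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    ratio = joint / np.outer(px, py)
    mask = joint > 0
    return np.sum(joint[mask] * np.log(ratio[mask]))

def rational_mutual_information(joint):
    """I_R(X,Y) = sum_ab P(a,b)^2 / (P(a)P(b)) - 1 (definition as reconstructed above)."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return np.sum(joint ** 2 / np.outer(px, py)) - 1.0

joint = np.array([[0.27, 0.23],
                  [0.23, 0.27]])                    # weakly correlated toy distribution
I, IR = mutual_information(joint), rational_mutual_information(joint)
print(I, IR, 0.0 <= I <= IR)                        # ln x <= x - 1 termwise gives I <= I_R
```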
1606.06565
15
A very naive approach would be to penalize state distance, d(si, s0), between the present state si and some initial state s0. Unfortunately, such an agent wouldn’t just avoid changing the environment—it would resist any other source of change, including the natural evolution of the environment and the actions of any other agents! A slightly more sophisticated approach might involve comparing the future state under the agent’s current policy, to the future state (or distribution over future states) under a hypothetical policy πnull where the agent acted very passively (for instance, where a robot just stood in place and didn’t move any actuators). This attempts to factor out changes that occur in the natural course of the environment’s evolution, leaving only changes attributable to the agent’s intervention. However, defining the baseline policy πnull isn’t necessarily straightforward, since suddenly ceasing your course of action may be anything but passive, as in the case of carrying a heavy box. Thus, another approach could be to replace the null action with a known safe (e.g. low side effect) but suboptimal policy, and then seek to improve the policy from there, somewhat reminiscent of reachability analysis [93, 100] or robust policy improvement [73, 111].
1606.06565#15
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
15
Let us without loss of generality take t2 > t1. Then iterating equation (2) τ times gives $P(b|a) = (M^\tau)_{ba}$. Since $P(a,b) = P(a)P(b|a)$, we obtain $I_R + 1 = \sum_{a,b} \frac{P(a,b)^2}{P(a)P(b)} = \sum_{a,b} \frac{P(a)\,P(b|a)^2}{P(b)} = \sum_{a,b} \frac{\mu_a \left[(M^\tau)_{ba}\right]^2}{\mu_b}$. We will continue the proof by considering the typical case where the eigenvalues of M are all distinct (non-degenerate) and the Markov matrix is irreducible and aperiodic; we will generalize to the other cases (which form a set of measure zero) in Appendix B. Since the eigenvalues are distinct, we can diagonalize M by writing $M = BDB^{-1}$ (5) for some invertible matrix B and a diagonal matrix D whose diagonal elements are the eigenvalues: Dii = λi. Raising equation (5) to the power τ gives $M^\tau = BD^\tau B^{-1}$, i.e., $(M^\tau)_{ba} = \sum_c \lambda_c^\tau\, B_{bc} (B^{-1})_{ca}$. (6)
1606.06737#15
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06737
16
(M’)ia = D> AZ Boc(Bo ea (6) Since M is non-degenerate, irreducible and aperiodic, 1 = λ1 > |λ2| > · · · > |λn|, so all terms except the first in the sum of equation (6) decay exponentially with τ , at a decay rate that grows with c. Defining r = λ3/λ2, we have (Moa = By By +3 [Bo2By,' + O(r7)] = py +AzAba, (7) where we have made use of the fact that an irreducible and aperiodic Markov process must converge to its sta- tionary distribution for large τ , and we have defined A as the expression in square brackets above, satisfying limτ →∞ Aba = Bb2B−1 b Aba = 0 in or- der for M to be properly normalized. Substituting equation into equation (3) and using the facts that 3°, fa = 1 and Y>, Apa = 0, we obtain 3°, fa = 1 and Y>, Pay [(M’),
1606.06737#16
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
17
• Learn an Impact Regularizer: An alternative, more flexible approach is to learn (rather than define) a generalized impact regularizer via training over many tasks. This would be an instance of transfer learning. Of course, we could attempt to just apply transfer learning directly to the tasks themselves instead of worrying about side effects, but the point is that side effects may be more similar across tasks than the main goal is. For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects. Separating the side effect component from the task component, by training them with separate parameters, might substantially speed transfer learning in cases where it makes sense to retain one component but not the other. This would be similar to model-based RL approaches that attempt to transfer a learned dynamics model but not the value-function [155], the novelty being the isolation of side effects rather than state dynamics as the transferrable component. As an added advantage, regularizers that were known or certified to produce safe behavior on one task might be easier to establish as safe on other tasks.
1606.06565#17
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
17
Substituting equation (7) into equation (3) and using the facts that $\sum_a \mu_a = 1$ and $\sum_b A_{ba} = 0$, we obtain $I_R = \sum_{a,b} \frac{\mu_a \left[(M^\tau)_{ba}\right]^2}{\mu_b} - 1 = \sum_{a,b} \frac{\mu_a}{\mu_b}\left( \mu_b^2 + 2\mu_b \lambda_2^\tau A_{ba} + \lambda_2^{2\tau} A_{ba}^2 \right) - 1 = \lambda_2^{2\tau} \left( \sum_{a,b} \frac{\mu_a}{\mu_b} A_{ba}^2 \right) = C \lambda_2^{2\tau}$, (8) where the term in the last parentheses is of the form $C = C_0 + O(r^\tau)$. In summary, we have shown that an irreducible and aperiodic Markov process with non-degenerate eigenvalues cannot produce critical behavior, because the mutual information decays exponentially. In fact, no Markov processes can, as we show in Appendix B.
1606.06737#17
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
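A numerical illustration of the exponential decay derived above: for an irreducible, aperiodic toy chain we can form the exact joint distribution P(a,b) = µ_a (M^τ)_{ba} at separation τ and watch the mutual information fall off at a rate set by λ2, as equation (8) predicts for I_R. The 3-state matrix is the same kind of invented example used earlier, not data from the paper.

```python
import numpy as np

M = np.array([[0.8, 0.3, 0.2],     # column-stochastic: M[a, b] = P(next = a | now = b)
              [0.1, 0.6, 0.3],
              [0.1, 0.1, 0.5]])

lam2 = np.sort(np.abs(np.linalg.eigvals(M)))[::-1][1]   # second-largest |eigenvalue|

mu = np.ones(3) / 3                # stationary distribution via power iteration
for _ in range(200):
    mu = M @ mu

def mutual_information_at_lag(tau):
    """Exact I(X_t, X_{t+tau}) in nats for the stationary chain."""
    Mtau = np.linalg.matrix_power(M, tau)
    joint = Mtau * mu[np.newaxis, :]          # joint[b, a] = P(now = a, later = b)
    pa, pb = joint.sum(axis=0), joint.sum(axis=1)
    return np.sum(joint * np.log(joint / np.outer(pb, pa)))

for tau in [1, 2, 4, 8, 16]:
    # Equation (8) suggests the decay is proportional to lam2**(2*tau) for large tau.
    print(tau, mutual_information_at_lag(tau), lam2 ** (2 * tau))
```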
1606.06565
18
• Penalize Influence: In addition to not doing things that have side effects, we might also prefer the agent not get into positions where it could easily do things that have side effects, even though that might be convenient. For example, we might prefer our cleaning robot not bring a bucket of water into a room full of sensitive electronics, even if it never intends to use the water in that room.
1606.06565#18
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
18
To hammer the final nail into the coffin of Markov processes as models of critical behavior, we need to close a final loophole. Their fundamental problem is lack of long-term memory, which can be superficially overcome by redefining the state space to include symbols from the past. For example, if the current state is one of n and we wish the process to depend on the last τ symbols, we can define an expanded state space consisting of the n^τ possible sequences of length τ, and a corresponding n^τ × n^τ Markov matrix (or an n^τ × n table of conditional probabilities for the next symbol given the last τ symbols). Although such a model could fit the curves in Figure I in theory, it cannot in practice, because M requires far more parameters than there are atoms in our observable universe (∼ 10^78): even for as few as n = 4 symbols and τ = 1000, the Markov process involves over 4^1000 ∼ 10^602 parameters. Scale-invariance aside, we can also see how Markov processes fail simply by considering the structure of text. To model
1606.06737#18
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
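The parameter-count argument in the chunk above is easy to reproduce: an order-τ Markov model over n symbols must condition on n^τ distinct contexts. The snippet just evaluates the numbers quoted there plus one extra invented case; working in log10 avoids constructing the astronomically large integer.

```python
from math import log10

def log10_markov_contexts(n, tau):
    """log10 of the number of length-tau contexts an order-tau Markov model conditions on."""
    return tau * log10(n)

for n, tau in [(4, 1000), (12, 8)]:
    print(f"n={n}, tau={tau}: about 10^{log10_markov_contexts(n, tau):.0f} contexts")
```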
1606.06565
19
bring a bucket of water into a room full of sensitive electronics, even if it never intends to use the water in that room. There are several information-theoretic measures that attempt to capture an agent’s potential for influence over its environment, which are often used as intrinsic rewards. Perhaps the best-known such measure is empowerment [131], the maximum possible mutual information between the agent’s potential future actions and its potential future state (or equivalently, the Shannon capacity of the channel between the agent’s actions and the environment). Empowerment is often maximized (rather than minimized) as a source of intrinsic reward. This can cause the agent to exhibit interesting behavior in the absence of any external rewards, such as avoiding walls or picking up keys [103]. Generally, empowerment-maximizing agents put themselves in a position to have large influence over the environment. For example, an agent locked in a small room that can’t get out would have low empowerment, while an agent with a key would have higher empowerment since it can venture into and affect the outside world within a few timesteps. In the current context, the idea would be to penalize (minimize) empowerment as a regularization term, in an attempt to reduce potential impact.
1606.06565#19
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
20
This idea as written would not quite work, because empowerment measures precision of control over the environment more than total impact. If an agent can press or not press a button to cut electrical power to a million houses, that only counts as one bit of empowerment (since the action space has only one bit, its mutual information with the environment is at most one bit), while obviously having a huge impact. Conversely, if there’s someone in the environment scribbling down the agent’s actions, that counts as maximum empowerment even if the impact is low. Furthermore, naively penalizing empowerment can also create perverse incentives, such as destroying a vase in order to remove the option to break it in the future. Despite these issues, the example of empowerment does show that simple measures (even purely information-theoretic ones!) are capable of capturing very general notions of influence on the environment. Exploring variants of empowerment penalization that more precisely capture the notion of avoiding influence is a potential challenge for future research.
1606.06565#20
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
20
We can significantly generalize Theorem 1 into a theorem about hidden Markov models (HMM). In an HMM, the observed sequence X_1, · · · , X_n is only part of the picture: there are hidden variables Y_1, · · · , Y_n that themselves form a Markov chain. We can think of an HMM as follows: imagine a machine with an internal state space Y that updates itself according to some Markovian dynamics. The internal dynamics are never observed, but at each time-step it also produces an output X_i (via Y_i → X_i); these outputs form the sequence which we can observe. These models are quite general and are used to model a wealth of empirical data (see, e.g., [41]).
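As a concrete illustration (not taken from the paper), here is a minimal sketch of sampling an observed sequence from such an HMM; the hidden-transition matrix M and emission matrix G are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

M = np.array([[0.9, 0.1],      # P(Y_{i+1} | Y_i): hidden-state transition matrix
              [0.2, 0.8]])
G = np.array([[0.7, 0.3],      # P(X_i | Y_i): emission probabilities
              [0.1, 0.9]])

def sample_hmm(n: int, M: np.ndarray, G: np.ndarray) -> tuple[list[int], list[int]]:
    """Return (hidden states Y, observations X) of length n."""
    y = rng.integers(M.shape[0])                    # arbitrary initial hidden state
    ys, xs = [], []
    for _ in range(n):
        xs.append(int(rng.choice(G.shape[1], p=G[y])))   # emit X_i given Y_i
        ys.append(int(y))
        y = int(rng.choice(M.shape[0], p=M[y]))          # hidden Markov step
    return ys, xs

ys, xs = sample_hmm(20, M, G)
print(xs)   # the observed sequence; ys is never "seen" by a downstream model
```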
1606.06737#20
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
21
• Multi-Agent Approaches: Avoiding side effects can be seen as a proxy for the thing we really care about: avoiding negative externalities. If everyone likes a side effect, there’s no need to avoid it. What we’d really like to do is understand all the other agents (including humans) and make sure our actions don’t harm their interests. One approach to this is Cooperative Inverse Reinforcement Learning [66], where an agent and a human work together to achieve the human’s goals. This concept can be applied to situations where we want to make sure a human is not blocked by an agent from shutting the agent down if it exhibits undesired behavior [67] (this “shutdown” issue is an interesting problem in its own right, and is also studied in [113]). However we are still a long way away from practical systems that can build a rich enough model to avoid undesired side effects in a general sense.
1606.06565#21
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
21
These models are quite general and are used to model a wealth of empirical data (see, e.g., [41]). Theorem 2: Let M be a Markov matrix that generates the transitions between hidden states Y_i in an HMM. If M is irreducible and aperiodic, then the asymptotic behavior of the mutual information I(t1, t2) is exponential decay toward zero for |t2 − t1| ≫ 1, with decay timescale 1/log(1/|λ2|), where λ2 is the second largest eigenvalue of M. This theorem is a strict generalization of Theorem 1, since given any Markov process M with corresponding matrix M, we can construct an HMM that reproduces the exact statistics of M by using M as the transition matrix between the Y’s and generating X_i from Y_i by simply setting X_i = Y_i with probability 1.
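A quick numerical check of this statement (not from the paper; the matrix is an arbitrary example) reads off the second eigenvalue of a transition matrix and converts it into the decay timescale quoted above.

```python
import numpy as np

M = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # rows: P(next hidden state | current hidden state)

# Eigenvalues of the transition matrix; the largest is always 1 for a stochastic matrix.
eigvals = sorted(np.abs(np.linalg.eigvals(M)), reverse=True)
lam2 = eigvals[1]

timescale = 1.0 / np.log(1.0 / lam2)  # correlations ~ lam2**tau = exp(-tau / timescale)
print(f"|lambda_2| = {lam2:.3f}, decay timescale ~ {timescale:.2f} steps")
```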
1606.06737#21
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
22
Another idea might be a “reward autoencoder” (thanks to Greg Wayne for suggesting this idea), which tries to encourage a kind of “goal transparency” where an external observer can easily infer what the agent is trying to do. In particular, the agent’s actions are interpreted as an encoding of its reward function, and we might apply standard autoencoding techniques to ensure that this can be decoded accurately. Actions that have lots of side effects might be more difficult to decode uniquely to their original goal, creating a kind of implicit regularization that penalizes side effects. • Reward Uncertainty: We want to avoid unanticipated side effects because the environment is already pretty good according to our preferences—a random change is more likely to be very bad than very good. Rather than giving an agent a single reward function, it could be
1606.06565#22
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
22
The proof is very similar in spirit to the proof of Theorem 1, so we will just present a sketch here, leaving a full proof to Appendix B. Let G be the Markov matrix that governs Y_i → X_i. To compute the joint probability between two random variables X_{t1} and X_{t2}, we simply compute the joint probability distribution between Y_{t1} and Y_{t2}, which again involves a factor of M^τ, and then use two factors of G to convert the joint probability on Y_{t1}, Y_{t2} to a joint probability on X_{t1}, X_{t2}. These additional two factors of G will not change the fact that there is an exponential decay given by M^τ. A simple, intuitive bound from information theory (namely the data processing inequality [40]) gives I(Y_{t1}, Y_{t2}) ≥ I(Y_{t1}, X_{t2}) ≥ I(X_{t1}, X_{t2}). However, Theorem 1 implies that I(Y_{t1}, Y_{t2}) decays exponentially. Hence I(X_{t1}, X_{t2}) must also decay at least as fast as exponentially.
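This proof sketch can be mirrored numerically. The following minimal example (matrices are illustrative, not from the text) builds the joint distributions of the hidden and observed pairs from M^τ and two factors of G, and checks both the exponential decay and the data processing inequality ordering.

```python
import numpy as np

M = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # P(Y_{i+1} | Y_i)
G = np.array([[0.7, 0.3],
              [0.1, 0.9]])            # P(X_i | Y_i)

def stationary(M):
    vals, vecs = np.linalg.eig(M.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def mutual_information(joint):
    joint = joint / joint.sum()
    px, py = joint.sum(1), joint.sum(0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])))

pi = stationary(M)
for tau in [1, 2, 4, 8, 16]:
    Mt = np.linalg.matrix_power(M, tau)
    joint_Y = np.diag(pi) @ Mt                 # P(Y_t, Y_{t+tau})
    joint_X = G.T @ joint_Y @ G                # P(X_t, X_{t+tau}): two factors of G
    # I(Y, Y) >= I(X, X) by the data processing inequality; both decay exponentially in tau.
    print(tau, mutual_information(joint_Y), mutual_information(joint_X))
```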
1606.06737#22
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
23
uncertain about the reward function, with a prior probability distribution that reflects the property that random changes are more likely to be bad than good. This could incentivize the agent to avoid having a large effect on the environment. One challenge is defining a baseline around which changes are being considered. For this, one could potentially use a conservative but reliable baseline policy, similar to the robust policy improvement and reachability analysis approaches discussed earlier [93, 100, 73, 111]. The ideal outcome of these approaches to limiting side effects would be to prevent or at least bound the incidental harm an agent could do to the environment. Good approaches to side effects would certainly not be a replacement for extensive testing or for careful consideration by designers of the individual failure modes of each deployed system. However, these approaches might help to counteract what we anticipate may be a general tendency for harmful side effects to proliferate in complex environments. Below we discuss some very simple experiments that could serve as a starting point to investigate these issues.
1606.06565#23
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
23
There is a well-known correspondence between so-called probabilistic regular grammars [42] (sometimes referred to as stochastic regular grammars) and HMMs. Given a probabilistic regular grammar, one can generate an HMM that reproduces all statistics and vice versa. Hence, we can also state Theorem 2 as follows: Corollary: No probabilistic regular grammar exhibits criticality. In the next section, we will show that this statement is not true for context-free grammars. # III. POWER LAWS FROM GENERATIVE GRAMMAR If computationally feasible Markov processes cannot produce critical behavior, then how do such sequences arise? In this section, we construct a toy model where sequences exhibit criticality. In the parlance of theoretical linguistics, our language is generated by a stochastic or probabilistic context-free grammar (PCFG) [43–46]. We will discuss the relationship between our model and a generic PCFG in Section C. # A. A simple recursive grammar model
1606.06737#23
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
24
Below we discuss some very simple experiments that could serve as a starting point to investigate these issues. Potential Experiments: One possible experiment is to make a toy environment with some simple goal (like moving a block) and a wide variety of obstacles (like a bunch of vases), and test whether the agent can learn to avoid the obstacles even without being explicitly told to do so. To ensure we don’t overfit, we’d probably want to present a different random obstacle course every episode, while keeping the goal the same, and try to see if a regularized agent can learn to systematically avoid these obstacles. Some of the environments described in [103], containing lava flows, rooms, and keys, might be appropriate for this sort of experiment. If we can successfully regularize agents in toy environments, the next step might be to move to real environments, where we expect complexity to be higher and bad side effects to be more varied. Ultimately, we would want the side effect regularizer (or the multi-agent policy, if we take that approach) to demonstrate successful transfer to totally new applications. # 4 Avoiding Reward Hacking
1606.06565#24
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
24
# A. A simple recursive grammar model We can formalize the above considerations by giving production rules for a toy language L over an alphabet A. The language is defined by how a native speaker of L produces sentences: first, she draws one of the |A| characters from some probability distribution µ on A. She then takes this character x_0 and replaces it with q new symbols, drawn from a probability distribution P(b|a), where a ∈ A is the first symbol and b ∈ A is any of the second symbols. This is repeated over and over. After u steps, she has a sentence of length q^u.² One can ask for the character statistics of the sentence at production step u given the statistics of the sentence at production step u − 1. The character distribution is simply P_u(b) = Σ_a P(b|a) P_{u−1}(a). (9) Of course this equation does not imply that the process is a Markov process when the sentences are read left to right. To characterize the statistics as read from left to right, we really want to compute the statistical dependencies within a given sequence, e.g., at fixed u.
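A minimal simulation of this production process (not from the paper; the alphabet size, q, and the probabilities are arbitrary illustrative choices) makes equation (9) concrete: the single-character statistics evolve by repeated application of P(b|a), while the sentence grows by a factor of q per step.

```python
import numpy as np

rng = np.random.default_rng(0)

q = 2                                    # branching factor: each symbol becomes q symbols
P = np.array([[0.8, 0.2],                # P(b | a): row a is the distribution of children
              [0.3, 0.7]])
mu = np.array([0.5, 0.5])                # distribution of the initial symbol

def generate(u: int) -> np.ndarray:
    """Return a sentence of length q**u after u production steps."""
    sentence = np.array([rng.choice(len(mu), p=mu)])
    for _ in range(u):
        children = [rng.choice(P.shape[1], size=q, p=P[a]) for a in sentence]
        sentence = np.concatenate(children)
    return sentence

s = generate(10)                          # a sentence of 2**10 = 1024 characters
print(len(s), s[:20])

# Equation (9): the single-character distribution after u steps is mu evolved u times
# by the conditional-probability matrix, P_u = (P^T)^u mu.
P_u = np.linalg.matrix_power(P.T, 10) @ mu
# Empirical frequencies fluctuate (the characters are correlated) but track P_u.
print(P_u, np.bincount(s, minlength=2) / len(s))
```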
1606.06737#24
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
25
# 4 Avoiding Reward Hacking Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer’s informal intent, and sometimes these objective functions, or their implementation, can be “gamed” by solutions that are valid in some literal sense but don’t meet the designer’s intent. Pursuit of these “reward hacks” can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [157, 23], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.
1606.06565#25
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
25
To see that the mutual information decays like a power law rather than exponentially with separation, consider two random variables X and Y separated by τ. One can ask how many generations took place between X and the nearest ancestor of X and Y. Typically, this will be about log_q τ generations. Hence in the tree graph shown in Figure 2, which illustrates the special case q = 2, the number of edges ∆ between X and Y is about 2 log_q τ. Hence by the previous result for Markov processes, we expect an exponential decay of the mutual information in the variable ∆ ∼ 2 log_q τ. This means that I(X, Y) should be of the form I(X, Y) ∼ q^{−γ∆} = q^{−2γ log_q τ} = τ^{−2γ}, (10) where γ is controlled by the second-largest eigenvalue of G, the matrix of conditional probabilities P(b|a). But this exponential decay in ∆ is exactly a power-law decay in τ! This intuitive argument is transformed into a rigorous proof in Appendix C. # B. Further Generalization: strongly correlated characters in words In the model we have been describing so far, all nodes emanating from the same parent can be freely permuted
1606.06737#25
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
26
Some versions of reward hacking have been investigated from a theoretical perspective, with a focus on variations to reinforcement learning that avoid certain types of wireheading [71, 43, 49] or demonstrate reward hacking in a model environment [127]. One form of the problem has also been studied in the context of feedback loops in machine learning systems (particularly ad placement) [29, 135], based on counterfactual learning [29, 151] and contextual bandits [4]. The proliferation of reward hacking instances across so many different domains suggests that reward hacking may be a deep and general problem, and one that we believe is likely to become more common as agents and environments increase in complexity. Indeed, there are several ways in which the problem can occur:
1606.06565#26
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
26
# B. Further Generalization: strongly correlated characters in words In the model we have been describing so far, all nodes emanating from the same parent can be freely permuted ² This exponential blow-up is reminiscent of de Sitter space in cosmic inflation. There is actually a much deeper mathematical analogy involving conformal symmetry and p-adic numbers that has been discussed [47]. [Figure 2; panel label “Shallow dynamics”; axes: time and abstraction level.] FIG. 2: Both a traditional Markov process (top) and our recursive generative grammar process (bottom) can be represented as Bayesian networks, where the random variable at each node depends only on the node pointing to it with an arrow. The numbers show the geodesic distance ∆ to the leftmost node, defined as the smallest number of edges that must be traversed to get there. Roughly speaking, our results show that for large ∆, the mutual information decays exponentially with ∆ (see Theorems 1 and 2). Since this geodesic distance ∆ grows only logarithmically with the separation in time in a hierarchical generative grammar (the hierarchy creates very efficient shortcuts), the exponential kills the logarithm and we are left with power-law decays of mutual information in such languages.
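The following minimal numerical sketch (not from the paper; G and q are illustrative) implements this picture for the weakly correlated model: two leaves whose common ancestor sits k generations up have joint distribution Σ_r π(r) G^k[r,a] G^k[r,b] (taking the ancestor's symbol to be approximately stationary), and their mutual information falls off as a power of the separation τ ∼ q^k.

```python
import numpy as np

q = 2
G = np.array([[0.8, 0.2],                  # G[r, s] = P(s | r), child given parent
              [0.3, 0.7]])

def stationary(G):
    vals, vecs = np.linalg.eig(G.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def mutual_information(joint):
    joint = joint / joint.sum()
    px, py = joint.sum(1), joint.sum(0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])))

pi = stationary(G)
for k in range(2, 12):
    Gk = np.linalg.matrix_power(G, k)
    joint = Gk.T @ np.diag(pi) @ Gk        # P(a, b | Delta = 2k): two branches of length k
    tau = q**k                              # typical leaf separation for an ancestor k levels up
    print(tau, mutual_information(joint))   # mutual information falls off roughly as a power of tau
```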
1606.06737#26
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
27
• Partially Observed Goals: In most modern RL systems, it is assumed that reward is directly experienced, even if other aspects of the environment are only partially observed. In the real world, however, tasks often involve bringing the external world into some objective state, which the agent can only ever confirm through imperfect perceptions. For example, for our proverbial cleaning robot, the task is to achieve a clean office, but the robot’s visual perception may give only an imperfect view of part of the office. Because agents lack access to a perfect measure of task performance, designers are often forced to design rewards that represent a partial or imperfect measure. For example, the robot might be rewarded based on how many messes it sees. However, these imperfect objective functions can often be hacked—the robot may think the office is clean if it simply closes its eyes. While it can be shown that there always exists a reward function in terms of actions and observations that is equivalent to optimizing the true objective function (this involves reducing the POMDP to a belief state MDP, see [78]), often this reward function involves complicated long-term dependencies and is prohibitively hard to use in practice.
1606.06565#27
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
27
since they are conditionally independent. In this sense, characters within a newly generated word are uncorrelated. We call models with this property weakly correlated. There are still arbitrarily large correlations between words, but not inside of words. If a weakly correlated grammar allows a → ab, it must allow for a → ba with the same probability. We now wish to relax this property to allow for the strongly-correlated case where variables may not be conditionally independent given the parents. This allows us to take a big step towards modeling realistic languages: in English, god significantly differs in meaning and usage from dog.
1606.06737#27
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
28
• Complicated Systems: Any powerful agent will be a complicated system with the objective function being one part. Just as the probability of bugs in computer code increases greatly with the complexity of the program, the probability that there is a viable hack affecting the reward function also increases greatly with the complexity of the agent and its available strategies. For example, it is possible in principle for an agent to execute arbitrary code from within Super Mario [141]. • Abstract Rewards: Sophisticated reward functions will need to refer to abstract concepts (such as assessing whether a conceptual goal has been met). These concepts will possibly need to be learned by models like neural networks, which can be vulnerable to adversarial counterexamples [152, 62]. More broadly, a learned reward function over a high-dimensional space may be vulnerable to hacking if it has pathologically high values along at least one dimension.
1606.06565#28
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
28
In the previous computation, the crucial ingredient was the joint probability P(a, b) = P(X = a, Y = b). Let us start with a seemingly trivial remark. This joint probability can be re-interpreted as a conditional joint probability. Instead of X and Y being random variables at specified sites t1 and t2, we can view them as random variables at randomly chosen locations, conditioned on their locations being t1 and t2. Somewhat pedantically, we write P(a, b) = P(a, b|t1, t2). This clarifies the important fact that the only way that P(a, b|t1, t2) depends on t1 and t2 is via a dependence on ∆(t1, t2). Hence P(a, b|t1, t2) = P(a, b|∆). (11) This equation is specific to weakly correlated models and does not hold for generic strongly correlated models. In computing the mutual information as a function of separation, the relevant quantity is the right hand side of equation (7). The reason is that in practical scenarios,
1606.06737#28
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
29
• Goodhart’s Law: Another source of reward hacking can occur if a designer chooses an objective function that is seemingly highly correlated with accomplishing the task, but that correlation breaks down when the objective function is being strongly optimized. For example, a designer might notice that under ordinary circumstances, a cleaning robot’s success in cleaning up the office is proportional to the rate at which it consumes cleaning supplies, such as bleach. However, if we base the robot’s reward on this measure, it might use more bleach than it needs, or simply pour bleach down the drain in order to give the appearance of success. In the economics literature this is known as Goodhart’s law [63]: “when a metric is used as a target, it ceases to be a good metric.”
1606.06565#29
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
30
• Feedback Loops: Sometimes an objective function has a component that can reinforce itself, eventually getting amplified to the point where it drowns out or severely distorts what the designer intended the objective function to represent. For instance, an ad placement algorithm that displays more popular ads in larger font will tend to further accentuate the popularity of those ads (since they will be shown more and more prominently) [29], leading to a positive feedback loop where ads that saw a small transient burst of popularity are rocketed to permanent dominance. Here the original intent of the objective function (to use clicks to assess which ads are most useful) gets drowned out by the positive feedback inherent in the deployment strategy. This can be considered a special case of Goodhart’s law, in which the correlation breaks specifically because the objective function has a self-amplifying component.
1606.06565#30
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
30
Now whereas P(a, b|t1, t2) will change when strong correlations are introduced, P(a, b|∆) will retain a very similar form. This can be seen as follows: knowledge of the geodesic distance corresponds to knowledge of how high up the closest parent node is in the hierarchy (see Figure 2). Imagine flowing down from the parent node to the leaves. We start with the stationary distribution P(i) at the parent node. At the first layer below the parent node (corresponding to a causal distance ∆ − 2), we get Q_{rr'} = P(rr') = Σ_i P_s(rr'|i) P(i), where the symmetrized probability P_s(rr'|i) = ½[P(rr'|i) + P(r'r|i)] comes into play because knowledge of the fact that r, r' are separated by ∆ − 2 gives no information about their order. To continue this process to the second stage and beyond, we only need the matrix G_{rs} = P(s|r) = Σ_{s'} P_s(ss'|r). The reason is that since we only wish to compute the two-point function at the bottom of the tree, the only place where a three-point function is ever
1606.06737#30
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
31
• Environmental Embedding: In the formalism of reinforcement learning, rewards are considered to come from the environment. This idea is typically not taken literally, but it really is true that the reward, even when it is an abstract idea like the score in a board game, must be computed somewhere, such as a sensor or a set of transistors. Sufficiently broadly acting agents could in principle tamper with their reward implementations, assigning themselves high reward “by fiat.” For example, a board-game playing agent could tamper with the sensor that counts the score. Effectively, this means that we cannot build a perfectly faithful implementation of an abstract objective function, because there are certain sequences of actions for which the objective function is physically replaced. This particular failure mode is often called “wireheading” [49, 127, 42, 67, 165]. It is particularly concerning in cases where a human may be in the reward loop, giving the agent incentive to coerce or harm them in order to get reward. It also seems like a particularly difficult form of reward hacking to avoid.
1606.06565#31
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
31
The reason is that since we only wish to compute the two-point function at the bottom of the tree, the only place where a three-point function is ever needed is at the very top of the tree, where we need to take a single parent into two children nodes. After that, the computation only involves evolving a child node into a grand-child node, and so forth. Hence the overall two-point probability matrix P(ab|∆) is given by the simple equation
1606.06737#31
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
32
In today’s relatively simple systems these problems may not occur, or can be corrected without too much harm as part of an iterative development process. For instance, ad placement systems with obviously broken feedback loops can be detected in testing or replaced when they get bad results, leading only to a temporary loss of revenue. However, the problem may become more severe with more complicated reward functions and agents that act over longer timescales. Modern RL agents already do discover and exploit bugs in their environments, such as glitches that allow them to win video games. Moreover, even for existing systems these problems can necessitate substantial additional engineering effort to achieve good performance, and can often go undetected when they occur in the context of a larger system. Finally, once an agent begins hacking its reward function and finds an easy way to get high reward, it won’t be inclined to stop, which could lead to additional challenges in agents that operate over a long timescale.
1606.06565#32
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
32
P(∆) = (G^{∆/2−1}) Q (G^{∆/2−1})^T. (12) As we can see from the above formula, changing to the strongly correlated case essentially reduces to the weakly correlated case, where P(∆) = (G^{∆/2}) diag(µ) (G^{∆/2})^T (13) except for a perturbation near the top of the tree. We can think of the generalization as equivalent to the old model except for a different initial condition. We thus expect on intuitive grounds that the model will still exhibit power law decay. This intuition is correct, as we will prove in Appendix C. Our result can be summarized by the following theorem: Theorem 3 There exist probabilistic context-free grammars (PCFGs) such that the mutual information I(A, B) between two symbols A and B in the terminal strings of the language decays like d^{−k}, where d is the number of symbols in between A and B. In Appendix C, we give an explicit formula for k as well as the normalization of the power law for a particular class of grammars. # C. Further Generalization: Bayesian networks and context-free grammars
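To make the matrix formulas above concrete, here is a minimal numerical sketch (our own, not from the paper) that evaluates the analogue of Eq. (13) as reconstructed above for a hypothetical two-symbol alphabet with a symmetric parent-to-child matrix G, and prints the mutual information as a function of the causal distance ∆ = 2m. Since two leaves separated by d symbols typically have ∆ ≈ 2 log₂ d, exponential decay in ∆ shows up as a power law in d.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in nats) of a joint probability matrix."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

# Hypothetical parent-to-child conditional: G[a, i] = P(child = a | parent = i).
eps = 0.4
G = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
mu = np.array([0.5, 0.5])          # stationary distribution of this symmetric G

for m in range(1, 11):             # m = Delta / 2 layers below the common ancestor
    Gm = np.linalg.matrix_power(G, m)
    joint = Gm @ np.diag(mu) @ Gm.T       # analogue of Eq. (13)
    d = 2 ** m                            # typical leaf separation at this depth
    print(f"Delta = {2*m:3d}  d ~ {d:5d}  I = {mutual_information(joint):.3e}")
```

Plotting I against d on log-log axes (rather than against ∆) makes the power-law behavior visible directly.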
1606.06737#32
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
33
It might be thought that individual instances of reward hacking have little in common and that the remedy is simply to avoid choosing the wrong objective function in each individual case—that bad objective functions reflect failures in competence by individual designers, rather than topics for machine learning research. However, the above examples suggest that a more fruitful perspective may be to think of wrong objective functions as emerging from general causes (such as partially observed goals) that make choosing the right objective challenging. If this is the case, then addressing or mitigating these causes may be a valuable contribution to safety. Here we suggest some preliminary, machine-learning based approaches to preventing reward hacking:
1606.06565#33
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
33
# C. Further Generalization: Bayesian networks and context-free grammars Just how generic is the scaling behavior of our model? What if the length of the words is not constant? What about more complex dependencies between layers? If we retrace the derivation in the above arguments, it becomes clear that the only key feature of all of our models considered so far is that the rational mutual information decays exponentially with the causal distance ∆: I_R ∼ e^{−γ∆}. (14) This is true for (hidden) Markov processes and the hierarchical grammar models that we have considered above. So far we have defined ∆ in terms of quantities specific to these models; for a Markov process, ∆ is simply the time separation. Can we define ∆ more generically? In order to do so, let us make a brief aside about Bayesian networks. Formally, a Bayesian net is a directed acyclic graph (DAG), where the vertices are random variables and conditional dependencies are represented by the arrows. Now instead of thinking of X and Y as living at certain times (t1, t2), we can think of them as living at vertices (i, j) of the graph.
1606.06737#33
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
34
• Adversarial Reward Functions: In some sense, the problem is that the ML system has an adversarial relationship with its reward function—it would like to find any way it can of exploiting problems in how the reward was specified to get high reward, whether or not its behavior corresponds to the intent of the reward specifier. In a typical setting, the machine learning system is a potentially powerful agent while the reward function is a static object that has no way of responding to the system’s attempts to game it. If instead the reward function were its own agent and could take actions to explore the environment, it might be much more difficult to fool. For instance, the reward agent could try to find scenarios that the ML system claimed were high reward but that a human labels as low reward; this is reminiscent of generative adversarial networks [61]. Of course, we would have to ensure that the reward-checking agent is more powerful (in a somewhat subtle sense) than the agent that is trying to achieve rewards. More generally, there may be interesting setups where a system has multiple pieces trained using different objectives that are used to check each other.
1606.06565#34
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
34
We define ∆(i, j) as follows. Since the Bayesian net is a DAG, it is equipped with a partial order ≤ on vertices. We write k ≤ l iff there is a path from k to l, in which case we say that k is an ancestor of l. We define L(k, l) to be the number of edges on the shortest directed path from k to l. Finally, we define the causal distance ∆(i, j) to be ∆(i, j) ≡ min_{x≤i, x≤j} [L(x, i) + L(x, j)]. (15) It is easy to see that this reduces to our previous definition of ∆ for Markov processes and recursive generative trees (see Figure 2).
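The definition in Eq. (15) translates directly into code. Below is a minimal sketch (the toy graph and node labels are made up for illustration) that computes ∆(i, j) by brute force: for every candidate ancestor x, it runs a BFS along directed edges and sums the two shortest path lengths.

```python
from collections import defaultdict, deque

def shortest_directed_distances(edges, source):
    """BFS over directed edges; returns {node: shortest path length from source}."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def causal_distance(edges, i, j):
    """Delta(i, j) = min over common ancestors x of L(x, i) + L(x, j), as in Eq. (15)."""
    nodes = {u for e in edges for u in e} | {i, j}
    best = float("inf")
    for x in nodes:
        d = shortest_directed_distances(edges, x)
        if i in d and j in d:            # x is an ancestor of both (or equals one of them)
            best = min(best, d[i] + d[j])
    return best

# Toy tree: root 0 -> {1, 2}, 1 -> {3, 4}, 2 -> {5, 6}
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(causal_distance(edges, 3, 4))   # siblings share parent 1: Delta = 2
print(causal_distance(edges, 3, 6))   # only common ancestor is the root: Delta = 4
```

For a Markov chain (a directed path graph), the earlier of the two nodes is itself a common ancestor, so this definition reduces to the time separation, as the text states.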
1606.06737#34
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
35
• Model Lookahead: In model-based RL, the agent plans its future actions by using a model to consider which future states a sequence of actions may lead to. In some setups, we could give reward based on anticipated future states, rather than the present one. This could be very helpful in resisting situations where the model overwrites its reward function: you can't control the reward once it replaces the reward function, but you can give negative reward for planning to replace the reward function. (Much like how a human would probably "enjoy" taking addictive substances once they do, but not want to be an addict.) Similar ideas are explored in [50, 71]. • Adversarial Blinding: Adversarial techniques can be used to blind a model to certain variables [5]. This technique could be used to make it impossible for an agent to understand some part of its environment, or even to have mutual information with it (or at least to penalize such mutual information). In particular, it could prevent an agent from understanding how its reward is generated, making it difficult to hack. This solution could be described as "cross-validation for agents."
1606.06565#35
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
35
It is easy to see that this reduces to our previous definition of ∆ for Markov processes and recursive generative trees (see Figure 2). Is it true that our exponential decay result from equation (14) holds even for a generic Bayesian net? The answer is yes, under a suitable approximation. The approximation is to ignore long paths in the network when computing the mutual information. In other words, the mutual information tends to be dominated by the shortest paths via a common ancestor, whose length is ∆. This is generally a reasonable approximation, because these longer paths will give exponentially weaker correlations, so unless the number of paths increases exponentially (or faster) with length, the overall scaling will not change. With this approximation, we can state a key finding of our theoretical work: deep models are important because without the extra "dimension" of depth/abstraction, there is no way to construct "shortcuts" between random variables that are separated by large amounts of time with short-range interactions; 1D models will be doomed to exponential decay. Hence the ubiquity of power laws may partially explain the success of applications of deep learning to natural language processing. In fact, this can be seen as the Bayesian net version of the important result in statistical physics that there are no phase transitions in 1D [48, 49].
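For contrast with the tree calculation sketched earlier, the following short example (with a made-up two-state chain, not one from the paper) computes the exact joint distribution of X_0 and X_t for a Markov chain and shows the mutual information decaying exponentially with the time separation t, which here coincides with the 1D causal distance ∆.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in nats) of a joint probability matrix."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

# Hypothetical 2-state chain: T[j, i] = P(X_{t+1} = j | X_t = i); columns sum to 1.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
mu = np.array([2 / 3, 1 / 3])       # stationary distribution, satisfying T @ mu = mu

for t in range(1, 9):
    joint = np.linalg.matrix_power(T, t) @ np.diag(mu)   # joint of X_t and X_0
    print(t, mutual_information(joint))                  # shrinks geometrically with t
```

The decay rate is set by the second-largest eigenvalue of T, which is the 1D analogue of the γ appearing in Eq. (14).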
1606.06737#35
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
36
• Careful Engineering: Some kinds of reward hacking, like the buffer overflow example, might be avoided by very careful engineering. In particular, formal verification or practical testing of parts of the system (perhaps facilitated by other machine learning systems) is likely to be valuable. Computer security approaches that attempt to isolate the agent from its reward signal through a sandbox could also be useful [17]. As with software engineering, we cannot expect this to catch every possible bug. It may be possible, however, to create some highly reliable “core” agent which could ensure reasonable behavior from the rest of the agent. • Reward Capping: In some cases, simply capping the maximum possible reward may be an effective solution. However, while capping can prevent extreme low-probability, high-payoff strategies, it can’t prevent strategies like the cleaning robot closing its eyes to avoid seeing dirt. Also, the correct capping strategy could be subtle as we might need to cap total reward rather than reward per timestep.
1606.06565#36
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
37
• Counterexample Resistance: If we are worried, as in the case of abstract rewards, that learned components of our systems will be vulnerable to adversarial counterexamples, we can look to existing research in how to resist them, such as adversarial training [62]. Architectural decisions and weight uncertainty [26] may also help. Of course, adversarial counterexamples are just one manifestation of reward hacking, so counterexample resistance can only address a subset of these potential problems. • Multiple Rewards: A combination of multiple rewards [41] may be more difficult to hack and more robust. This could be different physical implementations of the same mathematical function, or different proxies for the same informal objective. We could combine reward functions by averaging, taking the minimum, taking quantiles, or something else entirely. Of course, there may still be bad behaviors which affect all the reward functions in a correlated manner.
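As an illustration of the last point, here is a tiny sketch (the function name and aggregation choices are ours, not from the paper) of combining several reward proxies; a pessimistic rule like the minimum forces an agent to fool every proxy at once before the combined signal is inflated.

```python
import numpy as np

def combined_reward(rewards, mode="min"):
    """Combine several (possibly disagreeing) reward signals into one scalar.

    `rewards` is a list of floats from independent reward implementations or
    proxies; the aggregation rule is a design choice, not prescribed by the paper.
    """
    if mode == "mean":
        return float(np.mean(rewards))
    if mode == "min":                      # pessimistic: hard to hack all proxies at once
        return float(np.min(rewards))
    if mode == "quantile":
        return float(np.quantile(rewards, 0.25))
    raise ValueError(mode)

print(combined_reward([1.0, 0.9, -2.0]))   # one hacked proxy drags the minimum down
```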
1606.06565#37
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
37
There are close analogies between our deep recursive grammar and more conventional physical systems. For example, according to the emerging standard model of cosmology, there was an early period of cosmological inflation when density fluctuations kept getting added on a fixed scale as space itself underwent repeated doublings, combining to produce an excellent approximation to a power-law correlation function. This inflationary process is simply a special case of our deep recursive model (generalized from 1 to 3 dimensions). In this case, the hidden "depth" dimension in our model corresponds to cosmic time, and the time parameter which labels the place in the sequence of interest corresponds to space. A similar physical analogy is turbulence in a fluid, where energy in the form of vortices cascades from large scales to ever smaller scales through a recursive process where larger vortices create smaller ones, leading to a scale-invariant power spectrum. In both the inflation case and the turbulence case, there is a hierarchical generative process akin to our formal language model (except in three dimensions and with continuous variables), whereby parts of the system generate smaller parts in an essentially Markovian fashion.
1606.06737#37
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
38
• Reward Pretraining: A possible defense against cases where the agent can influence its own reward function (e.g. feedback or environmental embedding) is to train a fixed reward function ahead of time as a supervised learning process divorced from interaction with the environment. This could involve either learning a reward function from samples of state-reward pairs, or from trajectories, as in inverse reinforcement learning [107, 51]. However, this forfeits the ability to further learn the reward function after the pretraining is complete, which may create other vulnerabilities. • Variable Indifference: Often we want an agent to optimize certain variables in the environment, without trying to optimize others. For example, we might want an agent to maximize reward, without optimizing what the reward function is or trying to manipulate human behavior. Intuitively, we imagine a way to route the optimization pressure of powerful algorithms around parts of their environment. Truly solving this would have applications throughout safety—it seems connected to avoiding side effects and also to counterfactual reasoning. Of course, a challenge here is to make sure the variables targeted for indifference are actually the variables we care about in the world, as opposed to aliased or partially observed versions of them.
1606.06565#38
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
38
There is also a close analogy to quantum mechanics: equation (13) expresses the exponential decay of the mutual information with geodesic distance through the Bayesian network; in quantum mechanics, the correlation function of a many-body system decays exponentially with the geodesic distance defined by the tensor network which represents the wavefunction [50]. It is also worth examining our model using techniques from linguistics. A generic PCFG G consists of three ingredients: 1. An alphabet A = A ∪ T which consists of non-terminal symbols A and terminal symbols T. 2. A set of production rules of the form a → B, where the left-hand side a ∈ A is always a single non-terminal character and B is a string consisting of symbols in A. 3. Probabilities associated with each production rule P(a → B), such that for each a ∈ A, Σ_B P(a → B) = 1.
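These three ingredients map directly onto a small data structure. The sketch below uses a made-up toy grammar (not one from the paper): it stores the production probabilities, checks the normalization condition in ingredient 3, and samples a terminal string by recursive expansion.

```python
import random

# Hypothetical toy PCFG: uppercase symbols are non-terminal, lowercase are terminal.
# rules[a] lists (probability, right-hand side) pairs for the production a -> B.
rules = {
    "S": [(1.0, ("A", "B"))],
    "A": [(0.3, ("A", "B")), (0.7, ("a",))],
    "B": [(0.3, ("B", "A")), (0.7, ("b",))],
}

# Ingredient 3: the probabilities for each left-hand side must sum to one.
for lhs, options in rules.items():
    assert abs(sum(p for p, _ in options) - 1.0) < 1e-12, lhs

def expand(symbol):
    """Recursively expand one symbol into a list of terminal characters."""
    if symbol.islower():                        # terminal symbol: nothing to expand
        return [symbol]
    probs, rhss = zip(*rules[symbol])
    rhs = random.choices(rhss, weights=probs)[0]
    return [t for s in rhs for t in expand(s)]

random.seed(0)
print("".join(expand("S")))
```

The branching probabilities here are chosen subcritical so that the expansion terminates almost surely; the strings it emits are samples from the language of this toy grammar.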
1606.06737#38
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
39
variables we care about in the world, as opposed to aliased or partially observed versions of them. • Trip Wires: If an agent is going to try to hack its reward function, it is preferable that we know this. We could deliberately introduce some plausible vulnerabilities (that an agent has the ability to exploit but should not exploit if its value function is correct) and monitor them, alerting us and stopping the agent immediately if it takes advantage of one. Such "trip wires" don't solve reward hacking by themselves, but may reduce the risk or at least provide diagnostics. Of course, with a sufficiently capable agent there is the risk that it could "see through" the trip wire and intentionally avoid it while still taking less obvious harmful actions. Fully solving this problem seems very difficult, but we believe the above approaches have the potential to ameliorate it, and might be scaled up or combined to yield more robust solutions. Given the predominantly theoretical focus on this problem to date, designing experiments that could induce the problem and test solutions might improve the relevance and clarity of this topic.
1606.06565#39
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
39
3. Probabilities associated with each production rule P(a → B), such that for each a ∈ A, Σ_B P(a → B) = 1. It is a remarkable fact that any stochastic context-free grammar can be put in Chomsky normal form [27, 45]. This means that given G, there exists some other grammar ¯G such that all the production rules are either of the form a → bc or a → α, where a, b, c ∈ A and α ∈ T, and the corresponding languages satisfy L(G) = L(¯G). In other words, given some complicated grammar G, we can always find a grammar ¯G such that the corresponding statistics of the languages are identical and all the production rules replace a symbol by at most two symbols (at the cost of increasing the number of production rules in ¯G).
1606.06737#39
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
40
Potential Experiments: A promising avenue would be more realistic versions of the "delusion box" environment described by [127], in which standard RL agents distort their own perception to appear to receive high reward, rather than optimizing the objective in the external world that the reward signal was intended to encourage. The delusion box can be easily attached to any RL environment, but even more valuable would be to create classes of environments where a delusion box is a natural and integrated part of the dynamics. For example, in sufficiently rich physics simulations it is likely possible for an agent to alter the light waves in its immediate vicinity to distort its own perceptions. The goal would be to develop generalizable learning strategies that succeed at optimizing external objectives in a wide range of environments, while avoiding being fooled by delusion boxes that arise naturally in many diverse ways. # 5 Scalable Oversight
1606.06565#40
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
40
This formalism allows us to strengthen our claims. Our model with a branching factor q = 2 is precisely the class of all context-free grammars that are generated by the production rules of the form a → bc. While this might naively seem like a very small subset of all possible context-free grammars, the fact that any context-free grammar can be converted into Chomsky normal form shows that our theory deals with a generic context-free grammar, except for the additional step of producing terminal symbols from non-terminal symbols. Starting from a single symbol, the deep dynamics of the PCFG in normal form are given by a strongly-correlated branching process with q = 2 which proceeds for a characteristic number of productions before terminal symbols are produced. Before most symbols have been converted to terminal symbols, our theory applies, and power-law correlations will exist amongst the non-terminal symbols. To the extent that the terminal symbols that are then produced from non-terminal symbols reflect the correlations of the non-terminal symbols, we expect context-free grammars to be able to produce power law correlations.
1606.06737#40
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
41
# 5 Scalable Oversight Consider an autonomous agent performing some complex task, such as cleaning an office in the case of our recurring robot example. We may want the agent to maximize a complex objective like “if the user spent a few hours looking at the result in detail, how happy would they be with the agent’s performance?” But we don’t have enough time to provide such oversight for every training example; in order to actually train the agent, we need to rely on cheaper approximations, like “does the user seem happy when they see the office?” or “is there any visible dirt on the floor?” These cheaper signals can be efficiently evaluated during training, but they don’t perfectly track what we care about. This divergence exacerbates problems like unintended side effects (which may be appropriately penalized by the complex objective but omitted from the cheap approximation) and reward hacking (which thorough oversight might recognize as undesirable). We may be able to ameliorate such problems by finding more efficient ways to exploit our limited oversight budget—for example by combining limited calls to the true objective function with frequent calls to an imperfect proxy that we are given or can learn.
1606.06565#41
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
41
From our corollary to Theorem 2, we know that regular grammars cannot exhibit power-law decays in mutual information. Hence context-free grammars are the simplest grammars which support criticality, i.e., they are the lowest in the Chomsky hierarchy that supports criticality. Note that our corollary to Theorem 2 also implies that not all context-free grammars exhibit criticality, since regular grammars are a strict subset of context-free grammars. Whether one can formulate an even sharper criterion should be the subject of future work. # IV. DISCUSSION By introducing a quantity we term rational mutual information, we have proved that hidden Markov processes generically exhibit exponential decay, whereas PCFGs can exhibit power law decays thanks to the "extra dimension" in the network. To the extent that natural languages and other empirical data sources are generated by processes more similar to PCFGs than Markov processes, this explains why they can exhibit power law decays. We will draw on these lessons to give a semi-heuristic explanation for the success of deep recurrent neural networks widely used for natural language processing, and discuss how mutual information can be used as a tool for validating machine learning algorithms. # A. Connection to Recurrent Neural Networks
1606.06737#41
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
42
One framework for thinking about this problem is semi-supervised reinforcement learning,[3] which resembles ordinary reinforcement learning except that the agent can only see its reward on a small fraction of the timesteps or episodes. The agent's performance is still evaluated based on reward from all episodes but it must optimize this based only on the limited reward samples it sees. [3] The discussion of semi-supervised RL draws heavily on an informal essay, https://medium.com/ai-control/cf7d5375197f, written by one of the authors of the present document. The active learning setting seems most interesting; in this setting the agent can request to see the reward on whatever episodes or timesteps would be most useful for learning, and the goal is to be economical both with number of feedback requests and total training time. We can also consider a random setting, where the reward is visible on a random subset of the timesteps or episodes, as well as intermediate possibilities. We can define a baseline performance by simply ignoring the unlabeled episodes and applying an ordinary RL algorithm to the labeled episodes. This will generally result in very slow learning. The challenge is to make use of the unlabeled episodes to accelerate learning, ideally learning almost as quickly and robustly as if all episodes had been labeled.
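The "random" setting described above is straightforward to set up as an environment wrapper. The sketch below only illustrates the problem setting, not any solution proposed in the paper; it assumes a Gym-style env with reset()/step(), and all names are ours.

```python
import random

class SemiSupervisedRewardWrapper:
    """Hide the reward on all but a fraction of episodes (the 'random' setting)."""

    def __init__(self, env, labeled_fraction=0.1, seed=0):
        self.env = env                           # assumed Gym-style: reset() / step()
        self.labeled_fraction = labeled_fraction
        self.rng = random.Random(seed)
        self.episode_labeled = True

    def reset(self):
        # Decide once per episode whether the agent gets to see its reward.
        self.episode_labeled = self.rng.random() < self.labeled_fraction
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        visible = reward if self.episode_labeled else None    # None = unlabeled step
        info["true_reward"] = reward           # kept only for offline evaluation
        return obs, visible, done, info
```

The baseline mentioned in the text then corresponds to training only on episodes where `visible` is not None and discarding the rest.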
1606.06565#42
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
42
# A. Connection to Recurrent Neural Networks While the generative grammar model is appealing from a linguistic perspective, it may appear to have little to do with machine learning algorithms that are implemented in practice. However, as we will now see, this model can in fact be viewed as an idealized version of a long short-term memory (LSTM) recurrent neural network (RNN) that is generating ("hallucinating") a sequence. Figure 4 shows that an LSTM RNN can reproduce critical behavior. In this example, we trained an RNN (consisting of three hidden LSTM layers of size 256 as described in [29]) to predict the next character in the 100MB Wikipedia sample known as enwik8 [20]. We then used the LSTM to hallucinate 1 MB of text and measured the mutual information as a function of distance. Figure 4 shows that not only is the resulting mutual information function a rough power law, but it also has a slope that is relatively similar to the original. We can understand this success by considering a simplified model that is less powerful and complex than a full LSTM, but retains some of its core features — such an approach to studying deep neural nets has proved fruitful in the past (e.g., [51]).
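Measuring mutual information as a function of distance, as described above, only requires one long character sequence. Below is a minimal plug-in estimator (our own sketch; the placeholder string and commented file name are hypothetical, and plug-in estimates are biased upward for short samples) that could be applied to hallucinated or real text.

```python
import numpy as np
from collections import Counter

def mi_at_distance(text, d):
    """Plug-in estimate (in nats) of I(X_i; X_{i+d}) from one long character string."""
    pairs = Counter(zip(text, text[d:]))      # empirical joint of characters d apart
    singles = Counter(text)                   # empirical marginal of single characters
    n_pairs = sum(pairs.values())
    n = len(text)
    mi = 0.0
    for (a, b), count in pairs.items():
        p_ab = count / n_pairs
        p_a, p_b = singles[a] / n, singles[b] / n
        mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# text = open("hallucinated.txt").read()      # e.g. 1 MB of generated characters
text = "abracadabra " * 5000                  # placeholder so the snippet runs as-is
for d in (1, 2, 4, 8, 16, 32, 64):
    print(d, mi_at_distance(text, d))
```

On critical text the printed values would fall roughly on a straight line in a log-log plot of I against d.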
1606.06737#42
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
43
An important subtask of semi-supervised RL is identifying proxies which predict the reward, and learning the conditions under which those proxies are valid. For example, if a cleaning robot’s real reward is given by a detailed human evaluation, then it could learn that asking the human “is the room clean?” can provide a very useful approximation to the reward function, and it could eventually learn that checking for visible dirt is an even cheaper but still-useful approximation. This could allow it to learn a good cleaning policy using an extremely small number of detailed evaluations. More broadly, use of semi-supervised RL with a reliable but sparse true approval metric may incentivize communication and transparency by the agent, since the agent will want to get as much cheap proxy feedback as it possibly can about whether its decisions will ultimately be given high reward. For example, hiding a mess under the rug simply breaks the correspondence between the user’s reaction and the real reward signal, and so would be avoided. We can imagine many possible approaches to semi-supervised RL. For example:
1606.06565#43
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
43
The usual implementation of LSTMs consists of multiple cells stacked one on top of each other. Each cell of the LSTM (depicted as a yellow circle in Fig. 3) has a state that is characterized by a matrix of numbers C_t and is updated according to the following rule: C_t = f_t ◦ C_{t−1} + i_t ◦ D_t, (16) where ◦ denotes element-wise multiplication, and D_t = D_t(C_{t−1}, x_t) is some function of the input x_t from the cell in the layer above (denoted by downward arrows in Figure 3), the details of which do not concern us. Generically, a graph of this picture would look like a rectangular lattice, with each node having an arrow to its right (corresponding to the first term in the above equation), and an arrow from above (corresponding to the second term in the equation). However, if the forget weights f decay rapidly with depth (e.g., as we go from the bottom cell towards the top) so that the timescales for forgetting grow exponentially, we will show that a reasonable approximation to the dynamics is given by Figure 3.
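A minimal sketch of the element-wise update in Eq. (16). How f_t, i_t, and D_t are computed here (single sigmoid/tanh layers applied to the input from the layer above) is a standard simplification assumed for illustration, not the paper's implementation.

```python
# Minimal sketch of the element-wise cell-state update in Eq. (16):
#   C_t = f_t ◦ C_{t-1} + i_t ◦ D_t.
# The gate/candidate parameterization below is an assumed, standard choice.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_state_update(C_prev, x, W_f, W_i, W_d, b_f, b_i, b_d):
    f_t = sigmoid(W_f @ x + b_f)      # forget gate: how much of C_{t-1} to keep
    i_t = sigmoid(W_i @ x + b_i)      # input gate: how much new content to write
    D_t = np.tanh(W_d @ x + b_d)      # candidate content from the layer above
    return f_t * C_prev + i_t * D_t   # element-wise ("◦") combination, Eq. (16)

# Toy usage with random parameters
rng = np.random.default_rng(0)
dim, in_dim = 4, 3
C = np.zeros(dim)
params = [rng.normal(size=(dim, in_dim)) for _ in range(3)] + [np.zeros(dim)] * 3
for t in range(5):
    x_t = rng.normal(size=in_dim)
    C = lstm_cell_state_update(C, x_t, *params)
```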
1606.06737#43
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
44
We can imagine many possible approaches to semi-supervised RL. For example: • Supervised Reward Learning: Train a model to predict the reward from the state on either a per-timestep or per-episode basis, and use it to estimate the payoff of unlabelled episodes, with some appropriate weighting or uncertainty estimate to account for lower confidence in estimated vs known reward. [37] studies a version of this with direct human feedback as the reward. Many existing RL approaches already fit estimators that closely resemble reward predictors (especially policy gradient methods with a strong baseline, see e.g. [134]), suggesting that this approach may be eminently feasible. • Semi-supervised or Active Reward Learning: Combine the above with traditional semi- supervised or active learning, to more quickly learn the reward estimator. For example, the agent could learn to identify “salient” events in the environment, and request to see the reward associated with these events. • Unsupervised Value Iteration: Use the observed transitions of the unlabeled episodes to make more accurate Bellman updates. • Unsupervised Model Learning: If using model-based RL, use the observed transitions of the unlabeled episodes to improve the quality of the model.
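A hypothetical sketch of the “Supervised Reward Learning” item above: fit a reward model on labeled transitions, then score unlabeled episodes with it, down-weighting the synthetic rewards by a crude confidence estimate. The ridge regressor, feature dimensions, and weighting rule are illustrative choices, not taken from [37] or the paper.

```python
# Hypothetical sketch of supervised reward learning for semi-supervised RL:
# fit a reward model on labeled data, then generate down-weighted synthetic
# rewards for unlabeled episodes. All modeling choices here are assumptions.
import numpy as np

class RewardModel:
    def __init__(self, dim, reg=1e-2):
        self.w = np.zeros(dim)
        self.reg = reg
        self.residual_std = 1.0

    def fit(self, states, rewards):
        X, y = np.asarray(states), np.asarray(rewards)
        A = X.T @ X + self.reg * np.eye(X.shape[1])          # ridge regression
        self.w = np.linalg.solve(A, X.T @ y)
        self.residual_std = float(np.std(y - X @ self.w) + 1e-8)

    def predict(self, states):
        return np.asarray(states) @ self.w

    def confidence_weight(self):
        # crude heuristic: trust synthetic rewards less when the fit is poor
        return 1.0 / (1.0 + self.residual_std)

# Usage: labeled episodes supply (state, reward) pairs; unlabeled episodes get
# model-estimated rewards scaled by the confidence weight before an RL update.
model = RewardModel(dim=8)
labeled_states = np.random.randn(200, 8)
labeled_rewards = labeled_states @ np.ones(8) + 0.1 * np.random.randn(200)
model.fit(labeled_states, labeled_rewards)
unlabeled_states = np.random.randn(50, 8)
synthetic_rewards = model.confidence_weight() * model.predict(unlabeled_states)
```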
1606.06565#44
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
44
If we neglect the dependency of D_t on C_{t−1}, the forget gate f_t leads to exponential decay of C_{t−1}, e.g., C_t = f^t ◦ C_0; this is how LSTMs forget their past. Note that all operations, including exponentiation, are performed element-wise in this section only. In general, a cell will smoothly forget its past over a timescale of τ_f ∼ 1/log(1/f). On timescales ≳ τ_f, the cells are weakly correlated; on timescales < τ_f, the cells are strongly correlated. Hence a discrete approximation to the above equation is the following: C_t = C_{t−1} for τ_f timesteps, and C_t = D_t(x_t) on every (τ_f + 1)-th timestep. (17)
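A toy sketch of the discrete approximation in Eq. (17): each layer's cell either copies its previous state (“remember”) or is redrawn from the node above (“Markov”), with hold times growing geometrically with depth. The alphabet and the parent-dependent redraw rule stand in for D_t(x_t) and are assumptions made for illustration.

```python
# Illustrative sketch of Eq. (17): copy the previous cell state for tau_f steps,
# then redraw it from the node above; tau_f grows geometrically with depth.
import random

DEPTH, Q = 4, 3                     # number of layers; forget-timescale growth factor
ALPHABET = "abcd"

def hold_time(layer):               # layer 0 is the top (slowest) cell
    return Q ** (DEPTH - 1 - layer)

def markov_step(parent):
    """Toy stand-in for D_t: new state drawn from a parent-dependent distribution."""
    i = ALPHABET.index(parent)
    return random.choice([ALPHABET[i], ALPHABET[(i + 1) % len(ALPHABET)]])

def generate(length):
    state = [random.choice(ALPHABET) for _ in range(DEPTH)]
    out = []
    for t in range(length):
        for layer in range(DEPTH):
            if t % hold_time(layer) == 0:        # "Markov": redraw from the node above
                parent = state[layer - 1] if layer > 0 else random.choice(ALPHABET)
                state[layer] = markov_step(parent)
            # otherwise: "remember", i.e. C_t = C_{t-1} for this layer
        out.append(state[-1])                    # the bottom layer emits a symbol
    return "".join(out)

print(generate(60))
```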
1606.06737#44
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
45
• Unsupervised Model Learning: If using model-based RL, use the observed transitions of the unlabeled episodes to improve the quality of the model. As a toy example, a semi-supervised RL agent should be able to learn to play Atari games using a small number of direct reward signals, relying almost entirely on the visual display of the score. This simple example can be extended to capture other safety issues: for example, the agent might have the ability to modify the displayed score without modifying the real score, or the agent may need to take some special action (such as pausing the game) in order to see its score, or the agent may need to learn a sequence of increasingly rough-and-ready approximations (for example learning that certain sounds are associated with positive rewards and other sounds with negative rewards). Or, even without the visual display of the score, the agent might be able to learn to play from only a handful of explicit reward requests (“how many points did I get on the frame where that enemy ship blew up? How about the bigger enemy ship?”) An effective approach to semi-supervised RL might be a strong first step towards providing scalable oversight and mitigating other AI safety problems. It would also likely be useful for reinforcement learning, independent of its relevance to safety. There are other possible approaches to scalable oversight:
1606.06565#45
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
45
C_t = C_{t−1} for τ_f timesteps, and C_t = D_t(x_t) on every (τ_f + 1)-th timestep. (17) This simple approximation leads us right back to the hierarchical grammar. The first line of the above equation is labeled “remember” in Figure 2 and the second line is what we refer to as “Markov,” since the next state depends only on the previous. Since each cell perfectly remembers its previous state for τ_f time-steps, the tree can be reorganized so that it is exactly of the form shown in Figure 3, by omitting nodes which simply copy the previous state. Now supposing that τ_f grows exponentially with depth, τ_f(layer i) ∝ q τ_f(layer i + 1), we see that the successive layers become exponentially sparse, which is exactly what happens in our deep grammar model, identifying the parameter q, governing the growth of the forget timescale, with the branching parameter in the deep grammar model. (Compare Figure 2 and Figure 3.) # B. A new diagnostic for machine learning
1606.06737#45
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
46
There are other possible approaches to scalable oversight: • Distant supervision. Rather than providing evaluations of some small fraction of a system’s decisions, we could provide some useful information about the system’s decisions in the aggregate or some noisy hints about the correct evaluations. There has been some work in this direction within the area of semi-supervised or weakly supervised learning. For instance, generalized expectation criteria [94, 45] ask the user to provide population-level statistics (e.g. telling the system that on average each sentence contains at least one noun); the DeepDive system [139] asks users to supply rules that each generate many weak labels; and [65] extrapolates more general patterns from an initial set of low-recall labeling rules. This general approach is often referred to as distant supervision, and has also received recent attention in the natural language processing community (see e.g. [60, 99] as well as several of the references above). Expanding these lines of work and finding a way to apply them to the case of agents, where feedback is more interactive and i.i.d. assumptions may be violated, could provide an approach to scalable oversight that is complementary to the approach embodied in semi-supervised RL.
1606.06565#46
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
46
# B. A new diagnostic for machine learning How can one tell whether a neural network can be further improved? For example, an LSTM RNN similar to the one we used in Figure 3 can predict Wikipedia text with a residual entropy ∼ 1.4 bits/character [29], which is very close to the performance of current state-of-the-art custom compression software — which achieves ∼ 1.3 bits/character [52]. Is that essentially the best compression possible, or can significant improvements be made? Our results provide a powerful diagnostic for shedding further light on this question: measuring the mutual information as a function of separation between symbols is a computationally efficient way of extracting much more meaningful information about the performance of a model than simply evaluating the loss function, usually given by the conditional entropy H(X_t | X_{t−1}, X_{t−2}, ...).
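A minimal sketch contrasting with the mutual-information diagnostic: a plug-in n-gram estimate of the conditional entropy H(X_t | X_{t−1}, ..., X_{t−k}), the per-character quantity that the usual loss function tracks. The context length and corpus path are placeholder assumptions.

```python
# Illustrative sketch: plug-in n-gram estimate of H(X_t | previous k characters)
# in bits/character, the quantity the usual loss function measures, to contrast
# with the mutual-information-vs-distance curve. k and the path are placeholders.
import math
from collections import Counter

def conditional_entropy(text: str, k: int = 3) -> float:
    context_counts = Counter(text[i:i + k] for i in range(len(text) - k))
    joint_counts = Counter(text[i:i + k + 1] for i in range(len(text) - k))
    n = sum(joint_counts.values())
    h = 0.0
    for gram, c in joint_counts.items():
        p_joint = c / n
        p_cond = c / context_counts[gram[:k]]
        h -= p_joint * math.log2(p_cond)
    return h  # bits per character, given k characters of context

# A low conditional entropy can coexist with badly underestimated long-range
# mutual information, which is why the MI curve is a complementary diagnostic.
sample = open("enwik8_sample.txt", encoding="utf-8", errors="ignore").read()  # placeholder
print(conditional_entropy(sample, k=3))
```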
1606.06737#46
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
47
• Hierarchical reinforcement learning. Hierarchical reinforcement learning [40] offers another approach to scalable oversight. Here a top-level agent takes a relatively small number of highly abstract actions that extend over large temporal or spatial scales, and receives rewards over similarly long timescales. The agent completes actions by delegating them to sub-agents, which it incentivizes with a synthetic reward signal representing correct completion of the action, and which themselves delegate to sub-sub-agents. At the lowest level, agents directly take primitive actions in the environment. The top-level agent in hierarchical RL may be able to learn from very sparse rewards, since it does not need to learn how to implement the details of its policy; meanwhile, the sub-agents will receive a dense reward signal even if the top-level reward is very sparse, since they are optimizing synthetic reward signals defined by higher-level agents. So a successful approach to hierarchical RL might naturally facilitate scalable oversight.4 Hierarchical RL seems a particularly promising approach to oversight, especially given the potential promise of combining ideas from hierarchical RL with neural network function approximators [84].
1606.06565#47
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
47
Figure 3 shows that even with just three layers, the LSTM-RNN is able to learn long-range correlations; the slope of the mutual information of hallucinated text is comparable to that of the training set. However, the figure also shows that the predictions of our LSTM-RNN are far from optimal. Interestingly, the hallucinated text shows about the same mutual information for distances ∼ O(1), but significantly less mutual information at large distances. FIG. 3: Our deep generative grammar model can be viewed as an idealization of a long short-term memory (LSTM) recurrent neural net, where the “forget weights” drop with depth so that the forget timescales grow exponentially with depth. The graph drawn here is clearly isomorphic to the graph drawn in Figure 1. For each cell, we approximate the usual incremental updating rule by either perfectly remembering the previous state (horizontal arrows) or by ignoring the previous state and determining the cell state by a random rule depending on the node above (vertical arrows).
1606.06737#47
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
48
Hierarchical RL seems a particularly promising approach to oversight, especially given the potential promise of combining ideas from hierarchical RL with neural network function ap- proximators [84]. Potential Experiments: An extremely simple experiment would be to try semi-supervised RL in some basic control environments, such as cartpole balance or pendulum swing-up. If the reward is provided only on a random 10% of episodes, can we still learn nearly as quickly as if it were provided every episode? In such tasks the reward structure is very simple so success should be quite likely. A next step would be to try the same on Atari games. Here the active learning case could be quite interesting—perhaps it is possible to infer the reward structure from just a few carefully requested samples (for example, frames where enemy ships are blowing up in Space Invaders), and thus learn to play the games in an almost totally unsupervised fashion. The next step after this might be to try a task with much more complex reward structure, either simulated or (preferably) real-world. If learning was sufficiently data-efficient, then these rewards could be provided directly by a human. Robot locomotion or industrial control tasks might be a natural candidate for such experiments.
1606.06565#48
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]