Dataset schema (column: type, observed range):
doi: string, length 10–10
chunk-id: int64, values 0–936
chunk: string, length 401–2.02k
id: string, length 12–14
title: string, length 8–162
summary: string, length 228–1.92k
source: string, length 31–31
authors: string, length 7–6.97k
categories: string, length 5–107
comment: string, length 4–398
journal_ref: string, length 8–194
primary_category: string, length 5–17
published: string, length 8–8
updated: string, length 8–8
references: list
1606.06737
48
separation. Without requiring any knowledge about the true entropy of the input text (which is famously NP-hard to compute), this figure immediately shows that the LSTM-RNN we trained is performing sub-optimally; it is not able to capture all the long-term dependencies found in the training data. As a comparison, we also calculated the bigram transition matrix P(X₃X₄ | X₁X₂) from the data and used it to hallucinate 1 MB of text. Despite the fact that this higher-order Markov model needs ∼ 10³ more parameters than our LSTM-RNN, it captures less than a fifth of the mutual information captured by the LSTM-RNN even at modest separations ≳ 5. This phenomenon is related to a classic result in the theory of formal languages: a context-free grammar In summary, Figure 3 shows both the successes and shortcomings of machine learning. On the one hand, LSTM-RNNs can capture long-range correlations much more efficiently than Markovian models; on the other hand, they cannot match the two-point functions of the training data, never mind higher-order statistics! One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below we give a line from the Markov hallucinations:
1606.06737#48
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
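The chunk above diagnoses models by comparing mutual information as a function of character separation. A minimal sketch of how such a curve might be estimated from raw text, assuming a naive plug-in estimator over character pairs (the function name and the enwik8 path are illustrative, and this estimator is biased upward at large separations, which the paper's measurements would need to correct for):

```python
import numpy as np
from collections import Counter

def mutual_information(text, d):
    """Plug-in estimate of I(X, Y) in bits between symbols d positions apart."""
    pairs = Counter(zip(text, text[d:]))      # joint counts for (X, Y)
    total = sum(pairs.values())               # number of pairs = len(text) - d
    px = Counter(text[:-d])                   # marginal counts for X
    py = Counter(text[d:])                    # marginal counts for Y
    mi = 0.0
    for (a, b), n in pairs.items():
        # p_ab / (p_a * p_b) simplifies to n * total / (px[a] * py[b])
        mi += (n / total) * np.log2(n * total / (px[a] * py[b]))
    return mi

# Sweep separations on any long string (enwik8 is the sample used in the paper).
text = open("enwik8", encoding="latin-1").read()[:10**6]  # path is illustrative
for d in (1, 2, 5, 10, 100, 1000):
    print(d, mutual_information(text, d))
```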
1606.06565
49
⁴When implementing hierarchical RL, we may find that subagents take actions that don’t serve the top-level agent’s real goals, in the same way that a human may be concerned that the top-level agent’s actions don’t serve the human’s real goals. This is an intriguing analogy that suggests that there may be fruitful parallels between hierarchical RL and several aspects of the safety problem. # 6 Safe Exploration
1606.06565#49
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
49
One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below we give a line from the Markov hallucinations: [FIG. 4 plot: mutual information versus distance between symbols d(X,Y); caption follows.] FIG. 4: Diagnosing different models by hallucinating text and then measuring the mutual information as a function of separation. The red line is the mutual information of enwik8, a 100 MB sample of English Wikipedia. In shaded blue is the mutual information of hallucinated Wikipedia from a trained LSTM with 3 layers of size 256. We plot in solid black the mutual information of a Markov process on single characters, which we compute exactly. (This would correspond to the mutual information of hallucinations in the limit where the length of the hallucinations goes to infinity.) This curve shows a sharp exponential decay after a distance of ∼ 10, in agreement with our theoretical predictions. We also measured the mutual information for hallucinated text on a Markov process for bigrams, which still underperforms the LSTMs in long-ranged correlations, despite having ∼ 10³ more parameters than
1606.06737#49
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
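The solid black curve in FIG. 4 is described as computed exactly rather than estimated from samples. A sketch of one way to do that computation, assuming a row-stochastic transition matrix with M[i, j] = P(next = j | current = i) and a stationary initial distribution (conventions I am assuming, not taken from the paper):

```python
import numpy as np

def markov_mi_bits(M, tau):
    """Exact I(X_t; X_{t+tau}) for a stationary Markov chain with
    row-stochastic transition matrix M (M[i, j] = P(j | i))."""
    vals, vecs = np.linalg.eig(M.T)           # stationary dist = left eigenvector
    mu = np.real(vecs[:, np.argmax(np.real(vals))])
    mu /= mu.sum()
    joint = mu[:, None] * np.linalg.matrix_power(M, tau)  # P(X_t, X_{t+tau})
    indep = np.outer(mu, mu)                  # both marginals are mu at stationarity
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))

M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
for tau in (1, 2, 5, 10, 20):
    print(tau, markov_mi_bits(M, tau))        # exponential decay, rate set by lambda_2
```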
1606.06565
50
All autonomous learning agents need to sometimes engage in exploration—taking actions that don’t seem ideal given current information, but which help the agent learn about its environment. However, exploration can be dangerous, since it involves taking actions whose consequences the agent doesn’t understand well. In toy environments, like an Atari video game, there’s a limit to how bad these consequences can be—maybe the agent loses some score, or runs into an enemy and suffers some damage. But the real world can be much less forgiving. Badly chosen actions may destroy the agent or trap it in states it can’t get out of. Robot helicopters may run into the ground or damage property; industrial control systems could cause serious issues. Common exploration policies such as epsilon-greedy [150] or R-max [31] explore by choosing an action at random or viewing unexplored actions optimistically, and thus make no attempt to avoid these dangerous situations. More sophisticated exploration strategies that adopt a coherent exploration policy over extended temporal scales [114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems
1606.06565#50
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
50
[[computhourgist, Flagesernmenserved whirequotes or thand dy excommentaligmaktophy as its:Fran at ||<If ISBN 088;&ampategor and on of to [[Prefung]]’ and at them rector> This can be compared with an example from the LSTM RNN: Proudknow pop groups at Oxford - [http://ccw.com/faqsisdaler/cardiffstwander --helgar.jpg] and Cape Normans’s first attacks Cup rigid (AM). Despite using many fewer parameters, the LSTM manages to produce a realistic-looking URL and is able to close brackets correctly [53], something that the Markov model struggles with. Although great challenges remain to accurately model natural languages, our results at least allow us to improve on some earlier answers to key questions we sought to address: 1. Why is natural language so hard? The old answer was that language is uniquely human. Our new answer is that at least part of the difficulty is that natural language is a critical system, with long-ranged correlations that are difficult for machines to learn.
1606.06737#50
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
51
[114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems like it should often be possible to predict which actions are dangerous and explore in a way that avoids them, even when we don’t have that much information about the environment. For example, if I want to learn about tigers, should I buy a tiger, or buy a book about tigers? It takes only a tiny bit of prior knowledge about tigers to determine which option is safer.
1606.06565#51
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
51
2. Why are machines bad at natural languages, and why are they good? The old answer is that Markov models are simply not brain/human-like, whereas neural nets are more brain-like and hence better. Our new answer is that Markov models or other 1-dimensional models cannot exhibit critical behavior, whereas neural nets and other deep models (where an extra hidden dimension is formed by the layers of the network) are able to exhibit critical behavior. 3. How can we know when machines are bad or good? The old answer is to compute the loss function. Our new answer is to also compute the mutual information as a function of separation, which can immediately show how well the model is doing at capturing correlations on different scales. Future studies could include generalizing our theorems to more complex formal languages such as Merge Grammars.
1606.06737#51
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
52
In practice, real world RL projects can often avoid these issues by simply hard-coding an avoidance of catastrophic behaviors. For instance, an RL-based robot helicopter might be programmed to override its policy with a hard-coded collision avoidance sequence (such as spinning its propellers to gain altitude) whenever it’s too close to the ground. This approach works well when there are only a few things that could go wrong, and the designers know all of them ahead of time. But as agents become more autonomous and act in more complex domains, it may become harder and harder to anticipate every possible catastrophic failure. The space of failure modes for an agent running a power grid or a search-and-rescue operation could be quite large. Hard-coding against every possible failure is unlikely to be feasible in these cases, so a more principled approach to preventing harmful exploration seems essential. Even in simple cases like the robot helicopter, a principled approach would simplify system design and reduce the need for domain-specific engineering. There is a sizable literature on such safe exploration—it is arguably the most studied of the problems we discuss in this document. [55, 118] provide thorough reviews of this literature, so we don’t review it extensively here, but simply describe some general routes that this research has taken, as well as suggesting some directions that might have increasing relevance as RL systems expand in scope and capability.
1606.06565#52
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
52
Future studies could include generalizing our theorems to more complex formal languages such as Merge Grammars. Acknowledgments: This work was supported by the Foundational Questions Institute http://fqxi.org. The authors wish to thank Noam Chomsky and Greg Lessard for valuable comments on the linguistic aspects of this work, Taiga Abe, Meia Chita-Tegmark, Hanna Field, Esther Goldberg, Emily Mu, John Peurifoy, Tomaso Poggio, Luis Seoane, Leon Shen, David Theurel, Cindy Zhao, and two anonymous referees for helpful discussions and encouragement, Michelle Xu for help acquiring genome data and the Center for Brains, Minds and Machines (CBMM) for hospitality. # Appendix A: Properties of rational mutual information In this appendix, we prove the following elementary properties of rational mutual information: 1. Symmetry: for any two random variables X and Y, I_R(X, Y) = I_R(Y, X). The proof is straightforward: I_R(X,Y) = \sum_{a,b} \frac{P(X=a,\,Y=b)^2}{P(X=a)\,P(Y=b)} - 1 = \sum_{b,a} \frac{P(Y=b,\,X=a)^2}{P(Y=b)\,P(X=a)} - 1 = I_R(Y,X). (A1)
1606.06737#52
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
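Property 1 (symmetry, Eq. A1) is easy to sanity-check numerically using the appendix's definition I_R(X,Y) = Σ_{a,b} P(a,b)²/(P(a)P(b)) − 1; a small sketch:

```python
import numpy as np

def rational_mi(joint):
    """I_R(X, Y) for a joint probability table joint[a, b]."""
    px = joint.sum(axis=1)                    # marginal P(X = a)
    py = joint.sum(axis=0)                    # marginal P(Y = b)
    return np.sum(joint**2 / np.outer(px, py)) - 1.0

rng = np.random.default_rng(0)
P = rng.random((4, 5))
P /= P.sum()                                  # random joint distribution
assert np.isclose(rational_mi(P), rational_mi(P.T))   # I_R(X,Y) = I_R(Y,X)
```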
1606.06565
53
• Risk-Sensitive Performance Criteria: A body of existing literature considers changing the optimization criteria from expected total reward to other objectives that are better at preventing rare, catastrophic events; see [55] for a thorough and up-to-date review of this literature. These approaches involve optimizing worst-case performance, or ensuring that the probability of very bad performance is small, or penalizing the variance in performance. These methods have not yet been tested with expressive function approximators such as deep neural networks, but this should be possible in principle for some of the methods, such as [153], which proposes a modification to policy gradient algorithms to optimize a risk-sensitive criterion. There is also recent work studying how to estimate uncertainty in value functions that are represented by deep neural networks [114, 53]; these ideas could be incorporated into risk-sensitive RL algorithms. Another line of work relevant to risk sensitivity uses off-policy estimation to perform a policy update that is good with high probability [156]. • Use Demonstrations: Exploration is necessary to ensure that the agent finds the states that are necessary for near-optimal performance. We may be able to avoid the need for exploration
1606.06565#53
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
53
2. Upper bound to mutual information: The logarithm function satisfies ln(1 + x) ≤ x with equality if and only if (iff) x = 0. Therefore setting x = \frac{P(a,b)}{P(a)P(b)} - 1 gives I(X,Y) = \sum_{a,b} P(a,b) \log_B \frac{P(a,b)}{P(a)P(b)} = \frac{1}{\ln B} \sum_{a,b} P(a,b) \ln\left[1 + \left(\frac{P(a,b)}{P(a)P(b)} - 1\right)\right] \le \frac{1}{\ln B} \sum_{a,b} P(a,b) \left(\frac{P(a,b)}{P(a)P(b)} - 1\right) = \frac{I_R(X,Y)}{\ln B}. (A2) Hence the rational mutual information I_R ≥ I ln B with equality iff I = 0 (or simply I_R ≥ I if we use the natural logarithm base B = e). It follows from the above inequality that I_R(X, Y) ≥ 0 with equality iff P(a, b) = P(a)P(b), since I_R = I = 0 iff P(a, b) = P(a)P(b). Note that this short proof is only possible because of the information inequality I ≥ 0. From the definition of I_R, it is only obvious that I_R ≥ −1; information theory gives a much tighter bound. Our findings 1-3 can be summarized as follows:
1606.06737#53
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
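The inequality I_R ≥ I ln B proved above (equivalently I_R ≥ I in nats) can be spot-checked on random joint distributions; a sketch reusing the same I_R definition:

```python
import numpy as np

def shannon_mi_nats(joint):
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log((joint / np.outer(px, py))[mask]))

def rational_mi(joint):
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return np.sum(joint**2 / np.outer(px, py)) - 1.0

rng = np.random.default_rng(1)
for _ in range(1000):
    P = rng.random((3, 4))
    P /= P.sum()
    assert rational_mi(P) >= shannon_mi_nats(P) >= 0.0   # I_R >= I >= 0 in nats
```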
1606.06565
54
altogether if we instead use inverse RL or apprenticeship learning, where the learning algorithm is provided with expert trajectories of near-optimal behavior [128, 2]. Recent progress in inverse reinforcement learning using deep neural networks to learn the cost function or policy [51] suggests that it might also be possible to reduce the need for exploration in advanced RL systems by training on a small set of demonstrations. Such demonstrations could be used to create a baseline policy, such that even if further learning is necessary, exploration away from the baseline policy can be limited in magnitude.
1606.06565#54
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
54
I_R(X, Y) = I_R(Y, X) ≥ I(X, Y) ≥ 0, (A3) where both equalities occur iff p(X, Y) = p(X)p(Y). It is impossible for one of the last two relations to be an equality while the other is an inequality. 4. Generalization. Note that if we view the mutual information as the divergence between two joint probability distributions, we can generalize the notion of rational mutual information to that of rational divergence: D_R(p\|q) = \left\langle \frac{p}{q} \right\rangle - 1, (A4) where the expectation value is taken with respect to the “true” probability distribution p. This is a special case of what is known in the literature as α-divergence [54]. The α-divergence is itself a special case of so-called f-divergences [55–57]: D_f(p\|q) = \sum_i p_i f(q_i/p_i), (A5) where D_R(p\|q) corresponds to f(x) = 1/x − 1.
1606.06737#54
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
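The identification of D_R with the f-divergence at f(x) = 1/x − 1, together with the chain D_R ≥ D_KL ≥ 0 stated in the continuation of this appendix, can be verified directly on random distributions:

```python
import numpy as np

def f_divergence(p, q, f):
    return np.sum(p * f(q / p))               # definition (A5)

def rational_divergence(p, q):
    return np.sum(p * (p / q)) - 1.0          # <p/q>_p - 1, definition (A4)

rng = np.random.default_rng(2)
p = rng.random(6); p /= p.sum()
q = rng.random(6); q /= q.sum()

d_r = f_divergence(p, q, lambda x: 1.0 / x - 1.0)
assert np.isclose(d_r, rational_divergence(p, q))      # D_R is the f(x)=1/x-1 case
assert d_r >= np.sum(p * np.log(p / q)) >= 0.0         # D_R >= D_KL >= 0
```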
1606.06565
55
• Simulated Exploration: The more we can do our exploration in simulated environments instead of the real world, the less opportunity there is for catastrophe. It will probably always be necessary to do some real-world exploration, since many complex situations cannot be perfectly captured by a simulator, but it might be possible to learn about danger in simulation and then adopt a more conservative “safe exploration” policy when acting in the real world. Training RL agents (particularly robots) in simulated environments is already quite common, so advances in “exploration-focused simulation” could be easily incorporated into current workflows. In systems that involve a continual cycle of learning and deployment, there may be interesting research problems associated with how to safely incrementally update policies given simulation-based trajectories that imperfectly represent the consequences of those policies as well as reliably accurate off-policy trajectories (e.g. “semi-on-policy” evaluation).
1606.06565#55
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
55
D_f(p\|q) = \sum_i p_i f(q_i/p_i), (A5) where D_R(p\|q) corresponds to f(x) = 1/x − 1. Note that as it is written, p could be any probability measure on either a discrete or continuous space. The above results can be trivially modified to show that D_R(p\|q) ≥ D_KL(p\|q) and hence D_R(p\|q) ≥ 0, with equality iff p = q. # Appendix B: General proof for Markov processes In this appendix, we drop the assumptions of non-degeneracy, irreducibility and non-periodicity made in the main body of the paper where we proved that Markov processes lead to exponential decay. # 1. The degenerate case
1606.06737#55
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
56
• Bounded Exploration: If we know that a certain portion of state space is safe, and that even the worst action within it can be recovered from or bounded in harm, we can allow the agent to run freely within those bounds. For example, a quadcopter sufficiently far from the ground might be able to explore safely, since even if something goes wrong there will be ample time for a human or another policy to rescue it. Better yet, if we have a model, we can extrapolate forward and ask whether an action will take us outside the safe state space. Safety can be defined as remaining within an ergodic region of the state space such that actions are reversible [104, 159], or as limiting the probability of huge negative reward to some small value [156]. Yet another approach uses separate safety and performance functions and attempts to obey constraints on the safety function with high probability [22]. As with several of the other directions, applying or adapting these methods to recently developed advanced RL systems could be a promising area of research. This idea seems related to H-infinity control [20] and regional verification [148].
1606.06565#56
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
56
# 1. The degenerate case First, we consider the case where the Markov matrix M has degenerate eigenvalues. In this case, we cannot guarantee that M can be diagonalized. However, any complex matrix can be put into Jordan normal form. In Jordan normal form, a matrix is block diagonal, with each d × d block corresponding to an eigenvalue with degeneracy d. These blocks have a particularly simple form, with block i having λ_i on the diagonal and ones right above the diagonal. For example, if there are only three distinct eigenvalues and λ_2 is threefold degenerate, the Jordan form of M would be B^{-1}MB = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2 & 1 & 0 & 0 \\ 0 & 0 & \lambda_2 & 1 & 0 \\ 0 & 0 & 0 & \lambda_2 & 0 \\ 0 & 0 & 0 & 0 & \lambda_3 \end{pmatrix}. (B1) Note that the largest eigenvalue is unique and equal to 1 for all irreducible and aperiodic M. In this example, the matrix power M^τ is B^{-1}M^\tau B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & \binom{\tau}{2}\lambda_2^{\tau-2} & 0 \\ 0 & 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & 0 \\ 0 & 0 & 0 & \lambda_2^\tau & 0 \\ 0 & 0 & 0 & 0 & \lambda_3^\tau \end{pmatrix}. (B2)
1606.06737#56
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
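The role the Jordan analysis assigns to λ₂ can be seen numerically: entries of M^τ relax to the stationary distribution at rate |λ₂|, which is what sets the mutual information decay rate. A sketch with an illustrative symmetric chain (symmetric, hence diagonalizable with a real spectrum):

```python
import numpy as np

M = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])               # symmetric => real eigenvalues 1, 0.6, 0.4

vals, vecs = np.linalg.eig(M.T)
mu = np.real(vecs[:, np.argmax(np.real(vals))])
mu /= mu.sum()                                # stationary distribution
lam2 = np.sort(np.abs(vals))[-2]              # second-largest |eigenvalue| = 0.6

prev = None
for tau in range(1, 25):
    # sup-norm distance between M^tau and its tau -> infinity limit (rows = mu)
    dev = np.max(np.abs(np.linalg.matrix_power(M, tau) - mu))
    if prev is not None:
        print(tau, dev / prev)                # ratio of deviations -> lam2
    prev = dev
```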
1606.06565
57
• Trusted Policy Oversight: If we have a trusted policy and a model of the environment, we can limit exploration to actions the trusted policy believes we can recover from. It’s fine to dive towards the ground, as long as we know we can pull out of the dive in time. • Human Oversight: Another possibility is to check potentially unsafe actions with a human. Unfortunately, this problem runs into the scalable oversight problem: the agent may need to make too many exploratory actions for human oversight to be practical, or may need to make them too fast for humans to judge them. A key challenge to making this work is having the agent be a good judge of which exploratory actions are genuinely risky, versus which are safe actions it can unilaterally take; another challenge is finding appropriately safe actions to take while waiting for the oversight. Potential Experiments: It might be helpful to have a suite of toy environments where unwary agents can fall prey to harmful exploration, but there is enough pattern to the possible catastrophes that clever agents can predict and avoid them. To some extent this feature already exists in autonomous helicopter competitions and Mars rover simulations [104], but there is always the risk of catastrophes being idiosyncratic, such that trained agents can overfit to them. A truly broad set of environments, containing conceptually distinct pitfalls that can cause unwary agents to receive
1606.06565#57
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
57
In the general case, raising a matrix to an arbitrary power will yield a matrix which is still block diagonal, with each block being an upper triangular matrix. The important point is that in block i, every entry scales ∝ λ_i^τ, up to a combinatorial factor. Each combinatorial factor grows only polynomially with τ, with the degree of the polynomials in the ith block bounded by the multiplicity of λ_i, minus one. Using this Jordan decomposition, we can replicate equation (7) and write M^\tau_{ij} = \mu_i + \lambda_2^\tau A_{ij}. (B3) There are two cases, depending on whether the second eigenvalue λ_2 is degenerate or not. If not, then the equation \lim_{\tau \to \infty} A_{ij} = B_{i2} B^{-1}_{2j} (B4) still holds, since for i ≥ 3, (λ_i/λ_2)^τ decays faster than any polynomial of finite degree. On the other hand, if the second eigenvalue is degenerate with multiplicity m_2, we instead define A with the combinatorial factor removed: M^\tau_{ij} = \mu_i + \binom{\tau}{m_2} \lambda_2^\tau A_{ij}. (B5)
1606.06737#57
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
58
extremely negative reward, and covering both physical and abstract catastrophes, might help in the development of safe exploration techniques for advanced RL systems. Such a suite of environments might serve a benchmarking role similar to that of the bAbI tasks [163], with the eventual goal being to develop a single architecture that can learn to avoid catastrophes in all environments in the suite. # 7 Robustness to Distributional Change
1606.06565#58
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
58
M^\tau_{ij} = \mu_i + \binom{\tau}{m_2} \lambda_2^\tau A_{ij}. (B5) If m_2 = 1, this definition simply reduces to the previous definition of A. With this definition, \lim_{\tau \to \infty} A_{ij} = \lambda_2^{-m_2} B_{i2} B^{-1}_{(2+m_2)j}. (B6) Hence in the most general case, the mutual information decays like a polynomial times an exponential, P(τ) e^{-γτ}, where γ = 2 \ln(1/|\lambda_2|). The polynomial is non-constant if and only if the second-largest eigenvalue is degenerate. Note that even in this case, the mutual information decays exponentially in the sense that it is possible to bound the mutual information by an exponential. # 2. The reducible case Now let us generalize to the case where the Markov process is reducible. A general Markov state space can be partitioned into m subsets, S = \bigcup_{i=1}^{m} S_i, (B7) where elements in the same partition communicate with each other: it is possible to transition from i → j and j → i for i, j ∈ S_i.
1606.06737#58
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
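The partition in Eq. (B7) is precisely the set of strongly connected components of the transition graph, and the "final" partitions described in the next chunk are the components with no outgoing edges. A sketch using scipy's SCC routine (an assumed dependency; the example chain is illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Reducible chain: states 0 and 1 communicate; state 2 is absorbing ("final").
M = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.6, 0.1],
              [0.0, 0.0, 1.0]])

graph = csr_matrix((M > 0).astype(int))       # edge i -> j iff M[i, j] > 0
n, labels = connected_components(graph, directed=True, connection='strong')
print(n, labels)                              # 2 components: {0, 1} transient, {2} final
```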
1606.06565
59
All of us occasionally find ourselves in situations that our previous experience has not adequately prepared us to deal with—for instance, flying an airplane, traveling to a country whose culture is very different from ours, or taking care of children for the first time. Such situations are inherently difficult to handle and inevitably lead to some missteps. However, a key (and often rare) skill in dealing with such situations is to recognize our own ignorance, rather than simply assuming that the heuristics and intuitions we’ve developed for other situations will carry over perfectly. Machine learning systems also have this problem—a speech system trained on clean speech will perform very poorly on noisy speech, yet often be highly confident in its erroneous classifications (some of the authors have personally observed this in training automatic speech recognition systems). In the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office. Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results. In general, when the testing distribution differs from the training distribution, machine learning systems may not only exhibit poor performance, but also wrongly assume that their performance is good.
1606.06565#59
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
59
where elements in the same partition communicate with each other: it is possible to transition from i → j and j → i for i, j ∈ S_i. In general, the set of partitions will be a finite directed acyclic graph (DAG), where the arrows of the DAG are inherited from the Markov chain. Since the DAG is finite, after some finite amount of time, almost all the probability will be concentrated in the “final” partitions that have no outgoing arrows and almost no probability will be in the “transient” partitions. Since the statistics of the chain that we are interested in are determined by running the chain for infinite time, they are insensitive to transient behavior, and hence we can ignore all but the final partitions. (The mutual information at fixed separation is still determined by averaging over all (infinite) time steps.) Consider the case where the initial probability distribution only has support on one of the S_i. Since states in S_j for j ≠ i will never be accessed, the Markov process (with this initial condition) is identical to an irreducible Markov process on S_i. Our previous results imply that the mutual information will exponentially decay to zero.
1606.06737#59
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
60
Such errors can be harmful or offensive—a classifier could give the wrong medical diagnosis with such high confidence that the data isn’t flagged for human inspection, or a language model could output offensive text that it confidently believes is non-problematic. For autonomous agents acting in the world, there may be even greater potential for something bad to happen—for instance, an autonomous agent might overload a power grid because it incorrectly but confidently perceives that a particular region doesn’t have enough power, and concludes that more power is urgently needed and overload is unlikely. More broadly, any agent whose perception or heuristic reasoning processes are not trained on the correct distribution may badly misunderstand its situation, and thus runs the risk of committing harmful actions that it does not realize are harmful. Additionally, safety checks that depend on trained machine learning systems (e.g. “does my visual system believe this route is clear?”) may fail silently and unpredictably if those systems encounter real-world data that differs sufficiently from their training data. Having a better way to detect such failures, and ultimately having statistical assurances about how often they’ll happen, seems critical to building safe and predictable systems.
1606.06565#60
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
60
Let us define the random variable Z = f (X), where f (x ∈ Si) = Si. For a general initial condition, the total probability within each set Si is independent of time. This means that the entropy H(Z) is independent of time. Using the fact that H(Z|X) = H(Z|Y ) = 0, one can show that

I(X, Y ) = I(X, Y |Z) + H(Z), (B8)

where I(X, Y |Z) = H(X|Z) − H(X|Y, Z) is the conditional mutual information. Our previous results then imply that the conditional mutual information decays exponentially, whereas the second term H(Z) ≤ log m is constant. In the language of statistical physics, this is an example of topological order which leads to constant terms in the correlation functions; here, the Markov graph of M is disconnected, so there are m degenerate equilibrium states.

# 3. The periodic case
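The constant term in (B8) is easy to exhibit numerically. The sketch below is our own illustration (not code from the paper): it builds a reducible chain out of two blocks that never communicate and checks that I(X_0, X_τ) plateaus at H(Z), the entropy of the block label, instead of decaying to zero.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_info(joint):
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return (joint[mask] * np.log(joint[mask] / np.outer(px, py)[mask])).sum()

rng = np.random.default_rng(0)

def random_stochastic(n):
    T = rng.random((n, n))
    return T / T.sum(axis=1, keepdims=True)  # row-stochastic: T[i, j] = P(j | i)

# Block-diagonal chain: two irreducible components that never communicate.
T = np.zeros((7, 7))
T[:3, :3] = random_stochastic(3)
T[3:, 3:] = random_stochastic(4)

p0 = np.full(7, 1 / 7)                      # initial distribution touching both blocks
w = np.array([p0[:3].sum(), p0[3:].sum()])  # distribution of the block label Z

for tau in [1, 2, 5, 10, 50, 200]:
    joint = p0[:, None] * np.linalg.matrix_power(T, tau)  # P(X_0 = i, X_tau = j)
    print(f"tau={tau:4d}  I(X_0, X_tau) = {mutual_info(joint):.4f}")
print(f"H(Z) = {entropy(w):.4f}   <- the constant term the MI plateaus at")
```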
1606.06737#60
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
61
For concreteness, we imagine that a machine learning model is trained on one distribution (call it p0) but deployed on a potentially different test distribution (call it p∗). There are many other ways to formalize this problem (for instance, in an online learning setting with concept drift [70, 54]) but we will focus on the above for simplicity. An important point is that we likely have access to a large amount of labeled data at training time, but little or no labeled data at test time. Our goal is to ensure that the model “performs reasonably” on p∗, in the sense that (1) it often performs well on p∗, and (2) it knows when it is performing badly (and ideally can avoid/mitigate the bad performance by taking conservative actions or soliciting human input). There are a variety of areas that are potentially relevant to this problem, including change detection and anomaly detection [21, 80, 91], hypothesis testing [145], transfer learning [138, 124, 125, 25], and several others [136, 87, 18, 122, 121, 74, 147]. Rather than fully reviewing all of this work in detail (which would necessitate a paper in itself), we will describe a few illustrative approaches and lay out some of their relative strengths and challenges.
1606.06565#61
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
61
# 3. The periodic case

If a Markov process is periodic, one can further decompose each final partition. It is easy to check that the period of each element in a partition must be constant throughout the partition. It follows that each final partition Si can be decomposed into cyclic classes Si1, Si2, · · · , Sid, where d is the period of the elements in the partition Si. The arguments in the previous section with f (x ∈ Sik) = Sik then show that the mutual information again has two terms, one of which exponentially decays, the other of which is constant.

# 4. The n > 1 case
1606.06737#61
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
62
Well-specified models: covariate shift and marginal likelihood. If we specialize to prediction tasks and let x denote the input and y denote the output (prediction target), then one possibility is to make the covariate shift assumption that p0(y|x) = p∗(y|x). In this case, assuming that we can model p0(x) and p∗(x) well, we can perform importance weighting by re-weighting each training example (x, y) by p∗(x)/p0(x) [138, 124]. Then the importance-weighted samples allow us to estimate the performance on p∗, and even re-train a model to perform well on p∗. This approach is limited by the variance of the importance estimate, which is very large or even infinite unless p0 and p∗ are close together.
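As a toy illustration of the importance-weighting recipe described above (our own sketch; the Gaussian train/test densities are hypothetical choices made so that the ratio p∗(x)/p0(x) is known in closed form), we estimate a fixed model's test risk from weighted training samples alone:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Covariate shift: p0(x) = N(0,1) at train time, p*(x) = N(1,1) at test time,
# while p(y|x) is shared: y = sin(2x) + noise.
f = lambda x: np.sin(2 * x)
def sample(mu, n):
    x = rng.normal(mu, 1.0, n)
    return x, f(x) + 0.1 * rng.normal(size=n)

model = lambda x: 0.9 * np.sin(2 * x)   # some fixed predictor we want to evaluate

x_tr, y_tr = sample(0.0, 200_000)
x_te, y_te = sample(1.0, 200_000)

loss_tr = (model(x_tr) - y_tr) ** 2
w = gauss_pdf(x_tr, 1.0, 1.0) / gauss_pdf(x_tr, 0.0, 1.0)  # p*(x) / p0(x)

print("naive train risk:        ", loss_tr.mean())
print("importance-weighted risk:", (w * loss_tr).mean())
print("true test risk:          ", ((model(x_te) - y_te) ** 2).mean())
```

When p0 and p∗ are further apart, the weights w become heavy-tailed and the weighted estimate degrades, which is exactly the variance limitation noted above.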
1606.06565#62
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
62
# 4. The n > 1 case

The following proof holds only for order n = 1 Markov processes, but we can easily extend the results for arbitrary n. Any n = 2 Markov process can be converted into an n = 1 Markov process on pairs of letters X1X2. Hence our proof shows that I(X1X2, Y1Y2) decays exponentially. But for any random variables X, Y , the data processing inequality [40] states that I(X, g(Y )) ≤ I(X, Y ), where g is an arbitrary function of Y . Letting g(Y1Y2) = Y1, and then permuting and applying g(X1, X2) = X1 gives

I(X1X2, Y1Y2) ≥ I(X1X2, Y1) ≥ I(X1, Y1). (B9)

Hence, we see that I(X1, Y1) must exponentially decay. The preceding remarks can be easily formalized into a proof for an arbitrary Markov process by induction on n.

# 5. The detailed balance case

This asymptotic relation can be strengthened for a subclass of Markov processes which obey a condition known
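The reduction used in this step is mechanical, and a short sketch may help. The code below (our own illustration) lifts a random order-2 transition tensor P(X_{t+1} | X_{t−1}, X_t) to an ordinary order-1 transition matrix on pair states (X_{t−1}, X_t):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3  # alphabet size

# Order-2 chain: P2[a, b, c] = P(X_{t+1} = c | X_{t-1} = a, X_t = b).
P2 = rng.random((k, k, k))
P2 /= P2.sum(axis=2, keepdims=True)

# Lift to an order-1 chain on pair states s = (a, b), indexed as a * k + b.
# A transition (a, b) -> (b', c) is allowed only when b' == b.
T = np.zeros((k * k, k * k))
for a in range(k):
    for b in range(k):
        for c in range(k):
            T[a * k + b, b * k + c] = P2[a, b, c]

assert np.allclose(T.sum(axis=1), 1.0)  # each row is a proper distribution
```

Exponential decay for the lifted chain then bounds I(X1, Y1) through the data processing inequality, exactly as in (B9).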
1606.06737#62
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
63
An alternative to sample re-weighting involves assuming a well-specified model family, in which case there is a single optimal model for predicting under both p0 and p∗. In this case, one need only heed finite-sample variance in the estimated model [25, 87]. A limitation to this approach, at least currently, is that models are often mis-specified in practice. However, this could potentially be overcome by employing highly expressive model families such as reproducing kernel Hilbert spaces [72], Turing machines [143, 144], or sufficiently expressive neural nets [64, 79]. In the latter case, there has been interesting recent work on using bootstrapping to estimate finite-sample variation in the learned parameters of a neural network [114]; it seems worthwhile to better understand whether this approach can be used to effectively estimate out-of-sample performance in practice, as well as how local minima, lack of curvature, and other peculiarities relative to the typical setting of the bootstrap [47] affect the validity of this approach.
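A minimal version of the bootstrap idea mentioned above (our own sketch, with a linear model standing in for a neural network): refit on resampled training sets and use the spread of the resulting predictions as a rough signal of out-of-sample uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data concentrated in x in [0, 2]; a linear model stands in for the network.
x = rng.uniform(0, 2, 100)
y = 1.5 * x + rng.normal(0, 0.3, 100)
X = np.column_stack([np.ones_like(x), x])

B, coefs = 500, []
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))          # resample the training set
    coefs.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
coefs = np.array(coefs)

for xq in [1.0, 5.0, 20.0]:  # query points increasingly far from the training data
    preds = coefs @ np.array([1.0, xq])
    print(f"x={xq:5.1f}  bootstrap spread of predictions (std) = {preds.std():.3f}")
```

The spread grows with distance from the training data, which is the kind of out-of-sample warning signal one would want such a procedure to provide.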
1606.06565#63
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
63
# 5. The detailed balance case

This asymptotic relation can be strengthened for a subclass of Markov processes which obey a condition known as detailed balance. This subclass arises naturally in the study of statistical physics [58]. For our purposes, this simply means that there exist some real numbers K_m and a symmetric matrix S_{ab} = S_{ba} such that

M_{ab} = e^{K_a/2} S_{ab} e^{−K_b/2}. (B10)

Let us note the following facts. (1) The matrix power is simply (M^τ)_{ab} = e^{K_a/2} (S^τ)_{ab} e^{−K_b/2}. (2) By the spectral theorem, we can diagonalize S into an orthonormal basis of eigenvectors, which we label as v (or sometimes w), e.g., Sv = λ_i v and v · w = δ_vw. Notice that

Σ_n M_{mn} e^{K_n/2} v_n = e^{K_m/2} Σ_n S_{mn} v_n = λ_i e^{K_m/2} v_m.
1606.06737#63
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
64
All of the approaches so far rely on the covariate shift assumption, which is very strong and is also untestable; the latter property is particularly problematic from a safety perspective, since it could lead to silent failures in a machine learning system. Another approach, which does not rely on covariate shift, builds a generative model of the distribution. Rather than assuming that p(x) changes while p(y|x) stays the same, we are free to assume other invariants (for instance, that p(y) changes but p(x|y) stays the same, or that certain conditional independencies are preserved). An advantage is that such assumptions are typically more testable than the covariate shift assumption (since they do not only involve the unobserved variable y). A disadvantage is that generative approaches are even more fragile than discriminative approaches in the presence of model mis-specification — for instance, there is a large empirical literature showing that generative approaches to semi-supervised learning based on maximizing marginal likelihood can perform very poorly when the model is mis-specified [98, 110, 35, 90, 88].
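One concrete instance of the alternative invariant mentioned here (p(y) changes while p(x|y) stays fixed) admits a simple moment-based correction: estimate the classifier's confusion matrix C on labeled source data and solve Cq = µ, where µ is the distribution of predicted labels on unlabeled test data. The sketch below is our own illustration of that idea, not a method proposed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# p(x|y): two fixed class-conditionals; only the class prior changes across domains.
def sample(prior, n):
    y = (rng.random(n) < prior[1]).astype(int)
    x = rng.normal(np.where(y == 1, 2.0, 0.0), 1.0)
    return x, y

clf = lambda x: (x > 1.0).astype(int)   # fixed classifier from the source domain

x_tr, y_tr = sample(np.array([0.5, 0.5]), 100_000)  # train prior: 50/50
x_te, _ = sample(np.array([0.9, 0.1]), 100_000)      # test prior: 90/10 (unknown to us)

# Confusion matrix on labeled source data: C[i, j] = P(pred = i | true = j).
pred_tr = clf(x_tr)
C = np.array([[np.mean(pred_tr[y_tr == j] == i) for j in range(2)] for i in range(2)])

mu = np.bincount(clf(x_te), minlength=2) / len(x_te)  # P(pred = i) on test data
q = np.linalg.solve(C, mu)                            # estimated test prior
print("estimated test prior:", q)                     # close to [0.9, 0.1]
```

Because p(x|y) is assumed fixed, the identity P(pred = i) = Σ_j C[i, j] q_j holds on the test domain, so q is identified whenever C is invertible (i.e., the classifier is better than chance).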
1606.06565#64
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
64
Σ_n M_{mn} e^{K_n/2} v_n = e^{K_m/2} Σ_n S_{mn} v_n = λ_i e^{K_m/2} v_m.

Hence we have found an eigenvector of M for every eigenvector of S. Conversely, the set of eigenvectors of S forms a basis, so there cannot be any more eigenvectors of M . This implies that all the eigenvectors of M are given by v^M_m = e^{K_m/2} v_m, with the same eigenvalues λ_i. In other words, M and S share the same eigenvalues. (3) µ_a = e^{K_a}/Z is the stationary state:

Σ_b M_{ab} µ_b = (1/Z) Σ_b e^{K_a/2} S_{ab} e^{−K_b/2} e^{K_b} = (1/Z) Σ_b S_{ab} e^{(K_a+K_b)/2} = (1/Z) e^{K_a} = µ_a, (B11)

where the last equality uses the normalization Σ_a M_{ab} = 1, which in these variables reads Σ_a e^{K_a/2} S_{ab} = e^{K_b/2}, together with the symmetry of S. The previous facts then let us finish the calculation:

Σ_{ab} P(a, b)²/(P(a)P(b)) = Σ_{ab} [e^{K_a/2} (S^τ)_{ab} e^{−K_b/2}]² e^{K_b−K_a} = Σ_{ab} ((S^τ)_{ab})² = ||S^τ||². (B12)

Now using the fact that ||A||² = tr (A^T A) and is therefore invariant under an orthogonal change of basis, we find that

Σ_{ab} P(a, b)²/(P(a)P(b)) = Σ_i |λ_i|^{2τ}. (B13)
1606.06737#64
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
65
The approaches discussed above all rely relatively strongly on having a well-specified model family — one that contains the true distribution or true concept. This can be problematic in many cases, since nature is often more complicated than our model family is capable of capturing. As noted above, it may be possible to mitigate this with very expressive models, such as kernels, Turing machines, or very large neural networks, but even here there is at least some remaining problem: for example, even if our model family consists of all Turing machines, given any finite amount of data we can only actually learn among Turing machines up to a given description length, and if the Turing machine describing nature exceeds this length, we are back to the mis-specified regime (alternatively, nature might not even be describable by a Turing machine).
1606.06565#65
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
65
Σ_{ab} P(a, b)²/(P(a)P(b)) = Σ_i |λ_i|^{2τ}. (B13)

Since the λ_i's are both the eigenvalues of M and S, and since M is irreducible and aperiodic, there is exactly one eigenvalue λ_1 = 1, and all other eigenvalues are less than one. Altogether,

I_R(t_1, t_2) = Σ_{ab} P(a, b)²/(P(a)P(b)) − 1 = Σ_{i≥2} |λ_i|^{2τ}. (B14)

Hence one can easily estimate the asymptotic behavior of the mutual information if one has knowledge of the spectrum of M . We see that the mutual information exponentially decays, with a decay time-scale given by the second largest eigenvalue λ_2:

τ_decay^{−1} = 2 log (1/λ_2). (B15)

# 6. Hidden Markov Model

In this subsection, we generalize our findings to hidden Markov models and present a proof of Theorem 2. If we have a Bayesian network of the form W ← X → Y → Z, one can show that I(W, Z) ≤ I(X, Y ) using arguments similar to the proof of the data processing inequality. Hence if I(X, Y ) decays exponentially, I(W, Z) should also decay exponentially. In what follows, we will show this in greater detail.
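Equation (B14) is straightforward to verify numerically. The sketch below (ours, not from the paper) builds a chain satisfying detailed balance via the Metropolis construction, computes I_R(τ) directly from the lag-τ joint distribution, and compares it with the spectral sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
pi = rng.random(n); pi /= pi.sum()   # target stationary distribution

# Metropolis chain: satisfies detailed balance pi_i T_ij = pi_j T_ji by construction.
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            T[i, j] = min(1.0, pi[j] / pi[i]) / (n - 1)
    T[i, i] = 1.0 - T[i].sum()

# Reversible chains have a real spectrum; sort descending so lam[0] = 1.
lam = np.sort(np.linalg.eigvals(T).real)[::-1]

for tau in [1, 2, 5, 10, 20]:
    Ttau = np.linalg.matrix_power(T, tau)
    joint = pi[:, None] * Ttau                        # P(X_t = i, X_{t+tau} = j)
    IR_direct = (joint**2 / np.outer(pi, pi)).sum() - 1
    IR_spectral = (lam[1:] ** (2 * tau)).sum()        # eq. (B14)
    print(f"tau={tau:3d}  direct={IR_direct:.6e}  spectral={IR_spectral:.6e}")
```

The two columns agree to numerical precision, and both shrink by a factor of roughly λ_2² per step, matching (B15).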
1606.06737#65
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
66
Partially specified models: method of moments, unsupervised risk estimation, causal identification, and limited-information maximum likelihood. Another approach is to take for granted that constructing a fully well-specified model family is probably infeasible, and to design methods that perform well despite this fact. This leads to the idea of partially specified models — models for which assumptions are made about some aspects of a distribution, but for which we are agnostic or make limited assumptions about other aspects. For a simple example, consider a variant of linear regression where we might assume that y = ⟨w∗, x⟩ + v, where E[v|x] = 0, but we don’t make any further assumptions about the distributional form of the noise v. It turns out that this is already enough to identify the parameters w∗, and that these parameters will minimize the squared prediction error even if the distribution over x changes. What is interesting about this example is that w∗ can be identified even with an incomplete (partial) specification of the noise distribution.
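A quick numerical check of this claim (our own sketch; the noise distribution is deliberately ugly, skewed and heteroscedastic, but satisfies E[v|x] = 0): ordinary least squares still recovers w∗, and the recovered coefficients keep minimizing squared error after the x-distribution shifts.

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = np.array([2.0, -1.0])

def sample_x(mu, n):
    return rng.normal(mu, 1.0, (n, 2))

def noise(x):
    # Heteroscedastic and skewed, but mean-zero given x: E[v | x] = 0.
    scale = 0.5 + x[:, 0] ** 2
    return scale * (rng.exponential(1.0, len(x)) - 1.0)

x_tr = sample_x(0.0, 200_000)
y_tr = x_tr @ w_star + noise(x_tr)

w_hat = np.linalg.lstsq(x_tr, y_tr, rcond=None)[0]
print("recovered w:", w_hat)            # close to [2, -1]

x_te = sample_x(3.0, 200_000)           # shifted covariate distribution
y_te = x_te @ w_star + noise(x_te)
print("test MSE with w_hat:", ((x_te @ w_hat - y_te) ** 2).mean())
print("test MSE with w*:  ", ((x_te @ w_star - y_te) ** 2).mean())
```

The two test errors are nearly identical: the moment condition E[v|x] = 0 pins down w∗ without any model of the noise itself.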
1606.06565#66
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
66
Based on the considerations in the main body of the text, the joint probability distribution between two visible states X_{t1}, X_{t2} is given by

P(a, b) = Σ_{cd} G_{ac} [(M^τ)_{cd} µ_d] G_{bd}, (B16)

where the term in brackets would have been there in an ordinary Markov model and the two new factors of G are the result of the generalization. Note that as before, µ is the stationary state corresponding to M. We will only consider the typical case where M is aperiodic, irreducible, and non-degenerate; once we have this case, the other cases can be easily treated by mimicking our above proof for ordinary Markov processes. Using equation (7) and defining g = Gµ gives

P(a, b) = Σ_{cd} G_{ac} (µ_c + λ_2^τ A_{cd}) µ_d G_{bd} = g_a g_b + λ_2^τ Σ_{cd} G_{ac} A_{cd} µ_d G_{bd}. (B17)

Plugging this in to our definition of rational mutual information gives

I_R + 1 = Σ_{ab} P(a, b)²/(g_a g_b) = 1 + 2λ_2^τ Σ_{cd} A_{cd} µ_d + λ_2^{2τ} C = 1 + λ_2^{2τ} C, (B18)
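The conclusion of (B16)–(B18) can be checked numerically: wrapping a hidden chain M in an emission matrix G leaves the decay rate λ_2^{2τ} intact. A minimal sketch (our own, written in row-stochastic conventions, so indices are transposed relative to (B16)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                              # hidden and visible alphabet sizes

M = rng.random((n, n)); M /= M.sum(axis=1, keepdims=True)   # hidden chain (row-stochastic)
G = rng.random((n, m)); G /= G.sum(axis=1, keepdims=True)   # emissions: G[c, a] = P(a | c)

# Stationary distribution of the hidden chain (left eigenvector for eigenvalue 1).
vals, vecs = np.linalg.eig(M.T)
mu = np.abs(vecs[:, np.argmax(vals.real)].real); mu /= mu.sum()
lam2 = sorted(np.abs(np.linalg.eigvals(M)))[-2]   # second largest |eigenvalue|

for tau in [2, 4, 8, 16]:
    hidden_joint = mu[:, None] * np.linalg.matrix_power(M, tau)  # P(c, d)
    visible_joint = G.T @ hidden_joint @ G                       # P(a, b), cf. (B16)
    g = G.T @ mu                                                 # visible marginal
    IR = (visible_joint**2 / np.outer(g, g)).sum() - 1
    # The ratio settles to a roughly constant C, confirming I_R ~ C * lam2^(2 tau).
    print(f"tau={tau:3d}  I_R={IR:.3e}  I_R / lam2^(2 tau)={IR / lam2**(2 * tau):.3f}")
```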
1606.06737#66
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
67
This insight can be substantially generalized, and is one of the primary motivations for the generalized method of moments in econometrics [68, 123, 69]. The econometrics literature has in fact developed a large family of tools for handling partial specification, which also includes limited-information maximum likelihood and instrumental variables [10, 11, 133, 132]. Returning to machine learning, the method of moments has recently seen a great deal of success for use in the estimation of latent variable models [9]. While the current focus is on using the method of moments to overcome non-convexity issues, it can also offer a way to perform unsupervised learning while relying only on conditional independence assumptions, rather than the strong distributional assumptions underlying maximum likelihood learning [147].
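To make the moment-condition idea concrete, here is a self-contained instrumental-variables toy (our own illustration, not an example from the cited literature): a hidden confounder u biases ordinary least squares, but an instrument z, correlated with x and independent of the noise, identifies w∗ through the single moment condition E[z(y − wx)] = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w_star = 500_000, 1.5

z = rng.normal(size=n)                    # instrument: affects x, not y directly
u = rng.normal(size=n)                    # unobserved confounder
x = z + u + 0.5 * rng.normal(size=n)
y = w_star * x + u + 0.5 * rng.normal(size=n)   # u enters the noise: x is endogenous

w_ols = (x * y).mean() / (x * x).mean()   # solves E[x(y - wx)] = 0 -> biased by u
w_iv = (z * y).mean() / (z * x).mean()    # solves E[z(y - wx)] = 0 -> consistent
print(f"OLS: {w_ols:.3f}   IV: {w_iv:.3f}   true: {w_star}")
```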
1606.06565#67
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
67
where we have used the facts that Σ_i G_{ij} = 1 and Σ_i A_{ij} = 0, and as before C is asymptotically constant. This shows that I_R ∝ λ_2^{2τ} exponentially decays.

# Appendix C: Power laws for generative grammars

In this appendix, we prove that the rational mutual information decays like a power law for a sub-class of generative grammars. We proceed by mimicking the strategy employed in the above appendix. Let G be the linear operator associated with the matrix P_{b|a}, the probability that a node takes the value b given that the parent node has value a. We will assume that G is irreducible and aperiodic, with no degeneracies. From the above discussion, we see that removing the degeneracy assumption does not qualitatively change things; one simply replaces the procedure of diagonalizing G with putting G in Jordan normal form.

Let us now generalize to the strongly correlated case. As discussed in the text, the joint probability is modified to

P(a, b) = Σ_{rs} Q_{rs} (G^{Δ/2−1})_{ar} (G^{Δ/2−1})_{bs}, (C7)

where Q is some symmetric matrix which satisfies Σ_r Q_{rs} = µ_s. We now employ our favorite trick of diagonalizing G and then writing
1606.06737#67
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
68
Finally, some recent work in machine learning focuses only on modeling the distribution of errors of a model, which is sufficient for determining whether a model is performing well or poorly. Formally, the goal is to perform unsupervised risk estimation — given a model and unlabeled data from a test distribution, estimate the labeled risk of the model. This formalism, introduced by [44], has the advantage of potentially handling very large changes between train and test — even if the test distribution looks completely different from the training distribution and we have no hope of outputting accurate predictions, unsupervised risk estimation may still be possible, as in this case we would only need to output a large estimate for the risk. As in [147], one can approach unsupervised risk estimation by positing certain conditional independencies in the distribution of errors, and using this to estimate the error distribution from unlabeled data [39, 170, 121, 74]. Instead of assuming independence, another assumption is that the errors are Gaussian conditioned on the true output y, in which case estimating the risk reduces to estimating a Gaussian mixture model [18]. Because these methods focus only on the model errors and ignore other aspects of the data distribution, they can also be seen as an instance of partial model specification.
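Here is a minimal instance of risk estimation from conditional independence (our own sketch; it assumes three better-than-chance binary classifiers whose errors are independent given the true label and symmetric across classes). Pairwise agreement rates alone then determine each classifier's error rate, with no labels needed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_err = 200_000, np.array([0.10, 0.25, 0.30])

y = rng.integers(0, 2, n)
# Three classifiers with conditionally independent, label-symmetric errors.
preds = np.array([np.where(rng.random(n) < e, 1 - y, y) for e in true_err])

# Agreement rate a_ij satisfies 2*a_ij - 1 = (1 - 2e_i)(1 - 2e_j).
agree = lambda i, j: (preds[i] == preds[j]).mean()
p01, p02, p12 = (2 * agree(0, 1) - 1, 2 * agree(0, 2) - 1, 2 * agree(1, 2) - 1)

c = np.zeros(3)
c[0] = np.sqrt(p01 * p02 / p12)   # c_i = 1 - 2 e_i, positive for e_i < 1/2
c[1] = p01 / c[0]
c[2] = p02 / c[0]

est_err = (1 - c) / 2             # recovered without ever touching y
print("true errors:     ", true_err)
print("estimated errors:", est_err.round(3))
```

The system of products c_i c_j is exactly a method-of-moments estimator: three observed moments (the agreement rates) pin down three unknowns (the error rates).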
1606.06565#68
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
68
where Q is some symmetric matrix which satisfies Σ_r Q_{rs} = µ_s. Let us start with the weakly correlated case. In this case,

P(a, b) = Σ_r µ_r (G^{Δ/2})_{ar} (G^{Δ/2})_{br}, (C1)

since, as we have discussed in the main text, the parent node has the stationary distribution µ and G^{Δ/2} gives the conditional probabilities for transitioning from the parent node to the nodes at the bottom of the tree that we are interested in. We now employ our favorite trick of diagonalizing G and then writing

(G^{Δ/2})_{ij} = µ_i + λ_2^{Δ/2} A_{ij}. (C2)

In the strongly correlated case we similarly write

(G^{Δ/2−1})_{ij} = µ_i + ε A_{ij}, (C8)

where ε = λ_2^{Δ/2−1}. This gives

P(a, b) = Σ_{rs} Q_{rs} (µ_a + ε A_{ar}) (µ_b + ε A_{bs})
= µ_a µ_b + Σ_{rs} Q_{rs} (µ_a ε A_{bs} + µ_b ε A_{ar} + ε² A_{ar} A_{bs})
= µ_a µ_b + ε² Σ_{rs} Q_{rs} A_{ar} A_{bs}, (C9)

where the cross terms vanish because Σ_r Q_{rs} = µ_s and Σ_s A_{bs} µ_s = 0.
1606.06737#68
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
69
Training on multiple distributions. One could also train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution. One of the authors has found this to be the case, for instance, in the context of automated speech recognition systems [7]. One could potentially combine this with any of the ideas above, and/or take an engineering approach of simply trying to develop design methodologies that consistently allow one to collect a representative set of training sets and from this build a model that consistently generalizes to novel distributions. Even for this engineering approach, it seems important to be able to detect when one is in a situation that was not covered by the training data and to respond appropriately, and to have methodologies for adequately stress-testing the model with distributions that are sufficiently different from the set of training distributions.
1606.06565#69
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
69
which gives

P(a, b) = Σ_r µ_r (µ_a + λ_2^{Δ/2} A_{ar}) (µ_b + λ_2^{Δ/2} A_{br})
= Σ_r µ_r (µ_a µ_b + µ_a λ_2^{Δ/2} A_{br} + µ_b λ_2^{Δ/2} A_{ar} + λ_2^Δ A_{ar} A_{br}). (C3)

Now note that Σ_r A_{ar} µ_r = 0, since µ is an eigenvector with eigenvalue 1 of G^{Δ/2}. Hence this simplifies the above to just

P(a, b) = µ_a µ_b + λ_2^Δ Σ_r µ_r A_{ar} A_{br}, (C4)

and the rational mutual information follows from its definition together with the fact that Σ_i A_{ij} = 0, as in (C5) below. In the strongly correlated case, defining Σ_{rs} Q_{rs} A_{ar} A_{bs} ≡ (µ_a µ_b)^{1/2} N_{ab} and noting that Σ_{ab} (µ_a µ_b)^{1/2} N_{ab} = 0, we have

I_R + 1 = Σ_{ab} (µ_a µ_b + ε² (µ_a µ_b)^{1/2} N_{ab})² / (µ_a µ_b) = Σ_{ab} [(µ_a µ_b)^{1/2} + ε² N_{ab}]² = 1 + ε⁴ ||N||², (C10)

which gives

I_R = λ_2^{2Δ−4} ||N||². (C11)

In either the strongly or the weakly correlated case, note that N is asymptotically constant. We can write the second largest eigenvalue |λ_2|² = q^{−k_2/2}, where q is the branching factor,
1606.06737#69
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
70
How to respond when out-of-distribution. The approaches described above focus on detecting when a model is unlikely to make good predictions on a new distribution. An important related question is what to do once the detection occurs. One natural approach would be to ask humans for information, though in the context of complex structured output tasks it may be unclear a priori what question to ask, and in time-critical situations asking for information may not be an option. For the former challenge, there has been some recent promising work on pinpointing aspects of a structure that a model is uncertain about [162, 81], as well as obtaining calibration in structured output settings [83], but we believe there is much work yet to be done. For the latter challenge, there is also relevant work based on reachability analysis [93, 100] and robust policy improvement [164], which provide potential methods for deploying conservative policies in situations of uncertainty; to our knowledge, this work has not yet been combined with methods for detecting out-of-distribution failures of a model. Beyond the structured output setting, for agents that can act in an environment (such as RL agents),
1606.06565#70
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
70
I_R + 1 = Σ_{ab} (µ_a µ_b + λ_2^Δ Σ_r µ_r A_{ar} A_{br})² / (µ_a µ_b)
= Σ_{ab} [(µ_a µ_b)^{1/2} + λ_2^Δ N_{ab}]²
= 1 + λ_2^{2Δ} ||N||², (C5)

where N_{ab} = (µ_a µ_b)^{−1/2} Σ_r µ_r A_{ar} A_{br} is a symmetric matrix and || · || denotes the Frobenius norm. Hence

I_R = λ_2^{2Δ} ||N||². (C6)

Converting tree distance Δ to sequence separation |i − j| as in the main text,

I_R ∝ q^{−Δ k_2/2} ∝ q^{−k_2 log_q |i−j|} = C|i − j|^{−k_2}. (C12)

Behold the glorious power law! We note that the normalization C must be a function of the form C = m_2 f (λ_2, q), where m_2 is the multiplicity of the eigenvalue λ_2. We evaluate this normalization in the next section.

As before, this result can be sharpened if we assume that G satisfies detailed balance G_{mn} = e^{K_m/2} S_{mn} e^{−K_n/2}, where S is a symmetric matrix and the K_n are just numbers. Let us only consider the weakly correlated case. By the spectral theorem, we diagonalize S into an orthonormal basis of eigenvectors v. As before, G and S share the same eigenvalues. Proceeding,
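The step from exponential decay in tree depth Δ to a power law in separation |i − j| is just the change of variables Δ ≈ 2 log_q |i − j|. A short numerical check (our own sketch, with arbitrary illustrative values of q and λ_2):

```python
import numpy as np

q, lam2 = 2, 0.8                          # branching factor and second eigenvalue (illustrative)
k2 = -4 * np.log(abs(lam2)) / np.log(q)   # from |lam2|^2 = q^(-k2/2)

d = np.arange(4, 4097)                    # sequence separations |i - j|
Delta = 2 * np.log(d) / np.log(q)         # tree distance between the two leaves
IR = abs(lam2) ** (2 * Delta)             # I_R ~ lam2^(2 Delta), dropping constants

slope = np.polyfit(np.log(d), np.log(IR), 1)[0]
print(f"log-log slope = {slope:.3f},  -k2 = {-k2:.3f}")   # the two agree exactly
```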
1606.06737#70
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
71
Beyond the structured output setting, for agents that can act in an environment (such as RL agents), information about the reliability of percepts in uncertain situations seems to have great potential value. In sufficiently rich environments, these agents may have the option to gather information that clarifies the percept (e.g. if in a noisy environment, move closer to the speaker), engage in low-stakes experimentation when uncertainty is high (e.g. try a potentially dangerous chemical reaction in a controlled environment), or seek experiences that are likely to help expose the perception system to the relevant distribution (e.g. practice listening to accented speech). Humans utilize such information routinely, but to our knowledge current RL techniques make little effort to do so, perhaps because popular RL environments are typically not rich enough to require such subtle management of uncertainty. Properly responding to out-of-distribution information thus seems to the authors like an exciting and (as far as we are aware) mostly unexplored challenge for next generation RL systems.
1606.06565#71
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
71
P(a, b) = (1/Z) Σ_v λ_v^Δ v_a v_b e^{−(K_a+K_b)/2}, (C13) where Z is a constant that ensures that P is properly normalized. Let us move full steam ahead to compute the rational mutual information: 1 + I_R = Σ_{ab} P(a, b)^2 / (P(a)P(b)) = Σ_{ab} e^{K_a+K_b} (Σ_v λ_v^Δ v_a v_b e^{−(K_a+K_b)/2})^2 = Σ_{ab} (Σ_v λ_v^Δ v_a v_b)^2. (C14) This is just the squared Frobenius norm of the symmetric matrix H = Σ_v λ_v^Δ v v^T. The eigenvalues of this matrix can be read off, so we have I_R = Σ_{i≥2} |λ_i|^{2Δ}. (C15) Hence we have computed the rational mutual information exactly as a function of Δ. In the next section, we use this result to compute the mutual information as a function of separation |i − j|, which will lead to a precise evaluation of the normalization constant C in the equation I(a, b) ≈ C|i − j|^{−k_2}. (C16) # 1. Detailed evaluation of the normalization
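The eigenvalue formula (C15) is easy to check numerically. Below is a minimal sketch (assuming NumPy; the 2-state matrix G is an illustrative choice, not taken from the paper) that computes I_R directly from the spectrum of a stochastic matrix:

```python
import numpy as np

# Illustrative 2-state generator matrix G for a q = 2 branching process;
# rows are parent symbols, columns are child symbols (rows sum to 1).
G = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Sorted absolute eigenvalues; for a stochastic matrix the leading one is 1,
# and the subleading ones control the decay of correlations with tree depth.
lam = np.sort(np.abs(np.linalg.eigvals(G)))[::-1]

def rational_mi(Delta):
    """Rational mutual information at tree separation Delta (Eq. C15):
    I_R = sum over i >= 2 of |lambda_i|**(2*Delta)."""
    return float(np.sum(lam[1:] ** (2 * Delta)))

for Delta in (1, 2, 4, 8):
    print(Delta, rational_mi(Delta))
```

For this G, λ_2 = 0.8, so the output shrinks by a factor of 0.8^2 = 0.64 each time Δ increases by one: exponential in Δ, and hence a power law in the horizontal separation |i − j| ∼ q^{Δ/2}.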
1606.06737#71
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
72
A unifying view: counterfactual reasoning and machine learning with contracts. Some of the authors have found two viewpoints to be particularly helpful when thinking about problems related to out-of-distribution prediction. The first is counterfactual reasoning [106, 129, 117, 30], where one asks “what would have happened if the world were different in a certain way?” In some sense, distributional shift can be thought of as a particular type of counterfactual, and so understanding counterfactual reasoning is likely to help in making systems robust to distributional shift. We are excited by recent work applying counterfactual reasoning techniques to machine learning problems [30, 120, 151, 160, 77, 137], though much work remains to scale these techniques to high-dimensional and highly complex settings.
1606.06565#72
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
72
# 1. Detailed evaluation of the normalization For simplicity, we specialize to the case q = 2, although our results can surely be extended to q > 2. Define δ = Δ/2 and d = |i − j|. We wish to compute the expected value of I_R conditioned on knowledge of d. By Bayes' rule, p(δ|d) ∝ p(d|δ)p(δ). Now p(d|δ) is given by a triangle distribution with mean 2^{δ−1} and compact support (0, 2^δ). On the other hand, p(δ) ∝ 2^δ for δ ≤ δ_max, and p(δ) = 0 for δ ≤ 0 or δ > δ_max. This new constant δ_max serves two purposes. First, it can be thought of as a way to regulate the probability distribution p(δ) so that it is normalizable; at the end of the calculation we formally take δ_max → ∞ without obstruction. Second, if we are interested in empirically sampling the mutual information, we cannot generate an infinite string, so setting δ_max to a finite value accounts for the fact that our generated string may be finite.
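A short numerical sketch of this Bayes-rule step (assuming NumPy; the grid of δ values and the cutoff δ_max are arbitrary illustrative choices):

```python
import numpy as np

def triangle_pdf(d, delta):
    """p(d | delta): triangle distribution on (0, 2**delta) with
    peak (and mean) at 2**(delta - 1), as described in the text."""
    w = 2.0 ** delta
    if d <= 0 or d >= w:
        return 0.0
    return 4 * d / w**2 if d <= w / 2 else 4 * (w - d) / w**2

def posterior(d, delta_max=30):
    """p(delta | d) ∝ p(d | delta) * p(delta), with prior p(delta) ∝ 2**delta."""
    deltas = np.arange(1, delta_max + 1)
    weights = np.array([triangle_pdf(d, dd) * 2.0 ** dd for dd in deltas])
    return deltas, weights / weights.sum()

deltas, p = posterior(d=10)
for dd, pp in zip(deltas, p):
    if pp > 1e-3:
        print(dd, round(pp, 4))
```

Note that the posterior mass sits just above δ = log_2 d and then decays geometrically: the prior's exponential growth in δ is offset by the 1/2^δ normalization of the triangle likelihood, which is ultimately why the conditional expectation of 2^{−k_2 δ} scales as d^{−k_2}.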
1606.06737#72
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
73
The second perspective is machine learning with contracts — in this perspective, one would like to construct machine learning systems that satisfy a well-defined contract on their behavior in analogy with the design of software systems [135, 28, 89]. [135] enumerates a list of ways in which existing machine learning systems fail to do this, and the problems this can cause for deployment and maintenance of machine learning systems at scale. The simplest and to our mind most important failure is the extremely brittle implicit contract in most machine learning systems, namely that they only necessarily perform well if the training and test distributions are identical. This condition is difficult to check and rare in practice, and it would be valuable to build systems that perform well under weaker contracts that are easier to reason about. Partially specified models offer one approach to this — rather than requiring the distributions to be identical, we only need them to match on the pieces of the distribution that are specified in the model. Reachability analysis [93, 100] and model repair [58] provide other avenues for obtaining better contracts — in reachability analysis, we optimize performance subject to the condition that a safe region can always be reached by a known conservative policy, and in model repair we alter a trained model to ensure that certain desired safety properties hold.
1606.06565#73
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
73
[Plot residue removed; axes: rational mutual information and residual from power law vs. distance between symbols d(X, Y).] FIG. 5: Decay of rational mutual information with separation for a binary sequence from a numerical simulation with probabilities p(0|0) = p(1|1) = 0.9 and a branching factor q = 2. The blue curve is not a fit to the simulated data but rather an analytic calculation. The smooth power law displayed on the left is what is predicted by our “continuum” approximation. The very small discrepancies (right) are not random but are fully accounted for by more involved exact calculations with discrete sums. We now assume d ≫ 1 so that we can swap discrete sums with integrals. We can then compute the conditional expectation value of 2^{−k_2 δ}. This yields ∫ 2^{−k_2 δ} p(δ|d) dδ = (1 − 2^{−k_2}) d^{−k_2} / (k_2(k_2 + 1) log 2), (C17) or equivalently, C_{q=2} = (1 − |λ_2|^4) / (k_2(k_2 + 1) log 2). (C18) It turns out it is also possible to compute the answer without making any approximations with integrals:
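The continuum approximation can be compared directly against the exact discrete result (C19) quoted next. A small sketch (assuming NumPy, taking k_2 = −4 log_2 λ_2 as implied by (C12), with λ_2 = 0.8 as in Figure 5, and relying on our reconstruction of (C19)):

```python
import numpy as np

k2 = -4 * np.log2(0.8)  # decay exponent for lambda_2 = 0.8, q = 2 (Fig. 5)

def IR_continuum(d):
    """Continuum approximation (C17)-(C18): I_R ≈ C * d**(-k2)."""
    C = (1 - 2.0 ** (-k2)) / (k2 * (k2 + 1) * np.log(2))
    return C * d ** (-k2)

def IR_exact(d):
    """Exact discrete sum (C19), with m = ceil(log2(d))."""
    m = int(np.ceil(np.log2(d)))
    num = (2.0 ** (k2 + 1) - 1) * 2.0 ** m - 2 * d * (2.0 ** k2 - 1)
    return 2.0 ** (-(k2 + 1) * m) * num / (2.0 ** (k2 + 1) - 1)

for d in (3, 5, 10, 20, 50):
    print(d, round(IR_continuum(d), 5), round(IR_exact(d), 5))
```

The small, d-dependent differences between the two columns are the structured residuals shown in the right panel of Figure 5.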
1606.06737#73
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
74
Summary. There are a variety of approaches to building machine learning systems that robustly perform well when deployed on novel test distributions. One family of approaches is based on assuming a well-specified model; in this case, the primary obstacles are the difficulty of building well-specified models in practice, an incomplete picture of how to maintain uncertainty on novel distributions in the presence of finite training data, and the difficulty of detecting when a model is mis-specified. Another family of approaches only assumes a partially specified model; this approach is potentially promising, but it currently suffers from a lack of development in the context of machine learning, since most of the historical development has been by the field of econometrics; there is also a question of whether partially specified models are fundamentally constrained to simple situations and/or conservative predictions, or whether they can meaningfully scale to the complex situations demanded by modern machine learning applications. Finally, one could try to train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution; for this approach it seems particularly important to stress-test the learned model with distributions that are substantially different from
1606.06565#74
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
74
It turns out it is also possible to compute the answer without making any approximations with integrals: E[I_R | d] = 2^{−(k_2+1)⌈log_2 d⌉} ((2^{k_2+1} − 1) 2^{⌈log_2 d⌉} − 2d (2^{k_2} − 1)) / (2^{k_2+1} − 1). (C19) The resulting predictions are compared in Figure 5. # Appendix D: Estimating (rational) mutual information from empirical data Estimating mutual information or rational mutual information from empirical data is fraught with subtleties. It is well known that the naive estimate of the Shannon entropy, Ŝ = −Σ_i (N_i/N) log(N_i/N), is biased, generally underestimating the true entropy from finite samples. We use the estimator advocated by Grassberger [59]: Ŝ = ψ(N) − (1/N) Σ_{i=1}^{K} N_i ψ(N_i), (D1) where ψ(x) is the digamma function, N = Σ_i N_i, and K is the number of characters in the alphabet. The mutual information can then be estimated by Î(X, Y) = Ŝ(X) + Ŝ(Y) − Ŝ(X, Y). The variance of this estimator is then the sum of the variances var(Î) = varEnt(X) + varEnt(Y) + varEnt(X, Y), (D2) where the varEntropy is defined as
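A sketch of this estimation pipeline (assuming NumPy and SciPy; the ψ(N) leading term follows our reconstruction of (D1) above, and the example counts are made up):

```python
import numpy as np
from scipy.special import digamma

def entropy_grassberger(counts):
    """Entropy estimator of Eq. (D1), in nats:
    S_hat = psi(N) - (1/N) * sum_i N_i * psi(N_i)."""
    c = np.asarray([n for n in counts if n > 0], dtype=float)
    N = c.sum()
    return float(digamma(N) - np.sum(c * digamma(c)) / N)

def mi_estimate(joint_counts):
    """I_hat(X,Y) = S_hat(X) + S_hat(Y) - S_hat(X,Y) from a joint count table."""
    J = np.asarray(joint_counts, dtype=float)
    return (entropy_grassberger(J.sum(axis=1))    # S(X)
            + entropy_grassberger(J.sum(axis=0))  # S(Y)
            - entropy_grassberger(J.ravel()))     # S(X,Y)

# Hypothetical pair counts N_ab for a binary alphabet (K = 2).
counts = np.array([[40, 10],
                   [12, 38]])
print(mi_estimate(counts))
```

Because the naive plug-in estimator systematically underestimates entropies, using it inside Î(X, Y) would systematically overestimate the mutual information; the digamma-based estimator largely removes this bias.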
1606.06737#74
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06737
75
(D2) where the varEntropy is defined as varEnt(X) = var(−log p(X)), (D3) where we can again replace logarithms with the digamma function ψ. The uncertainty after N measurements is then ≈ √(var(Î)/N). [1] P. Bak, Physical Review Letters 59, 381 (1987). [2] P. Bak, C. Tang, and K. Wiesenfeld, Physical Review A 38, 364 (1988). [3] K. Linkenkaer-Hansen, V. V. Nikouline, J. M. Palva, and R. J. Ilmoniemi, The Journal of Neuroscience 21, 1370 (2001), URL http://www.jneurosci.org/content/21/4/1370.abstract. [4] D. J. Levitin, P. Chordia, and V. Menon, Proceedings of the National Academy of Sciences 109, 3716 (2012).
1606.06737#75
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
76
Potential Experiments: Speech systems frequently exhibit poor calibration when they go out-of-distribution, so a speech system that “knows when it is uncertain” could be one possible demonstration project. To be specific, the challenge could be: train a state-of-the-art speech system on a standard dataset [116] that gives well-calibrated results (if not necessarily good results) on a range of other test sets, like noisy and accented speech. Current systems not only perform poorly on these test sets when trained only on small datasets, but are usually overconfident in their incorrect transcriptions. Fixing this problem without harming performance on the original training set would be a valuable achievement, and would obviously have practical value. More generally, it would be valuable to design models that could consistently estimate (bounds on) their performance on novel test distributions. If a single methodology could consistently accomplish this for a wide variety of tasks (including not just speech but e.g. sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire confidence in the reliability of that methodology for handling novel
1606.06565#76
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
76
[4] D. J. Levitin, P. Chordia, and V. Menon, Proceedings of the National Academy of Sciences 109, 3716 (2012). [5] M. Tegmark, ArXiv e-prints (2014), 1401.1219. [6] B. Manaris, J. Romero, P. Machado, D. Krehbiel, T. Hirzel, W. Pharr, and R. B. Davis, Computer Music Journal 29, 55 (2005). [7] C. Peng, S. Buldyrev, A. Goldberger, S. Havlin, F. Sciortino, M. Simons, H. Stanley, et al., Nature 356, 168 (1992). [8] R. N. Mantegna, S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, M. Simons, and H. E. Stanley, Physical Review Letters 73, 3169 (1994). [9] W. Ebeling and T. Pöschel, EPL (Europhysics Letters) 26, 241 (1994), cond-mat/0204108. [10] W. Ebeling and A. Neiman, Physica A: Statistical Mechanics and its Applications 215, 233 (1995).
1606.06737#76
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
77
sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire confidence in the reliability of that methodology for handling novel inputs. Note that estimating performance on novel distributions has additional practical value in allowing us to then potentially adapt the model to that new situation. Finally, it might also be valuable to create an environment where an RL agent must learn to interpret speech as part of some larger task, and to explore how to respond appropriately to its own estimates of its transcription error.
1606.06565#77
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
77
[10] W. Ebeling and A. Neiman, Physica A: Statistical Mechanics and its Applications 215, 233 (1995). [11] E. G. Altmann, G. Cristadoro, and M. Degli Esposti, Proceedings of the National Academy of Sciences 109, 11582 (2012). [12] M. A. Montemurro and P. A. Pury, Fractals 10, 451 (2002). [13] G. Deco and B. Schürmann, Information dynamics: foundations and applications (Springer Science & Business Media, 2012). To compare our theoretical results with experiment in Fig. 4, we must measure the rational mutual information for a binary sequence from (simulated) data. For a binary sequence with covariance coefficient ρ(X, Y) = P(1, 1) − P(1)^2, the rational mutual information is I_R(X, Y) = (ρ(X, Y) / (P(1) − P(1)^2))^2. (D4) This was essentially calculated previously by considering the limit where the covariance coefficient is small, ρ ≪ 1. In their paper, there is an erroneous factor of 2. To estimate the covariance ρ(d) as a function of d (sometimes confusingly referred to as the correlation function), we use the unbiased estimator for a data sequence {x_1, x_2, ..., x_n}:
1606.06737#77
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
78
# 8 Related Efforts As mentioned in the introduction, several other communities have thought broadly about the safety of AI systems, both within and outside of the machine learning community. Work within the machine learning community on accidents in particular was discussed in detail above, but here we very briefly highlight a few other communities doing work that is broadly related to the topic of AI safety. • Cyber-Physical Systems Community: An existing community of researchers studies the security and safety of systems that interact with the physical world. Illustrative of this work is an impressive and successful effort to formally verify the entire federal aircraft collision avoidance system [75, 92]. Similar work includes traffic control algorithms [101] and many other topics. However, to date this work has not focused much on modern machine learning systems, where formal verification is often not feasible.
1606.06565#78
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
78
ρ̂(d) = (1/(n − d − 1)) Σ_{i=1}^{n−d} (x_i − x̄)(x_{i+d} − x̄). (D5) However, it is important to note that estimating the covariance function ρ by averaging and then squaring will generically yield a biased estimate; we circumvent this by simply estimating I_R(X, Y)^{1/2} ∝ ρ(X, Y). ness Media, 2012). [14] G. K. Zipf, Human behavior and the principle of least effort (Addison-Wesley Press, 1949). [15] H. W. Lin and A. Loeb, Physical Review E 93, 032306 (2016). [16] L. Pietronero, E. Tosatti, V. Tosatti, and A. Vespignani, Physica A: Statistical Mechanics and its Applications 293, 297 (2001), ISSN 0378-4371, URL http://www.sciencedirect.com/science/article/pii/S0378437100006336. [17] M. Kardar, Statistical physics of fields (Cambridge University Press, 2007). [18] URL ftp://ftp.ncbi.nih.gov/genomes/Homo_sapiens/.
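A sketch of this procedure (assuming NumPy; the depth and transition probabilities mirror the Fig. 5 simulation, but the recursive generator below is our own minimal reimplementation, not the authors' code):

```python
import numpy as np

def cov_estimate(x, d):
    """Unbiased covariance estimator of Eq. (D5) at separation d."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    return float(np.sum((x[:n - d] - xbar) * (x[d:] - xbar)) / (n - d - 1))

# Binary branching sequence with p(0|0) = p(1|1) = 0.9 and q = 2: each
# symbol is replaced by two children drawn from G, repeated `depth` times.
rng = np.random.default_rng(0)
G = np.array([[0.9, 0.1],
              [0.1, 0.9]])
seq = [0]
for _ in range(14):                      # 2**14 = 16384 symbols
    seq = [rng.choice(2, p=G[s]) for s in seq for _ in range(2)]
x = np.array(seq, dtype=float)

# Report I_R**(1/2) ∝ rho(d), as in the text, rather than averaging a
# squared covariance; by (D4), I_R**(1/2) = |rho(d)| / (P(1) - P(1)**2).
for d in (1, 4, 16, 64):
    print(d, abs(cov_estimate(x, d)) / (x.mean() - x.mean() ** 2))
```

Working with the square root of I_R sidesteps the bias noted above, since an average of ρ̂ over realizations is unbiased while an average of ρ̂^2 is not.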
1606.06737#78
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
79
• Futurist Community: A cross-disciplinary group of academics and non-profits has raised concern about the long term implications of AI [27, 167], particularly superintelligent AI. The Future of Humanity Institute has studied this issue particularly as it relates to future AI sys- tems learning or executing humanity’s preferences [48, 43, 14, 12]. The Machine Intelligence Research Institute has studied safety issues that may arise in very advanced AI [57, 56, 36, 154, 142], including a few mentioned above (e.g., wireheading, environmental embedding, counter- factual reasoning), albeit at a more philosophical level. To date, they have not focused much on applications to modern machine learning. By contrast, our focus is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term. • Other Calls for Work on Safety: There have been other public documents within the research community pointing out the importance of work on AI safety. A 2015 Open Letter [8] signed by many members of the research community states the importance of “how to reap [AI’s] benefits while avoiding the potential pitfalls.” [130] propose research priorities for
1606.06565#79
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
79
[18] URL ftp://ftp.ncbi.nih.gov/genomes/Homo_sapiens/. [19] URL http://www.jsbach.net/midi/midi_solo_violin.html. [20] URL http://prize.hutter1.net/. [21] URL http://www.lexique.org/public/lisezmoi.corpatext.htm. [22] A. M. Turing, Mind 59, 433 (1950). [23] D. Ferrucci, E. Brown, J. Fan, D. Gondek, A. A. Kalyanpur, A. Lally, J. W. Murdock, E. Nyberg, J. Prager, et al., AI Magazine 31, 59 (2010). [24] M. Campbell, A. J. Hoane, and F.-h. Hsu, Artificial Intelligence 134, 57 (2002).
1606.06737#79
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
80
robust and beneficial artificial intelligence, and includes several other topics in addition to a (briefer) discussion of AI-related accidents. [161], writing over 20 years ago, proposes that the community look for ways to formalize Asimov’s first law of robotics (robots must not harm humans), and focuses mainly on classical planning. Finally, two of the authors of this paper have written informally about safety in AI systems [146, 34]; these postings provided inspiration for parts of the present document. • Related Problems in Safety: A number of researchers in machine learning and other fields have begun to think about the social impacts of AI technologies. Aside from work directly on accidents (which we reviewed in the main document), there is also substantial work on other topics, many of which are closely related to or overlap with the issue of accidents. A thorough overview of all of this work is beyond the scope of this document, but we briefly list a few emerging themes: • Privacy: How can we ensure privacy when applying machine learning to sensitive data sources such as medical data? [76, 1] • Fairness: How can we make sure ML systems don’t discriminate? [3, 168, 6, 46, 119, 169]
1606.06565#80
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
80
[25] V. Mnih, Nature 518, 529 (2015). [26] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Nature 529, 484 (2016), URL http://dx.doi.org/10.1038/nature16961. [27] N. Chomsky, Information and Control 2, 137 (1959). [28] Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush (2015), 1508.06615, URL https://arxiv.org/abs/1508.06615. [29] A. Graves, ArXiv e-prints (2013), 1308.0850. [30] A. Graves, A.-r. Mohamed, and G. Hinton, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2013), pp. 6645–6649. [31] R. Collobert and J. Weston, in Proceedings of the 25th International Conference on Machine Learning (ACM, 2008), pp. 160–167.
1606.06737#80
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
81
• Fairness: How can we make sure ML systems don’t discriminate? [3, 168, 6, 46, 119, 169] • Security: What can a malicious adversary do to an ML system? [149, 96, 97, 115, 108, 19] • Abuse:5 How do we prevent the misuse of ML systems to attack or harm people? [16] • Transparency: How can we understand what complicated ML systems are doing? [112, 166, 105, 109] • Policy: How do we predict and respond to the economic and social consequences of ML? [32, 52, 15, 33] We believe that research on these topics has both urgency and great promise, and that fruitful intersection is likely to exist between these topics and the topics we discuss in this paper. # 9 Conclusion This paper analyzed the problem of accidents in machine learning systems and particularly reinforcement learning agents, where an accident is defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We presented five possible research problems related to accident risk and for each we discussed possible approaches that are highly amenable to concrete experimental work.
1606.06565#81
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
81
[31] R. Collobert and J. Weston, in Proceedings of the 25th International Conference on Machine Learning (ACM, 2008), pp. 160–167. [32] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu (????). [33] J. Schmidhuber, Neural Networks 61, 85 (2015). [34] Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015). [35] S. Hochreiter and J. Schmidhuber, Neural Computation 9, 1735 (1997). [36] S. M. Shieber, in The Formal Complexity of Natural Language (Springer, 1985), pp. 320–334. [37] A. V. Anisimov, Cybernetics and Systems Analysis 7, 594 (1971). [38] C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review 5, 3 (1948).
1606.06737#81
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
82
With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm. 5Note that “security” differs from “abuse” in that the former involves attacks against a legitimate ML system by an adversary (e.g. a criminal tries to fool a face recognition system), while the latter involves attacks by an ML system controlled by an adversary (e.g. a criminal trains a “smart hacker” system to break into a website). 21 # Acknowledgements
1606.06565#82
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
82
[38] C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review 5, 3 (1948). [39] S. Kullback and R. A. Leibler, Ann. Math. Statist. 22, 79 (1951), URL http://dx.doi.org/10.1214/aoms/1177729694. [40] T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 2012). [41] L. R. Rabiner, Proceedings of the IEEE 77, 257 (1989). [42] R. C. Carrasco and J. Oncina, in International Colloquium on Grammatical Inference (Springer, 1994), pp. 139–152. [43] S. Ginsburg, The Mathematical Theory of Context Free Languages (McGraw-Hill Book Company, 1966). [44] T. L. Booth, in Switching and Automata Theory, 1969, IEEE Conference Record of 10th Annual Symposium on (IEEE, 1969), pp. 74–81. [45] T. Huang and K. Fu, Information Sciences 3, 201 (1971), ISSN 0020-0255, URL http://www.sciencedirect.com/science/article/pii/S0020025571800075.
1606.06737#82
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
83
21 # Acknowledgements We thank Shane Legg, Peter Norvig, Ilya Sutskever, Greg Corrado, Laurent Orseau, David Krueger, Rif Saurous, David Andersen, and Victoria Krakovna for detailed feedback and suggestions. We would also like to thank Geoffrey Irving, Toby Ord, Quoc Le, Greg Wayne, Daniel Dewey, Nick Beckstead, Holden Karnofsky, Chelsea Finn, Marcello Herreshoff, Alex Donaldson, Jared Kaplan, Greg Brockman, Wojciech Zaremba, Ian Goodfellow, Dylan Hadfield-Menell, Jessica Taylor, Blaise Aguera y Arcas, David Berlekamp, Aaron Courville, and Jeff Dean for helpful discussions and comments. Paul Christiano was supported as part of the Future of Life Institute FLI-RFP-AI1 program, grant #2015–143898. In addition a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI. Finally, we thank the Google Brain team for providing a supportive environment and encouraging us to publish this work. # References
1606.06565#83
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
83
[46] K. Lari and S. J. Young, Computer speech & language 4, 35 (1990). [47] D. Harlow, S. H. Shenker, D. Stanford, and L. Susskind, Physical Review D 85, 063516 (2012). [48] L. Van Hove, Physica 16, 137 (1950). [49] J. A. Cuesta and A. Sánchez, Journal of Statistical Physics 115, 869 (2004), cond-mat/0306354. [50] G. Evenbly and G. Vidal, Journal of Statistical Physics 145, 891 (2011). [51] A. M. Saxe, J. L. McClelland, and S. Ganguli, arXiv preprint arXiv:1312.6120 (2013). [52] M. Mahoney, Large text compression benchmark. [53] A. Karpathy, J. Johnson, and L. Fei-Fei, ArXiv e-prints (2015), 1506.02078. [54] S.-i. Amari, in Differential-Geometrical Methods in Statistics (Springer, 1985), pp. 66–103. [55] T. Morimoto, Journal of the Physical Society of Japan 18, 328 (1963).
1606.06737#83
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
84
# References [1] Martin Abadi et al. “Deep Learning with Differential Privacy”. In: (in press (2016)). [2] Pieter Abbeel and Andrew Y Ng. “Exploration and apprenticeship learning in reinforcement learning”. In: Proceedings of the 22nd international conference on Machine learning. ACM. 2005, pp. 1–8. [3] Julius Adebayo, Lalana Kagal, and Alex Pentland. The Hidden Cost of Efficiency: Fairness and Discrimination in Predictive Modeling. 2015. [4] Alekh Agarwal et al. “Taming the monster: A fast and simple algorithm for contextual bandits”. In: (2014). [5] Hana Ajakan et al. “Domain-adversarial neural networks”. In: arXiv preprint arXiv:1412.4446 (2014). [6] Ifeoma Ajunwa et al. “Hiring by algorithm: predicting and preventing disparate impact”. In: Available at SSRN 2746078 (2016). [7] Dario Amodei et al. “Deep Speech 2: End-to-End Speech Recognition in English and Mandarin”. In: arXiv preprint arXiv:1512.02595 (2015).
1606.06565#84
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06737
84
[55] T. Morimoto, Journal of the Physical Society of Japan 18, 328 (1963). [56] I. Csiszár et al., Studia Sci. Math. Hungar. 2, 299 (1967). [57] S. M. Ali and S. D. Silvey, Journal of the Royal Statistical Society. Series B (Methodological) pp. 131–142 (1966). [58] C. W. Gardiner et al., Handbook of stochastic methods, vol. 3 (Springer Berlin, 1985). [59] P. Grassberger, ArXiv Physics e-prints (2003), physics/0307138. [60] W. Li, Journal of Statistical Physics 60, 823 (1990), ISSN 1572-9613, URL http://dx.doi.org/10.1007/BF01025996.
1606.06737#84
Criticality in Formal Languages and Statistical Physics
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks.
http://arxiv.org/pdf/1606.06737
Henry W. Lin, Max Tegmark
cond-mat.dis-nn, cs.CL
Replaced to match final published version. Discussion improved, references added
Entropy, 19, 299 (2017)
cond-mat.dis-nn
20160621
20170823
[]
1606.06565
85
[8] An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence. Open Letter. Signed by 8,600 people; see attached research agenda. 2015. [9] Animashree Anandkumar, Daniel Hsu, and Sham M Kakade. “A method of moments for mixture models and hidden Markov models”. In: arXiv preprint arXiv:1203.0683 (2012). [10] Theodore W Anderson and Herman Rubin. “Estimation of the parameters of a single equation in a complete system of stochastic equations”. In: The Annals of Mathematical Statistics (1949), pp. 46–63. [11] Theodore W Anderson and Herman Rubin. “The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations”. In: The Annals of Mathematical Statistics (1950), pp. 570–582. [12] Stuart Armstrong. “Motivated value selection for artificial agents”. In: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.
1606.06565#85
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
86
[13] Stuart Armstrong. The mathematics of reduced impact: help needed. 2012. [14] Stuart Armstrong. Utility indifference. Tech. rep. Technical Report 2010-1. Oxford: Future of Humanity Institute, University of Oxford, 2010. [15] Melanie Arntz, Terry Gregory, and Ulrich Zierahn. “The Risk of Automation for Jobs in OECD Countries”. In: OECD Social, Employment and Migration Working Papers (2016). url: http://dx.doi.org/10.1787/5jlz9h56dvq7-en. [16] Autonomous Weapons: An Open Letter from AI & Robotics Researchers. Open Letter. Signed by 20,000+ people. 2015. [17] James Babcock, Janos Kramar, and Roman Yampolskiy. “The AGI Containment Problem”. In: The Ninth Conference on Artificial General Intelligence (2016). [18] Krishnakumar Balasubramanian, Pinar Donmez, and Guy Lebanon. “Unsupervised supervised learning ii: Margin-based classification without labels”. In: The Journal of Machine Learning Research 12 (2011), pp. 3119–3145.
1606.06565#86
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
87
[19] Marco Barreno et al. “The security of machine learning”. In: Machine Learning 81.2 (2010), pp. 121–148. [20] Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008. [21] Michèle Basseville. “Detecting changes in signals and systems—a survey”. In: Automatica 24.3 (1988), pp. 309–326. [22] F Berkenkamp, A Krause, and Angela P Schoellig. “Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics”. In: arXiv preprint arXiv:1602.04450 (2016). [23] Jon Bird and Paul Layzell. “The evolved radio and its implications for modelling the evolution of novel sensors”. In: Evolutionary Computation, 2002. CEC’02. Proceedings of the 2002 Congress on. Vol. 2. IEEE. 2002, pp. 1836–1841.
1606.06565#87
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
88
[24] John Blitzer, Mark Dredze, Fernando Pereira, et al. “Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification”. In: ACL. Vol. 7. 2007, pp. 440–447. [25] John Blitzer, Sham Kakade, and Dean P Foster. “Domain adaptation with coupled subspaces”. In: International Conference on Artificial Intelligence and Statistics. 2011, pp. 173–181. [26] Charles Blundell et al. “Weight uncertainty in neural networks”. In: arXiv preprint arXiv:1505.05424 (2015). [27] Nick Bostrom. Superintelligence: Paths, dangers, strategies. OUP Oxford, 2014. [28] Léon Bottou. “Two high stakes challenges in machine learning”. Invited talk at the 32nd International Conference on Machine Learning. 2015. [29] Léon Bottou et al. “Counterfactual Reasoning and Learning Systems”. In: arXiv preprint arXiv:1209.2355 (2012).
1606.06565#88
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
89
[30] Léon Bottou et al. “Counterfactual reasoning and learning systems: The example of computational advertising”. In: The Journal of Machine Learning Research 14.1 (2013), pp. 3207–3260. [31] Ronen I Brafman and Moshe Tennenholtz. “R-max-a general polynomial time algorithm for near-optimal reinforcement learning”. In: The Journal of Machine Learning Research 3 (2003), pp. 213–231. [32] Erik Brynjolfsson and Andrew McAfee. The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company, 2014. [33] Ryan Calo. “Open robotics”. In: Maryland Law Review 70.3 (2011). [34] Paul Christiano. AI Control. [Online; accessed 13-June-2016]. 2015. url: https://medium.com/ai-control. [35] Fabio Cozman and Ira Cohen. “Risks of semi-supervised learning”. In: Semi-Supervised Learning (2006), pp. 56–72. [36] Andrew Critch. “Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents”. In: (2016).
1606.06565#89
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
90
[36] Andrew Critch. “Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents”. In: (2016). [37] Christian Daniel et al. “Active reward learning”. In: Proceedings of Robotics Science & Systems. 2014. [38] Ernest Davis. “Ethical guidelines for a superintelligence.” In: Artif. Intell. 220 (2015), pp. 121–124. [39] Alexander Philip Dawid and Allan M Skene. “Maximum likelihood estimation of observer error-rates using the EM algorithm”. In: Applied statistics (1979), pp. 20–28. [40] Peter Dayan and Geoffrey E Hinton. “Feudal reinforcement learning”. In: Advances in neural information processing systems. Morgan Kaufmann Publishers. 1993, pp. 271–271. [41] Kalyanmoy Deb. “Multi-objective optimization”. In: Search methodologies. Springer, 2014, pp. 403–449. [42] Daniel Dewey. “Learning what to value”. In: Artificial General Intelligence. Springer, 2011, pp. 309–314.
1606.06565#90
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
91
[42] Daniel Dewey. “Learning what to value”. In: Artificial General Intelligence. Springer, 2011, pp. 309–314. [43] Daniel Dewey. “Reinforcement learning and the reward engineering principle”. In: 2014 AAAI Spring Symposium Series. 2014. [44] Pinar Donmez, Guy Lebanon, and Krishnakumar Balasubramanian. “Unsupervised supervised learning i: Estimating classification and regression errors without labels”. In: The Journal of Machine Learning Research 11 (2010), pp. 1323–1351. [45] Gregory Druck, Gideon Mann, and Andrew McCallum. “Learning from labeled features using generalized expectation criteria”. In: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2008, pp. 595–602. [46] Cynthia Dwork et al. “Fairness through awareness”. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM. 2012, pp. 214–226. [47] Bradley Efron. “Computers and the theory of statistics: thinking the unthinkable”. In: SIAM review 21.4 (1979), pp. 460–480.
1606.06565#91
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
92
[48] Owain Evans, Andreas Stuhlmüller, and Noah D Goodman. “Learning the preferences of ignorant, inconsistent agents”. In: arXiv preprint arXiv:1512.05832 (2015). [49] Tom Everitt and Marcus Hutter. “Avoiding wireheading with value reinforcement learning”. In: arXiv preprint arXiv:1605.03143 (2016). [50] Tom Everitt et al. “Self-Modification of Policy and Utility Function in Rational Agents”. In: arXiv preprint arXiv:1605.03142 (2016). [51] Chelsea Finn, Sergey Levine, and Pieter Abbeel. “Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization”. In: arXiv preprint arXiv:1603.00448 (2016). [52] Carl Benedikt Frey and Michael A Osborne. “The future of employment: how susceptible are jobs to computerisation”. In: Retrieved September 7 (2013), p. 2013. [53] Yarin Gal and Zoubin Ghahramani. “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning”. In: arXiv preprint arXiv:1506.02142 (2015).
1606.06565#92
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
93
[54] Joao Gama et al. “Learning with drift detection”. In: Advances in artificial intelligence–SBIA 2004. Springer, 2004, pp. 286–295. [55] Javier García and Fernando Fernández. “A Comprehensive Survey on Safe Reinforcement Learning”. In: Journal of Machine Learning Research 16 (2015), pp. 1437–1480. [56] Scott Garrabrant, Nate Soares, and Jessica Taylor. “Asymptotic Convergence in Online Learning with Unbounded Delays”. In: arXiv preprint arXiv:1604.05280 (2016). [57] Scott Garrabrant et al. “Uniform Coherence”. In: arXiv preprint arXiv:1604.05288 (2016). [58] Shalini Ghosh et al. “Trusted Machine Learning for Probabilistic Models”. In: Reliable Machine Learning in the Wild at ICML 2016 (2016). [59] Yolanda Gil et al. “Amplify scientific discovery with artificial intelligence”. In: Science 346.6206 (2014), pp. 171–172.
1606.06565#93
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
94
[60] Alec Go, Richa Bhayani, and Lei Huang. “Twitter sentiment classification using distant supervision”. In: CS224N Project Report, Stanford 1 (2009), p. 12. [61] Ian Goodfellow et al. “Generative adversarial nets”. In: Advances in Neural Information Processing Systems. 2014, pp. 2672–2680. [62] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and harnessing adversarial examples”. In: arXiv preprint arXiv:1412.6572 (2014). [63] Charles AE Goodhart. Problems of monetary management: the UK experience. Springer, 1984. [64] Alex Graves, Greg Wayne, and Ivo Danihelka. “Neural turing machines”. In: arXiv preprint arXiv:1410.5401 (2014). [65] Sonal Gupta. “Distantly Supervised Information Extraction Using Bootstrapped Patterns”. PhD thesis. Stanford University, 2015.
1606.06565#94
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
95
[65] Sonal Gupta. “Distantly Supervised Information Extraction Using Bootstrapped Patterns”. PhD thesis. Stanford University, 2015. [66] Dylan Hadfield-Menell et al. Cooperative Inverse Reinforcement Learning. 2016. [67] Dylan Hadfield-Menell et al. “The Off-Switch”. In: (2016). [68] Lars Peter Hansen. “Large sample properties of generalized method of moments estimators”. In: Econometrica: Journal of the Econometric Society (1982), pp. 1029–1054. [69] Lars Peter Hansen. “Nobel Lecture: Uncertainty Outside and Inside Economic Models”. In: Journal of Political Economy 122.5 (2014), pp. 945–987. [70] Mark Herbster and Manfred K Warmuth. “Tracking the best linear predictor”. In: The Journal of Machine Learning Research 1 (2001), pp. 281–309. [71] Bill Hibbard. “Model-based utility functions”. In: Journal of Artificial General Intelligence 3.1 (2012), pp. 1–24.
1606.06565#95
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
96
[72] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. “Kernel methods in machine learning”. In: The annals of statistics (2008), pp. 1171–1220. [73] Garud N Iyengar. “Robust dynamic programming”. In: Mathematics of Operations Research 30.2 (2005), pp. 257–280. [74] Ariel Jaffe, Boaz Nadler, and Yuval Kluger. “Estimating the accuracies of multiple classifiers without labeled data”. In: arXiv preprint arXiv:1407.7644 (2014). [75] Jean-Baptiste Jeannin et al. “A formally verified hybrid system for the next-generation airborne collision avoidance system”. In: Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2015, pp. 21–36. [76] Zhanglong Ji, Zachary C Lipton, and Charles Elkan. “Differential privacy and machine learning: A survey and review”. In: arXiv preprint arXiv:1412.7584 (2014).
1606.06565#96
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
97
[77] Fredrik D Johansson, Uri Shalit, and David Sontag. “Learning Representations for Counterfactual Inference”. In: arXiv preprint arXiv:1605.03661 (2016). [78] Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. “Planning and acting in partially observable stochastic domains”. In: Artificial intelligence 101.1 (1998), pp. 99–134. [79] Lukasz Kaiser and Ilya Sutskever. “Neural GPUs learn algorithms”. In: arXiv preprint arXiv:1511.08228 (2015). [80] Yoshinobu Kawahara and Masashi Sugiyama. “Change-Point Detection in Time-Series Data by Direct Density-Ratio Estimation.” In: SDM. Vol. 9. SIAM. 2009, pp. 389–400. [81] F. Khani, M. Rinard, and P. Liang. “Unanimous Prediction for 100% Precision with Application to Learning Semantic Parsers”. In: Association for Computational Linguistics (ACL). 2016.
1606.06565#97
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
98
[82] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. “Imagenet classification with deep convolutional neural networks”. In: Advances in neural information processing systems. 2012, pp. 1097–1105. [83] Volodymyr Kuleshov and Percy S Liang. “Calibrated Structured Prediction”. In: Advances in Neural Information Processing Systems. 2015, pp. 3456–3464. [84] Tejas D Kulkarni et al. “Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation”. In: arXiv preprint arXiv:1604.06057 (2016). [85] Neil Lawrence. Discussion of ’Superintelligence: Paths, Dangers, Strategies’. 2016. [86] Jesse Levinson et al. “Towards fully autonomous driving: Systems and algorithms”. In: Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE. 2011, pp. 163–168. [87] Lihong Li et al. “Knows what it knows: a framework for self-aware learning”. In: Machine learning 82.3 (2011), pp. 399–443.
1606.06565#98
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
99
[88] Yu-Feng Li and Zhi-Hua Zhou. “Towards making unlabeled data never hurt”. In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 37.1 (2015), pp. 175–188. [89] Percy Liang. “On the Elusiveness of a Specification for AI”. NIPS 2015, Symposium: Algorithms Among Us. 2015. url: http://research.microsoft.com/apps/video/default.aspx?id=260009&r=1. [90] Percy Liang and Dan Klein. “Analyzing the Errors of Unsupervised Learning.” In: ACL. 2008, pp. 879–887. [91] Song Liu et al. “Change-point detection in time-series data by relative density-ratio estimation”. In: Neural Networks 43 (2013), pp. 72–83. [92] Sarah M Loos, David Renshaw, and André Platzer. “Formal verification of distributed aircraft controllers”. In: Proceedings of the 16th international conference on Hybrid systems: computation and control. ACM. 2013, pp. 125–130.
1606.06565#99
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
100
[93] John Lygeros, Claire Tomlin, and Shankar Sastry. “Controllers for reachability specifications for hybrid systems”. In: Automatica 35.3 (1999), pp. 349–370. [94] Gideon S Mann and Andrew McCallum. “Generalized expectation criteria for semi-supervised learning with weakly labeled data”. In: The Journal of Machine Learning Research 11 (2010), pp. 955–984. [95] John McCarthy and Patrick J Hayes. “Some philosophical problems from the standpoint of artificial intelligence”. In: Readings in artificial intelligence (1969), pp. 431–450. [96] Shike Mei and Xiaojin Zhu. “The Security of Latent Dirichlet Allocation.” In: AISTATS. 2015. [97] Shike Mei and Xiaojin Zhu. “Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners.” In: AAAI. 2015, pp. 2871–2877. [98] Bernard Merialdo. “Tagging English text with a probabilistic model”. In: Computational linguistics 20.2 (1994), pp. 155–171.
1606.06565#100
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
101
[99] Mike Mintz et al. “Distant supervision for relation extraction without labeled data”. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics. 2009, pp. 1003–1011. [100] Ian M Mitchell, Alexandre M Bayen, and Claire J Tomlin. “A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games”. In: Automatic Control, IEEE Transactions on 50.7 (2005), pp. 947–957. [101] Stefan Mitsch, Sarah M Loos, and André Platzer. “Towards formal verification of freeway traffic control”. In: Cyber-Physical Systems (ICCPS), 2012 IEEE/ACM Third International Conference on. IEEE. 2012, pp. 171–180. [102] Volodymyr Mnih et al. “Human-level control through deep reinforcement learning”. In: Nature 518.7540 (2015), pp. 529–533.
1606.06565#101
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06565
102
[103] Shakir Mohamed and Danilo Jimenez Rezende. “Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning”. In: Advances in Neural Information Processing Systems. 2015, pp. 2116–2124. [104] Teodor Mihai Moldovan and Pieter Abbeel. “Safe exploration in markov decision processes”. In: arXiv preprint arXiv:1205.4810 (2012). [105] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. “Inceptionism: Going deeper into neural networks”. In: Google Research Blog. Retrieved June 20 (2015). [106] Jerzy Neyman. “Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes”. In: Roczniki Nauk Rolniczych 10 (1923), pp. 1–51. [107] Andrew Y Ng, Stuart J Russell, et al. “Algorithms for inverse reinforcement learning.” In: Icml. 2000, pp. 663–670.
[108] Anh Nguyen, Jason Yosinski, and Jeff Clune. “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images”. In: Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE. 2015, pp. 427–436.
[109] Anh Nguyen et al. “Synthesizing the preferred inputs for neurons in neural networks via deep generator networks”. In: arXiv preprint arXiv:1605.09304 (2016).
[110] Kamal Nigam et al. “Learning to classify text from labeled and unlabeled documents”. In: AAAI/IAAI 792 (1998).
[111] Arnab Nilim and Laurent El Ghaoui. “Robust control of Markov decision processes with uncertain transition matrices”. In: Operations Research 53.5 (2005), pp. 780–798.
[112] Christopher Olah. Visualizing Representations: Deep Learning and Human Beings. 2015. URL: http://colah.github.io/posts/2015-01-Visualizing-Representations/.
[113] Laurent Orseau and Stuart Armstrong. “Safely Interruptible Agents”. In: (2016).
[114] Ian Osband et al. “Deep Exploration via Bootstrapped DQN”. In: arXiv preprint arXiv:1602.04621 (2016).
[115] Nicolas Papernot et al. “Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples”. In: arXiv preprint arXiv:1602.02697 (2016).
[116] Douglas B Paul and Janet M Baker. “The design for the Wall Street Journal-based CSR corpus”. In: Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics. 1992, pp. 357–362.
[117] Judea Pearl et al. “Causal inference in statistics: An overview”. In: Statistics Surveys 3 (2009), pp. 96–146.
[118] Martin Pecka and Tomas Svoboda. “Safe exploration techniques for reinforcement learning – an overview”. In: Modelling and Simulation for Autonomous Systems. Springer, 2014, pp. 357–375.
[119] Dino Pedreschi, Salvatore Ruggieri, and Franco Turini. “Discrimination-aware data mining”. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 2008, pp. 560–568.
[120] Jonas Peters et al. “Causal discovery with continuous additive noise models”. In: The Journal of Machine Learning Research 15.1 (2014), pp. 2009–2053.
[121] Emmanouil Antonios Platanios. “Estimating accuracy from unlabeled data”. MA thesis. Carnegie Mellon University, 2015.
[122] Emmanouil Antonios Platanios, Avrim Blum, and Tom Mitchell. “Estimating accuracy from unlabeled data”. In: (2014).
[123] Walter W Powell and Laurel Smith-Doerr. “Networks and economic life”. In: The Handbook of Economic Sociology 368 (1994), p. 380.
[124] Joaquin Quiñonero-Candela et al. Dataset Shift in Machine Learning. Ser. Neural Information Processing Series. 2009.
[125] Rajat Raina et al. “Self-taught learning: transfer learning from unlabeled data”. In: Proceedings of the 24th International Conference on Machine Learning. ACM. 2007, pp. 759–766.
[126] Bharath Ramsundar et al. “Massively multitask networks for drug discovery”. In: arXiv preprint arXiv:1502.02072 (2015).
[127] Mark Ring and Laurent Orseau. “Delusion, survival, and intelligent agents”. In: Artificial General Intelligence. Springer, 2011, pp. 11–20.
[128] Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. “A reduction of imitation learning and structured prediction to no-regret online learning”. In: arXiv preprint arXiv:1011.0686 (2010).
[129] Donald B Rubin. “Estimating causal effects of treatments in randomized and nonrandomized studies”. In: Journal of Educational Psychology 66.5 (1974), p. 688.
[130] Stuart Russell et al. “Research priorities for robust and beneficial artificial intelligence”. In: Future of Life Institute (2015).
[131] Christoph Salge, Cornelius Glackin, and Daniel Polani. “Empowerment – an introduction”. In: Guided Self-Organization: Inception. Springer, 2014, pp. 67–114.
[132] J Denis Sargan. “The estimation of relationships with autocorrelated residuals by the use of instrumental variables”. In: Journal of the Royal Statistical Society. Series B (Methodological) (1959), pp. 91–105.
[133] John D Sargan. “The estimation of economic relationships using instrumental variables”. In: Econometrica: Journal of the Econometric Society (1958), pp. 393–415.
[134] John Schulman et al. “High-dimensional continuous control using generalized advantage estimation”. In: arXiv preprint arXiv:1506.02438 (2015).
[135] D Sculley et al. “Machine Learning: The High-Interest Credit Card of Technical Debt”. In: (2014).
[136] Glenn Shafer and Vladimir Vovk. “A tutorial on conformal prediction”. In: The Journal of Machine Learning Research 9 (2008), pp. 371–421.
[137] Uri Shalit, Fredrik Johansson, and David Sontag. “Bounding and Minimizing Counterfactual Error”. In: arXiv preprint arXiv:1606.03976 (2016).
[138] Hidetoshi Shimodaira. “Improving predictive inference under covariate shift by weighting the log-likelihood function”. In: Journal of Statistical Planning and Inference 90.2 (2000), pp. 227–244.
[139] Jaeho Shin et al. “Incremental knowledge base construction using DeepDive”. In: Proceedings of the VLDB Endowment 8.11 (2015), pp. 1310–1321.
[140] David Silver et al. “Mastering the game of Go with deep neural networks and tree search”. In: Nature 529.7587 (2016), pp. 484–489.
[141] SNES Super Mario World (USA) “arbitrary code execution”. Tool-assisted movies. 2014. URL: http://tasvideos.org/2513M.html.
[142] Nate Soares and Benja Fallenstein. “Toward idealized decision theory”. In: arXiv preprint arXiv:1507.01986 (2015).
[143] Ray J Solomonoff. “A formal theory of inductive inference. Part I”. In: Information and Control 7.1 (1964), pp. 1–22.
[144] Ray J Solomonoff. “A formal theory of inductive inference. Part II”. In: Information and Control 7.2 (1964), pp. 224–254.
[145] J Steinebach. “E. L. Lehmann, J. P. Romano: Testing statistical hypotheses”. In: Metrika 64.2 (2006), pp. 255–256.
[146] Jacob Steinhardt. Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. [Online; accessed 13-June-2016]. 2015. URL: https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/.
[147] Jacob Steinhardt and Percy Liang. “Unsupervised Risk Estimation with only Structural Assumptions”. In: (2016).
[148] Jacob Steinhardt and Russ Tedrake. “Finite-time regional verification of stochastic non-linear systems”. In: The International Journal of Robotics Research 31.7 (2012), pp. 901–923.
[149] Jacob Steinhardt, Gregory Valiant, and Moses Charikar. “Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction”. In: arXiv preprint arXiv:1606.05374 (2016). URL: http://arxiv.org/abs/1606.05374.
[150] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[151] Adith Swaminathan and Thorsten Joachims. “Counterfactual risk minimization: Learning from logged bandit feedback”. In: arXiv preprint arXiv:1502.02362 (2015).
[152] Christian Szegedy et al. “Intriguing properties of neural networks”. In: arXiv preprint arXiv:1312.6199 (2013).
[153] Aviv Tamar, Yonatan Glassner, and Shie Mannor. “Policy gradients beyond expectations: Conditional value-at-risk”. In: arXiv preprint arXiv:1404.3862 (2014).
[154] Jessica Taylor. “Quantilizers: A Safer Alternative to Maximizers for Limited Optimization”. Submitted to AAAI (2016, forthcoming).
[155] Matthew E Taylor and Peter Stone. “Transfer learning for reinforcement learning domains: A survey”. In: Journal of Machine Learning Research 10.Jul (2009), pp. 1633–1685.
[156] Philip S Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. “High-Confidence Off-Policy Evaluation”. In: AAAI. 2015, pp. 3000–3006.
[157] Adrian Thompson. Artificial Evolution in the Physical World. 1997.
[158] Antonio Torralba and Alexei A Efros. “Unbiased look at dataset bias”. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE. 2011, pp. 1521–1528.
[159] Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. “Safe Exploration in Finite Markov Decision Processes with Gaussian Processes”. In: arXiv preprint arXiv:1606.04753 (2016).
[160] Stefan Wager and Susan Athey. “Estimation and Inference of Heterogeneous Treatment Effects using Random Forests”. In: arXiv preprint arXiv:1510.04342 (2015).
[161] Daniel Weld and Oren Etzioni. “The first law of robotics (a call to arms)”. In: AAAI. Vol. 94. 1994, pp. 1042–1047.
[162] Keenon Werling et al. “On-the-job learning with Bayesian decision theory”. In: Advances in Neural Information Processing Systems. 2015, pp. 3447–3455.
[163] Jason Weston et al. “Towards AI-complete question answering: A set of prerequisite toy tasks”. In: arXiv preprint arXiv:1502.05698 (2015).